modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-uk-cs | 14e3fd5d67d28b3f6120187ea59a757ff6aff481 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"cs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-cs | 4 | null | transformers | 17,900 | ---
language:
- uk
- cs
tags:
- translation
license: apache-2.0
---
### ukr-ces
* source group: Ukrainian
* target group: Czech
* OPUS readme: [ukr-ces](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ces/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): ces
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.ces | 52.0 | 0.686 |
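A minimal inference sketch for this checkpoint (not part of the original card; the example sentence is illustrative):
```py
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-uk-cs"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Encode a Ukrainian sentence and generate its Czech translation
batch = tokenizer(["Я люблю читати книги."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```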
### System Info:
- hf_name: ukr-ces
- source_languages: ukr
- target_languages: ces
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ces/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'cs']
- src_constituents: {'ukr'}
- tgt_constituents: {'ces'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: ces
- short_pair: uk-cs
- chrF2_score: 0.6859999999999999
- bleu: 52.0
- brevity_penalty: 0.993
- ref_len: 8550.0
- src_name: Ukrainian
- tgt_name: Czech
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: cs
- prefer_old: False
- long_pair: ukr-ces
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-uk-it | e8acd72aa6483a93662be04b9a2b57b06fb6f0f5 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"it",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-it | 4 | null | transformers | 17,901 | ---
language:
- uk
- it
tags:
- translation
license: apache-2.0
---
### ukr-ita
* source group: Ukrainian
* target group: Italian
* OPUS readme: [ukr-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ita/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.ita | 46.0 | 0.662 |
### System Info:
- hf_name: ukr-ita
- source_languages: ukr
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'it']
- src_constituents: {'ukr'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: ita
- short_pair: uk-it
- chrF2_score: 0.662
- bleu: 46.0
- brevity_penalty: 0.9490000000000001
- ref_len: 27846.0
- src_name: Ukrainian
- tgt_name: Italian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: it
- prefer_old: False
- long_pair: ukr-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-uk-sh | 39812bd6b61825901cf080bf72d8ed38a85ccc30 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"sh",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-sh | 4 | null | transformers | 17,902 | ---
language:
- uk
- sh
tags:
- translation
license: apache-2.0
---
### ukr-hbs
* source group: Ukrainian
* target group: Serbo-Croatian
* OPUS readme: [ukr-hbs](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-hbs/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): hrv srp_Cyrl srp_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.hbs | 42.8 | 0.631 |
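Because this checkpoint has several target variants (hrv, srp_Cyrl, srp_Latn), the sentence-initial `>>id<<` token noted above must be prepended to the source text. A hedged sketch of how that could look (token choice and example sentence are illustrative):
```py
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-uk-sh"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# >>hrv<< selects Croatian output; >>srp_Cyrl<< or >>srp_Latn<< would select Serbian instead
src = ">>hrv<< Я люблю читати книги."
batch = tokenizer([src], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```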
### System Info:
- hf_name: ukr-hbs
- source_languages: ukr
- target_languages: hbs
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-hbs/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'sh']
- src_constituents: {'ukr'}
- tgt_constituents: {'hrv', 'srp_Cyrl', 'bos_Latn', 'srp_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: hbs
- short_pair: uk-sh
- chrF2_score: 0.631
- bleu: 42.8
- brevity_penalty: 0.96
- ref_len: 5128.0
- src_name: Ukrainian
- tgt_name: Serbo-Croatian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: sh
- prefer_old: False
- long_pair: ukr-hbs
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-vsl-es | 824012028f3564c3412baff60ac8a0b00837c3a2 | 2021-09-11T10:51:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"vsl",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-vsl-es | 4 | null | transformers | 17,903 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-vsl-es
* source languages: vsl
* target languages: es
* OPUS readme: [vsl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/vsl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/vsl-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/vsl-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/vsl-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.vsl.es | 91.9 | 0.944 |
|
Helsinki-NLP/opus-mt-war-fi | 7e6df2553403fbdc55bfbcb4955223dbeac0b792 | 2021-09-11T10:51:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"war",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-war-fi | 4 | null | transformers | 17,904 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-war-fi
* source languages: war
* target languages: fi
* OPUS readme: [war-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/war-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/war-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.war.fi | 26.9 | 0.507 |
|
Helsinki-NLP/opus-mt-zle-zle | 456d0a26de8553aed16380883b032a5391f10a31 | 2020-08-21T14:42:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"ru",
"uk",
"zle",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zle-zle | 4 | null | transformers | 17,905 | ---
language:
- be
- ru
- uk
- zle
tags:
- translation
license: apache-2.0
---
### zle-zle
* source group: East Slavic languages
* target group: East Slavic languages
* OPUS readme: [zle-zle](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-zle/README.md)
* model: transformer
* source language(s): bel bel_Latn orv_Cyrl rus ukr
* target language(s): bel bel_Latn orv_Cyrl rus ukr
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bel-rus.bel.rus | 57.1 | 0.758 |
| Tatoeba-test.bel-ukr.bel.ukr | 55.5 | 0.751 |
| Tatoeba-test.multi.multi | 58.0 | 0.742 |
| Tatoeba-test.orv-rus.orv.rus | 5.8 | 0.226 |
| Tatoeba-test.orv-ukr.orv.ukr | 2.5 | 0.161 |
| Tatoeba-test.rus-bel.rus.bel | 50.5 | 0.714 |
| Tatoeba-test.rus-orv.rus.orv | 0.3 | 0.129 |
| Tatoeba-test.rus-ukr.rus.ukr | 63.9 | 0.794 |
| Tatoeba-test.ukr-bel.ukr.bel | 51.3 | 0.719 |
| Tatoeba-test.ukr-orv.ukr.orv | 0.3 | 0.106 |
| Tatoeba-test.ukr-rus.ukr.rus | 68.7 | 0.825 |
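Since this is a many-to-many model, a single checkpoint serves every listed direction and the target is chosen with the sentence-initial `>>id<<` token described above. A hedged sketch using the generic `translation` pipeline (example sentences and token choices are illustrative):
```py
from transformers import pipeline

# Load once and switch target languages by changing the >>id<< prefix
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-zle-zle")

print(translate(">>ukr<< Я люблю читать книги.")[0]["translation_text"])  # Russian -> Ukrainian
print(translate(">>bel<< Я люблю читать книги.")[0]["translation_text"])  # Russian -> Belarusian
```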
### System Info:
- hf_name: zle-zle
- source_languages: zle
- target_languages: zle
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-zle/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'ru', 'uk', 'zle']
- src_constituents: {'bel', 'orv_Cyrl', 'bel_Latn', 'rus', 'ukr', 'rue'}
- tgt_constituents: {'bel', 'orv_Cyrl', 'bel_Latn', 'rus', 'ukr', 'rue'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.test.txt
- src_alpha3: zle
- tgt_alpha3: zle
- short_pair: zle-zle
- chrF2_score: 0.742
- bleu: 58.0
- brevity_penalty: 1.0
- ref_len: 62731.0
- src_name: East Slavic languages
- tgt_name: East Slavic languages
- train_date: 2020-07-27
- src_alpha2: zle
- tgt_alpha2: zle
- prefer_old: False
- long_pair: zle-zle
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-zls-en | b1bf0b6fad1277b30ab93c90cd884122990ba283 | 2020-08-21T14:42:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"hr",
"mk",
"bg",
"sl",
"zls",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zls-en | 4 | null | transformers | 17,906 | ---
language:
- hr
- mk
- bg
- sl
- zls
- en
tags:
- translation
license: apache-2.0
---
### zls-eng
* source group: South Slavic languages
* target group: English
* OPUS readme: [zls-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-eng/README.md)
* model: transformer
* source language(s): bos_Latn bul bul_Latn hrv mkd slv srp_Cyrl srp_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul-eng.bul.eng | 54.9 | 0.693 |
| Tatoeba-test.hbs-eng.hbs.eng | 55.7 | 0.700 |
| Tatoeba-test.mkd-eng.mkd.eng | 54.6 | 0.681 |
| Tatoeba-test.multi.eng | 53.6 | 0.676 |
| Tatoeba-test.slv-eng.slv.eng | 25.6 | 0.407 |
### System Info:
- hf_name: zls-eng
- source_languages: zls
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['hr', 'mk', 'bg', 'sl', 'zls', 'en']
- src_constituents: {'hrv', 'mkd', 'srp_Latn', 'srp_Cyrl', 'bul_Latn', 'bul', 'bos_Latn', 'slv'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.test.txt
- src_alpha3: zls
- tgt_alpha3: eng
- short_pair: zls-en
- chrF2_score: 0.6759999999999999
- bleu: 53.6
- brevity_penalty: 0.98
- ref_len: 68623.0
- src_name: South Slavic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: zls
- tgt_alpha2: en
- prefer_old: False
- long_pair: zls-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-zlw-zlw | 1206a7ac864845daec84450b1af7539c8f50728f | 2020-08-21T14:42:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pl",
"cs",
"zlw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zlw-zlw | 4 | null | transformers | 17,907 | ---
language:
- pl
- cs
- zlw
tags:
- translation
license: apache-2.0
---
### zlw-zlw
* source group: West Slavic languages
* target group: West Slavic languages
* OPUS readme: [zlw-zlw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zlw-zlw/README.md)
* model: transformer
* source language(s): ces dsb hsb pol
* target language(s): ces dsb hsb pol
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ces-hsb.ces.hsb | 2.6 | 0.167 |
| Tatoeba-test.ces-pol.ces.pol | 44.0 | 0.649 |
| Tatoeba-test.dsb-pol.dsb.pol | 8.5 | 0.250 |
| Tatoeba-test.hsb-ces.hsb.ces | 9.6 | 0.276 |
| Tatoeba-test.multi.multi | 38.8 | 0.580 |
| Tatoeba-test.pol-ces.pol.ces | 43.4 | 0.620 |
| Tatoeba-test.pol-dsb.pol.dsb | 2.1 | 0.159 |
### System Info:
- hf_name: zlw-zlw
- source_languages: zlw
- target_languages: zlw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zlw-zlw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'cs', 'zlw']
- src_constituents: {'csb_Latn', 'dsb', 'hsb', 'pol', 'ces'}
- tgt_constituents: {'csb_Latn', 'dsb', 'hsb', 'pol', 'ces'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.test.txt
- src_alpha3: zlw
- tgt_alpha3: zlw
- short_pair: zlw-zlw
- chrF2_score: 0.58
- bleu: 38.8
- brevity_penalty: 0.99
- ref_len: 7792.0
- src_name: West Slavic languages
- tgt_name: West Slavic languages
- train_date: 2020-07-27
- src_alpha2: zlw
- tgt_alpha2: zlw
- prefer_old: False
- long_pair: zlw-zlw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-zne-es | 60f3fb6d2190c11bc0e4de2e54db15778459b952 | 2021-09-11T10:53:07.000Z | [
"pytorch",
"marian",
"text2text-generation",
"zne",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zne-es | 4 | null | transformers | 17,908 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-zne-es
* source languages: zne
* target languages: es
* OPUS readme: [zne-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zne-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zne-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zne.es | 21.1 | 0.382 |
|
Helsinki-NLP/opus-tatoeba-de-ro | 052c3193024b2ac0a6885b3c58b84f2ad0cade71 | 2021-11-08T14:45:36.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"ro",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-tatoeba-de-ro | 4 | null | transformers | 17,909 | ---
language:
- de
- ro
tags:
- translation
license: apache-2.0
---
### de-ro
* source group: German
* target group: Romanian
* OPUS readme: [deu-ron](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-ron/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): mol ron
* raw source language(s): deu
* raw target language(s): mol ron
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* valid language labels: >>mol<< >>ron<<
* download original weights: [opusTCv20210807-2021-10-22.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.zip)
* test set translations: [opusTCv20210807-2021-10-22.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.test.txt)
* test set scores: [opusTCv20210807-2021-10-22.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| Tatoeba-test-v2021-08-07.deu-ron | 42.0 | 0.636 | 1141 | 7432 | 0.976 |
### System Info:
- hf_name: de-ro
- source_languages: deu
- target_languages: ron
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-ron/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'ro']
- src_constituents: ('German', {'deu'})
- tgt_constituents: ('Romanian', {'ron'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: deu-ron
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.test.txt
- src_alpha3: deu
- tgt_alpha3: ron
- chrF2_score: 0.636
- bleu: 42.0
- src_name: German
- tgt_name: Romanian
- train_date: 2021-10-22 00:00:00
- src_alpha2: de
- tgt_alpha2: ro
- prefer_old: False
- short_pair: de-ro
- helsinki_git_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002
- transformers_git_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e
- port_machine: LM0-400-22516.local
- port_time: 2021-11-08-16:45 |
Hormigo/roberta-base-bne-finetuned-amazon_reviews_multi | 3a4dcddcadff337f6e080c97bc3b193098eca04e | 2021-08-30T11:08:21.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0"
] | text-classification | false | Hormigo | null | Hormigo/roberta-base-bne-finetuned-amazon_reviews_multi | 4 | null | transformers | 17,910 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model_index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metric:
name: Accuracy
type: accuracy
value: 0.9335
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2275
- Accuracy: 0.9335
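A minimal inference sketch (not part of the auto-generated card; the Spanish review is illustrative and the returned labels follow whatever label set was used during fine-tuning):
```py
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hormigo/roberta-base-bne-finetuned-amazon_reviews_multi",
)

# Classify a Spanish product review
print(classifier("El producto llegó roto y el vendedor no respondió."))
```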
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1909 | 1.0 | 1250 | 0.1717 | 0.9333 |
| 0.0932 | 2.0 | 2500 | 0.2275 | 0.9335 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
Huffon/qnli | ada3b36bf4346b219c49f4445bb7e657db07588d | 2021-07-07T03:26:20.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | Huffon | null | Huffon/qnli | 4 | null | transformers | 17,911 | Entry not found |
HungChau/distilbert-base-cased-concept-extraction-iir-v1.0-concept-extraction-kp20k-v1.0 | e846e6cbb5d54c87a34f2bcd26450039ac5a2d90 | 2021-11-12T22:27:04.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-iir-v1.0-concept-extraction-kp20k-v1.0 | 4 | null | transformers | 17,912 | Entry not found |
HungChau/distilbert-base-cased-concept-extraction-iir-v1.0-concept-extraction-kp20k-v1.4 | 6d93ce0df14b26d2c2b679739ff20fda795eec2c | 2021-11-19T19:35:11.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-iir-v1.0-concept-extraction-kp20k-v1.4 | 4 | null | transformers | 17,913 | Entry not found |
HungChau/distilbert-base-cased-concept-extraction-iir-v1.0 | 8be02de48575667bb59da631af1952ac9f1afbd4 | 2021-11-12T16:42:21.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-iir-v1.0 | 4 | null | transformers | 17,914 | Entry not found |
HungChau/distilbert-base-cased-concept-extraction-iir-v1.2-concept-extraction-kp20k-v1.2 | 00ff4ed970c5e24760ec1f05c326a34d47c3d91b | 2021-11-18T12:38:47.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-iir-v1.2-concept-extraction-kp20k-v1.2 | 4 | null | transformers | 17,915 | Entry not found |
HungChau/distilbert-base-cased-concept-extraction-iir-v1.2-concept-extraction-kp20k-v1.5 | 4cc3fd91cc082af2a05ab1e3eb985ed21efd6588 | 2021-11-19T22:12:59.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-iir-v1.2-concept-extraction-kp20k-v1.5 | 4 | null | transformers | 17,916 | Entry not found |
HungChau/distilbert-base-cased-concept-extraction-iir-v1.2 | 957cf70d86bf14bf9c604efcb342141abfbc4327 | 2021-11-16T03:50:15.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-iir-v1.2 | 4 | null | transformers | 17,917 | Entry not found |
HungChau/distilbert-base-cased-concept-extraction-kp20k-v1.0 | ddd2ae335f0e8b601adb46885463b9ece790f7b9 | 2021-11-12T20:42:10.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-kp20k-v1.0 | 4 | null | transformers | 17,918 | Entry not found |
HungChau/distilbert-base-cased-concept-extraction-kp20k-v1.2-concept-extraction-allwikipedia-v1.0 | 4329c7829c6973e5933f598d6aaa43c68bdeecb3 | 2022-02-24T07:00:29.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-kp20k-v1.2-concept-extraction-allwikipedia-v1.0 | 4 | null | transformers | 17,919 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.0-concept-extraction-wikipedia-v1.3 | 273053009835771a582b6dacfb8dc41dbea915de | 2021-11-19T17:24:22.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.0-concept-extraction-wikipedia-v1.3 | 4 | null | transformers | 17,920 | Entry not found |
Ifromspace/GRIEFSOFT-walr | 636a58f1e32ae7d7e8f73639d650ebe3921c0d98 | 2022-01-15T13:07:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"ru",
"4ulan"
] | text-generation | false | Ifromspace | null | Ifromspace/GRIEFSOFT-walr | 4 | 1 | transformers | 17,921 | ---
tags:
- ru
- 4ulan
---
A fun little toy for the Discord server :)) https://discord.gg/HpeadKH
Offers: [email protected] |
Intel/bert-base-uncased-mnli-sparse-70-unstructured-no-classifier | 4426b158cee5ff33424b66771b5ab90d208fc138 | 2021-06-29T11:14:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Intel | null | Intel/bert-base-uncased-mnli-sparse-70-unstructured-no-classifier | 4 | null | transformers | 17,922 | ---
language: en
---
# Sparse BERT base model fine-tuned on MNLI without a classifier layer (uncased)
Sparse BERT base fine-tuned on the MNLI (GLUE benchmark) task, starting from [bert-base-uncased-sparse-70-unstructured](https://huggingface.co/Intel/bert-base-uncased-sparse-70-unstructured).
<br>
This model has no classifier layer, which makes it easier to load for fine-tuning on other downstream tasks.
In all the other layers this model is similar to [bert-base-uncased-mnli-sparse-70-unstructured](https://huggingface.co/Intel/bert-base-uncased-mnli-sparse-70-unstructured).
<br><br>
Note: This model requires `transformers==2.10.0`
## Evaluation Results
Matched: 82.5%
Mismatched: 83.3%
When further fine-tuned on other tasks, this model achieves the following evaluation results (a loading sketch follows the table):
| Task | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) | STS-B (Pears/Spear) | SQuADv1.1 (Acc/F1) |
|------|--------------|------------|-------------|---------------------|--------------------|
| | 90.2/86.7 | 90.3 | 91.5 | 88.9/88.6 | 80.5/88.2 |
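A hedged sketch of loading this checkpoint as a starting point for a downstream task (not from the original card; `num_labels` is illustrative, and the pinned `transformers` version noted above applies):
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Intel/bert-base-uncased-mnli-sparse-70-unstructured-no-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The classification head is newly initialized, since the checkpoint ships without one,
# and must be trained on the target task before it produces meaningful predictions.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
```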
|
Iskaj/xlsr300m_cv_7.0_nl_lm | ee65935d6390a13aa28bfacc2a04564dac6ba192 | 2022-03-24T11:54:57.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Iskaj | null | Iskaj/xlsr300m_cv_7.0_nl_lm | 4 | null | transformers | 17,923 | ---
language:
- nl
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- nl
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Dutch
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8 NL
type: mozilla-foundation/common_voice_8_0
args: nl
metrics:
- name: Test WER
type: wer
value: 32
- name: Test CER
type: cer
value: 17
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Test WER
type: wer
value: 37.44
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 38.74
---
# xlsr300m_cv_7.0_nl_lm |
ItcastAI/bert_cn_finetunning | 4830cb00612b2990a569d2487d5e4a42b0b3a7f2 | 2021-05-18T21:11:28.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ItcastAI | null | ItcastAI/bert_cn_finetunning | 4 | null | transformers | 17,924 | Entry not found |
JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4a-append-e2-b32-l5e5 | e4ab44fb69683226c45f06757d8d48b5b00a8521 | 2021-11-05T07:54:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | JazibEijaz | null | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4a-append-e2-b32-l5e5 | 4 | null | transformers | 17,925 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
name: bert-base-uncased-finetuned-semeval2020-task4a-append-e2-b32-l5e5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4a-append-e2-b32-l5e5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5466
- Accuracy: 0.8890
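A hedged sketch of multiple-choice inference with this checkpoint (not from the card; how the two candidate statements were paired and encoded during fine-tuning is not documented, so the input format below is an assumption):
```py
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_name = "JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4a-append-e2-b32-l5e5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)

# Two candidate statements, each encoded as a separate choice
choices = ["He put a turkey into the fridge.", "He put an elephant into the fridge."]
enc = tokenizer(choices, return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in enc.items()})  # logits shape: (1, num_choices)
print(int(outputs.logits.argmax(dim=-1)))  # index of the higher-scoring choice
```
Which semantics that index carries (the sensible statement or the nonsensical one) depends on the fine-tuning setup, which the card does not document.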
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 344 | 0.3057 | 0.8630 |
| 0.4091 | 2.0 | 688 | 0.2964 | 0.8880 |
| 0.1322 | 3.0 | 1032 | 0.4465 | 0.8820 |
| 0.1322 | 4.0 | 1376 | 0.5466 | 0.8890 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4b-base-e2-b32-l3e5 | b14256e80ffd87f518aa7f97184f087638a6b96f | 2021-11-05T09:00:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | JazibEijaz | null | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4b-base-e2-b32-l3e5 | 4 | null | transformers | 17,926 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
name: bert-base-uncased-finetuned-semeval2020-task4b-base-e2-b32-l3e5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4b-base-e2-b32-l3e5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4114
- Accuracy: 0.8700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 344 | 0.3773 | 0.8490 |
| 0.3812 | 2.0 | 688 | 0.4114 | 0.8700 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Jeevesh8/sMLM-RoBERTa | ee4738bf2bc305b83ae48e9c9d474936f3dd5054 | 2021-11-12T10:34:05.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jeevesh8 | null | Jeevesh8/sMLM-RoBERTa | 4 | null | transformers | 17,927 | Entry not found |
JerryQu/v2-distilgpt2 | 911dc712dec69573e745a725706b003f49fc6238 | 2021-05-21T10:52:22.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | JerryQu | null | JerryQu/v2-distilgpt2 | 4 | null | transformers | 17,928 | Entry not found |
Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialog02 | e83ff56b673b8db973dcfd4f0fef2e28f48077d2 | 2021-12-28T13:32:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeska | null | Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialog02 | 4 | null | transformers | 17,929 | Entry not found |
Jeska/autonlp-vaccinfaq-22144706 | 54721f462da666de7c651e219e9368a203d48971 | 2021-10-19T12:33:52.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:Jeska/autonlp-data-vaccinfaq",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | Jeska | null | Jeska/autonlp-vaccinfaq-22144706 | 4 | null | transformers | 17,930 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Jeska/autonlp-data-vaccinfaq
co2_eq_emissions: 27.135492487925884
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 22144706
- CO2 Emissions (in grams): 27.135492487925884
## Validation Metrics
- Loss: 1.81697416305542
- Accuracy: 0.6377269139700079
- Macro F1: 0.5181293370145044
- Micro F1: 0.6377269139700079
- Weighted F1: 0.631117826235572
- Macro Precision: 0.5371452512845428
- Micro Precision: 0.6377269139700079
- Weighted Precision: 0.6655055695465463
- Macro Recall: 0.5609328178925124
- Micro Recall: 0.6377269139700079
- Weighted Recall: 0.6377269139700079
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Jeska/autonlp-vaccinfaq-22144706
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Jeska/autonlp-vaccinfaq-22144706", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Jeska/autonlp-vaccinfaq-22144706", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
Jihyun22/bert-base-finetuned-nli | dcbed986fb107e51358a24f5bcc45bc22c3fde72 | 2021-10-26T11:07:39.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:klue",
"transformers",
"generated_from_trainer"
] | text-classification | false | Jihyun22 | null | Jihyun22/bert-base-finetuned-nli | 4 | 1 | transformers | 17,931 | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- accuracy
model_index:
- name: bert-base-finetuned-nli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: nli
metric:
name: Accuracy
type: accuracy
value: 0.756
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-nli
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1357
- Accuracy: 0.756
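A hedged inference sketch (not from the card; the premise and hypothesis are illustrative, and the label names are read from the checkpoint's config at runtime rather than assumed):
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Jihyun22/bert-base-finetuned-nli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Encode a premise/hypothesis pair and map the top logit back through the config's id2label
enc = tokenizer("나는 책 읽는 것을 좋아한다.", "나는 독서를 즐긴다.", return_tensors="pt")
pred = model(**enc).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```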
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 196 | 0.7357 | 0.156 |
| No log | 2.0 | 392 | 0.5952 | 0.0993 |
| 0.543 | 3.0 | 588 | 0.5630 | 0.099 |
| 0.543 | 4.0 | 784 | 0.5670 | 0.079 |
| 0.543 | 5.0 | 980 | 0.5795 | 0.078 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
Jitin/manglish | e97ba1aebca3deeafe24403ff0a6d93952ccc721 | 2021-05-20T11:57:45.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jitin | null | Jitin/manglish | 4 | null | transformers | 17,932 | Entry not found |
Josmar/BART_Finetuned_CNN_dailymail | bdd2b34325e3ce9f9db341006be5d23ac07ff316 | 2021-07-23T20:20:30.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Josmar | null | Josmar/BART_Finetuned_CNN_dailymail | 4 | null | transformers | 17,933 | # BART_Finetuned_CNN_dailymail
This repository contains a [bart-base](https://huggingface.co/facebook/bart-base) model fine-tuned on the [cnn_dailymail](https://huggingface.co/datasets/cnn_dailymail) dataset.
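A minimal summarization sketch for this checkpoint (not part of the original description; the input text is illustrative):
```py
from transformers import pipeline

summarizer = pipeline("summarization", model="Josmar/BART_Finetuned_CNN_dailymail")

article = (
    "The city council approved a new transit plan on Tuesday, adding three bus lines "
    "and extending night service. Officials said construction will begin next spring."
)
print(summarizer(article, max_length=40, min_length=5)[0]["summary_text"])
```
|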
JovenPai/bert_cn_finetunning | c74bdae8e02047fecba3adf0e7f11f08198bbe1a | 2021-05-18T21:15:39.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | JovenPai | null | JovenPai/bert_cn_finetunning | 4 | null | transformers | 17,934 | Entry not found |
KBLab/megatron-bert-base-swedish-cased-125k | 0943793bb9ded62c36f336c3b2e82ecc3e7dcaf9 | 2022-03-17T11:11:25.000Z | [
"pytorch",
"megatron-bert",
"fill-mask",
"sv",
"transformers",
"autotrain_compatible"
] | fill-mask | false | KBLab | null | KBLab/megatron-bert-base-swedish-cased-125k | 4 | null | transformers | 17,935 | ---
language:
- sv
---
# Megatron-BERT-base Swedish 125k
This BERT model was trained using the Megatron-LM library.
The model is a regular BERT-base with 110M parameters.
The model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.
Training was done for 125k training steps. Its [sister model](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-600k) used the same setup, but was instead trained for 600k steps.
The model has three sister models trained on the same dataset:
- [🤗 BERT Swedish](https://huggingface.co/KBLab/bert-base-swedish-cased-new)
- [Megatron-BERT-base-600k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-600k)
- [Megatron-BERT-large-110k](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-110k)
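A minimal fill-mask sketch (not part of the original card; the Swedish sentence is illustrative and the standard BERT `[MASK]` token is assumed):
```py
from transformers import pipeline

unmasker = pipeline("fill-mask", model="KBLab/megatron-bert-base-swedish-cased-125k")

# Predict the masked word in a Swedish sentence
for prediction in unmasker("Huvudstaden i Sverige är [MASK]."):
    print(prediction["token_str"], prediction["score"])
```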
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium (https://www.hpc-rivr.si) and EuroHPC JU (https://eurohpc-ju.europa.eu) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (https://www.izum.si). |
Kao/samyarn-bert-base-multilingual-cased | 32a56d81aab247a73aa3ad2e0d8c6ad5d85c46d9 | 2021-07-09T08:55:28.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Kao | null | Kao/samyarn-bert-base-multilingual-cased | 4 | null | transformers | 17,936 | samyarn-bert-base-multilingual-cased
kao |
Katsiaryna/distilbert-base-uncased-finetuned | cbab443ab4b0161f47ed12721c12ca65409672a9 | 2021-12-09T00:20:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Katsiaryna | null | Katsiaryna/distilbert-base-uncased-finetuned | 4 | null | transformers | 17,937 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8229
- Accuracy: 0.54
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.7709 | 0.74 |
| No log | 2.0 | 14 | 0.7048 | 0.72 |
| No log | 3.0 | 21 | 0.8728 | 0.46 |
| No log | 4.0 | 28 | 0.7849 | 0.64 |
| No log | 5.0 | 35 | 0.8229 | 0.54 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Katsiaryna/qnli-electra-base-finetuned_9th_auc_ce | 355b5e946798baa4969a552d2b192ba5b851e3ba | 2021-12-10T11:38:21.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"transformers"
] | text-classification | false | Katsiaryna | null | Katsiaryna/qnli-electra-base-finetuned_9th_auc_ce | 4 | null | transformers | 17,938 | Entry not found |
Katsiaryna/qnli-electra-base-finetuned_9th_auc_ce_diff | a0e376be3e28c24b3bdfc43a59666498e9981880 | 2021-12-10T15:16:29.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"transformers"
] | text-classification | false | Katsiaryna | null | Katsiaryna/qnli-electra-base-finetuned_9th_auc_ce_diff | 4 | null | transformers | 17,939 | Entry not found |
Katsiaryna/qnli-electra-base-finetuned_auc | 3fd219454795371abea08a81648d09d18aa189ca | 2021-12-13T11:10:18.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"transformers"
] | text-classification | false | Katsiaryna | null | Katsiaryna/qnli-electra-base-finetuned_auc | 4 | null | transformers | 17,940 | Entry not found |
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc | 0990f41de0bd02f67e6c189e0dcf62ead59dcac1 | 2021-12-14T22:20:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Katsiaryna | null | Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc | 4 | null | transformers | 17,941 | Entry not found |
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top1 | 06d46114c353e18c331ff452e6882417d4a7dcbc | 2021-12-15T19:33:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Katsiaryna | null | Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top1 | 4 | null | transformers | 17,942 | Entry not found |
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top3 | 11be4328009cf1189f24096f1017cb3d408552e9 | 2021-12-15T21:21:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Katsiaryna | null | Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top3 | 4 | null | transformers | 17,943 | Entry not found |
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_161221-top3 | 603407a4c51095a6d0c5d7baf72a97f235f50bed | 2021-12-16T14:20:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Katsiaryna | null | Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_161221-top3 | 4 | null | transformers | 17,944 | Entry not found |
Katsiaryna/stsb-distilroberta-base-finetuned_9th_auc_ce | 597c1036d6ad28c575c3c7f737d76007f5f67b16 | 2021-12-09T21:54:03.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | Katsiaryna | null | Katsiaryna/stsb-distilroberta-base-finetuned_9th_auc_ce | 4 | null | transformers | 17,945 | Entry not found |
Kien/distilbert-base-uncased-finetuned-cola | 2199afa156ab7d6891f423feadfa1b2a982ced53 | 2022-01-07T15:00:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Kien | null | Kien/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 17,946 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5232819075279987
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5327
- Matthews Correlation: 0.5233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5314 | 1.0 | 535 | 0.4955 | 0.4270 |
| 0.3545 | 2.0 | 1070 | 0.5327 | 0.5233 |
| 0.2418 | 3.0 | 1605 | 0.6180 | 0.5132 |
| 0.1722 | 4.0 | 2140 | 0.7344 | 0.5158 |
| 0.1243 | 5.0 | 2675 | 0.8581 | 0.5196 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
KoichiYasuoka/SuPar-Kanbun | 3b0aa2e3abd9a55ad363fef4a4fb452c7b5e6e84 | 2022-02-03T09:27:39.000Z | [
"pytorch",
"roberta",
"token-classification",
"lzh",
"dataset:universal_dependencies",
"transformers",
"classical chinese",
"literary chinese",
"ancient chinese",
"pos",
"license:mit",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/SuPar-Kanbun | 4 | null | transformers | 17,947 | ---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "token-classification"
- "pos"
datasets:
- "universal_dependencies"
license: "mit"
pipeline_tag: "token-classification"
widget:
- text: "不入虎穴不得虎子"
---
[suparkanbun on PyPI](https://pypi.org/project/suparkanbun/)
# SuPar-Kanbun
Tokenizer, POS-Tagger and Dependency-Parser for Classical Chinese Texts (漢文/文言文) with [spaCy](https://spacy.io), [Transformers](https://huggingface.co/transformers/) and [SuPar](https://github.com/yzhangcs/parser).
## Basic usage
```py
>>> import suparkanbun
>>> nlp=suparkanbun.load()
>>> doc=nlp("不入虎穴不得虎子")
>>> print(type(doc))
<class 'spacy.tokens.doc.Doc'>
>>> print(suparkanbun.to_conllu(doc))
# text = 不入虎穴不得虎子
1 不 不 ADV v,副詞,否定,無界 Polarity=Neg 2 advmod _ Gloss=not|SpaceAfter=No
2 入 入 VERB v,動詞,行為,移動 _ 0 root _ Gloss=enter|SpaceAfter=No
3 虎 虎 NOUN n,名詞,主体,動物 _ 4 nmod _ Gloss=tiger|SpaceAfter=No
4 穴 穴 NOUN n,名詞,固定物,地形 Case=Loc 2 obj _ Gloss=cave|SpaceAfter=No
5 不 不 ADV v,副詞,否定,無界 Polarity=Neg 6 advmod _ Gloss=not|SpaceAfter=No
6 得 得 VERB v,動詞,行為,得失 _ 2 parataxis _ Gloss=get|SpaceAfter=No
7 虎 虎 NOUN n,名詞,主体,動物 _ 8 nmod _ Gloss=tiger|SpaceAfter=No
8 子 子 NOUN n,名詞,人,関係 _ 6 obj _ Gloss=child|SpaceAfter=No
>>> import deplacy
>>> deplacy.render(doc)
不 ADV <════╗ advmod
入 VERB ═══╗═╝═╗ ROOT
虎 NOUN <╗ ║ ║ nmod
穴 NOUN ═╝<╝ ║ obj
不 ADV <════╗ ║ advmod
得 VERB ═══╗═╝<╝ parataxis
虎 NOUN <╗ ║ nmod
子 NOUN ═╝<╝ obj
```
`suparkanbun.load()` takes two options, with defaults `suparkanbun.load(BERT="roberta-classical-chinese-base-char",Danku=False)`. With `Danku=True` the pipeline tries to segment sentences automatically. Available `BERT` options are:
* `BERT="roberta-classical-chinese-base-char"` utilizes [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char) (default)
* `BERT="roberta-classical-chinese-large-char"` utilizes [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char)
* `BERT="guwenbert-base"` utilizes [GuwenBERT-base](https://huggingface.co/ethanyt/guwenbert-base)
* `BERT="guwenbert-large"` utilizes [GuwenBERT-large](https://huggingface.co/ethanyt/guwenbert-large)
* `BERT="sikubert"` utilizes [SikuBERT](https://huggingface.co/SIKU-BERT/sikubert)
* `BERT="sikuroberta"` utilizes [SikuRoBERTa](https://huggingface.co/SIKU-BERT/sikuroberta)
## Installation for Linux
```sh
pip3 install suparkanbun --user
```
## Installation for Cygwin64
Make sure to get `python37-devel` `python37-pip` `python37-cython` `python37-numpy` `python37-wheel` `gcc-g++` `mingw64-x86_64-gcc-g++` `git` `curl` `make` `cmake` packages, and then:
```sh
curl -L https://raw.githubusercontent.com/KoichiYasuoka/CygTorch/master/installer/supar.sh | sh
pip3.7 install suparkanbun --no-build-isolation
```
## Installation for Jupyter Notebook (Google Colaboratory)
```py
!pip install suparkanbun
```
Try [notebook](https://colab.research.google.com/github/KoichiYasuoka/SuPar-Kanbun/blob/main/suparkanbun.ipynb) for Google Colaboratory.
## Author
Koichi Yasuoka (安岡孝一)
|
KoichiYasuoka/roberta-base-japanese-char-luw-upos | dac8407867fd2e1e93522a870b7020574c749823 | 2022-06-26T22:56:26.000Z | [
"pytorch",
"roberta",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-japanese-char-luw-upos | 4 | null | transformers | 17,948 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# roberta-base-japanese-char-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 (Aozora Bunko) texts for POS-tagging and dependency-parsing, derived from [roberta-base-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora-char). Every long-unit word is tagged with [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-char-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-japanese-char-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-japanese-char-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
KoichiYasuoka/xlm-roberta-base-english-upos | 733ae80046ed856c8a60c855459578e5bf17d57b | 2022-02-10T15:39:46.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"en",
"dataset:universal_dependencies",
"transformers",
"english",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/xlm-roberta-base-english-upos | 4 | null | transformers | 17,949 | ---
language:
- "en"
tags:
- "english"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
---
# xlm-roberta-base-english-upos
## Model Description
This is an XLM-RoBERTa model pre-trained with [UD_English-EWT](https://github.com/UniversalDependencies/UD_English-EWT) for POS-tagging and dependency-parsing, derived from [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/xlm-roberta-base-english-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/xlm-roberta-base-english-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/xlm-roberta-base-english-upos")
```
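Either way, the loaded model can be applied to raw text. A minimal sketch with the plain Transformers pipeline, mirroring the pattern used in the author's other UPOS model cards, would be:
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification, TokenClassificationPipeline

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/xlm-roberta-base-english-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/xlm-roberta-base-english-upos")
nlp = TokenClassificationPipeline(tokenizer=tokenizer, model=model, aggregation_strategy="simple")
# Each aggregated entry carries the word and its UPOS tag under "entity_group".
print([(t["word"], t["entity_group"]) for t in nlp("Florence Nightingale was born in 1820.")])
```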
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
Kumicho/distilbert-base-uncased-finetuned-cola | 8f6144c18b296e82ce364e3f857bf54d77f26233 | 2022-02-20T07:17:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Kumicho | null | Kumicho/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 17,950 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5258663312307151
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7758
- Matthews Correlation: 0.5259
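As a usage reference, the sketch below loads this checkpoint for inference. The checkpoint name comes from this repository, while reading LABEL_1 as "grammatically acceptable" follows the usual CoLA convention and is an assumption, since the card does not define an explicit label mapping.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Kumicho/distilbert-base-uncased-finetuned-cola")
# CoLA is binary acceptability classification; LABEL_1 conventionally means "acceptable".
print(classifier("The book was written by the author."))
```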
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1926 | 1.0 | 535 | 0.7758 | 0.5259 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Kyoungmin/kcbert-base-petition | 00394077acea90f7e5dc88d3b0a05a7d64e44a19 | 2021-08-22T19:39:03.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Kyoungmin | null | Kyoungmin/kcbert-base-petition | 4 | null | transformers | 17,951 | This is a practice model for kcbert-base, trained on Korean petition data! |
Kyuyoung11/haremotions-v3 | 1fa5c448bbca839e3b4eb9c8656e73381cad87c5 | 2021-08-03T13:27:57.000Z | [
"pytorch",
"electra",
"transformers"
] | null | false | Kyuyoung11 | null | Kyuyoung11/haremotions-v3 | 4 | null | transformers | 17,952 | Entry not found |
LilaBoualili/bert-pre-doc | f4c54705917ead3b91dd6d074b53d904a9a7380b | 2021-05-20T09:58:43.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | LilaBoualili | null | LilaBoualili/bert-pre-doc | 4 | null | transformers | 17,953 | Entry not found |
LilaBoualili/bert-pre-pair | 51e34db022db85ed6eb207a2cea88860e35ceee6 | 2021-05-20T09:59:45.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | LilaBoualili | null | LilaBoualili/bert-pre-pair | 4 | null | transformers | 17,954 | Entry not found |
LilaBoualili/electra-pre-doc | ef3ed7e3d255490551454b086022804dbed13265 | 2021-05-18T15:04:09.000Z | [
"pytorch",
"tf",
"electra",
"text-classification",
"transformers"
] | text-classification | false | LilaBoualili | null | LilaBoualili/electra-pre-doc | 4 | null | transformers | 17,955 | Entry not found |
LilaBoualili/electra-pre-pair | c550e09aa8f365d7063ddbeac1bad3b454744be6 | 2021-05-18T15:11:59.000Z | [
"pytorch",
"tf",
"electra",
"text-classification",
"transformers"
] | text-classification | false | LilaBoualili | null | LilaBoualili/electra-pre-pair | 4 | null | transformers | 17,956 | Entry not found |
LilaBoualili/electra-vanilla | 92aa82dd9aad1480bf5650d7309a03ffb821e1bf | 2021-05-18T14:14:35.000Z | [
"pytorch",
"tf",
"electra",
"text-classification",
"transformers"
] | text-classification | false | LilaBoualili | null | LilaBoualili/electra-vanilla | 4 | null | transformers | 17,957 | At its core it uses an ELECTRA-Base model (google/electra-base-discriminator) fine-tuned on the MS MARCO passage classification task. It can be loaded using the TF/AutoModelForSequenceClassification classes, but it follows the same classification layer defined for BERT, similar to the TFElectraRelevanceHead in the Capreolus BERT-MaxP implementation.
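A minimal loading and scoring sketch is given below; the model ID comes from this repository, while treating logit index 1 as the relevant class follows the common MS MARCO re-ranking convention and is an assumption here.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "LilaBoualili/electra-vanilla"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

query = "what causes tides"
passage = "Tides are caused by the gravitational pull of the moon and the sun on the oceans."
inputs = tokenizer(query, passage, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Assumption: index 1 is the "relevant" class, as in typical MS MARCO passage re-rankers.
print(torch.softmax(logits, dim=-1)[0, 1].item())
```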
Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking. |
Lumos/imdb3_hga | 0e2f24290d5b703ff3033dea46f553621db3cb95 | 2021-12-22T05:49:35.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Lumos | null | Lumos/imdb3_hga | 4 | null | transformers | 17,958 | Entry not found |
Lumos/yahoo1 | dc66e90e0ecfc82a9855e5ac89ad1e734f7baa47 | 2021-12-13T12:51:14.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Lumos | null | Lumos/yahoo1 | 4 | null | transformers | 17,959 | Entry not found |
M-FAC/bert-mini-finetuned-qqp | 317db2b9745bf907cc1afe5559609bd6293d6138 | 2021-12-13T08:12:25.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"transformers"
] | text-classification | false | M-FAC | null | M-FAC/bert-mini-finetuned-qqp | 4 | null | transformers | 17,960 | # BERT-mini model finetuned with M-FAC
This model is finetuned on the QQP dataset with the state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on QQP validation set:
```bash
f1 = 82.98
accuracy = 87.03
```
Mean and standard deviation for 5 runs on QQP validation set:
| | F1 | Accuracy |
|:----:|:-----------:|:----------:|
| Adam | 82.43 ± 0.10 | 86.45 ± 0.12 |
| M-FAC | 82.67 ± 0.23 | 86.75 ± 0.20 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 10723 \
--model_name_or_path prajjwal1/bert-mini \
--task_name qqp \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
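As additional context, the `--optim MFAC` switch above assumes the optimizer has been registered inside `run_glue.py`. One hedged way to wire any custom optimizer into the Hugging Face `Trainer` is to hand it a pre-built optimizer/scheduler pair; the `MFAC` import path and constructor arguments below are assumptions, and the actual implementation is provided by the IST-DASLab/M-FAC repository.
```python
from transformers import Trainer, TrainingArguments, get_linear_schedule_with_warmup
from mfac.optim import MFAC  # hypothetical import path; take the real class from IST-DASLab/M-FAC

def build_trainer(model, train_dataset, eval_dataset, num_training_steps):
    args = TrainingArguments(output_dir="out_dir", per_device_train_batch_size=32, num_train_epochs=5)
    # Same hyperparameters as in the bash script above.
    optimizer = MFAC(model.parameters(), lr=1e-4, num_grads=1024, damp=1e-6)
    scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0,
                                                num_training_steps=num_training_steps)
    # Passing an (optimizer, scheduler) pair makes the Trainer skip its default AdamW.
    return Trainer(model=model, args=args, train_dataset=train_dataset,
                   eval_dataset=eval_dataset, optimizers=(optimizer, scheduler))
```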
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
|
M-FAC/bert-mini-finetuned-squadv2 | de4d617bde35bdfd66f52f3968442e613809f966 | 2021-12-13T08:13:09.000Z | [
"pytorch",
"bert",
"question-answering",
"arxiv:2107.03356",
"transformers",
"autotrain_compatible"
] | question-answering | false | M-FAC | null | M-FAC/bert-mini-finetuned-squadv2 | 4 | null | transformers | 17,961 | # BERT-mini model finetuned with M-FAC
This model is finetuned on the SQuAD version 2 dataset with the state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering](https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on SQuAD version 2 validation set:
```bash
exact_match = 58.38
f1 = 61.65
```
Mean and standard deviation for 5 runs on SQuAD version 2 validation set:
| | Exact Match | F1 |
|:----:|:-----------:|:----:|
| Adam | 54.80 ± 0.47 | 58.13 ± 0.31 |
| M-FAC | 58.02 ± 0.39 | 61.35 ± 0.24 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_qa.py \
--seed 8276 \
--model_name_or_path prajjwal1/bert-mini \
--dataset_name squad_v2 \
--version_2_with_negative \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 1e-4 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
|
MCUxDaredevil/DialoGPT-small-rick | fa67c7bea801ab0a15be6a0cd7aee7dc5fb85910 | 2021-10-31T19:55:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MCUxDaredevil | null | MCUxDaredevil/DialoGPT-small-rick | 4 | null | transformers | 17,962 | ---
tags:
- conversational
---
# Rick Sanchez DialoGPT Model |
Maelstrom77/roberta-large-mrpc | 47fbdb7535e9edcd37e2e385e41aa083d5eec799 | 2021-10-04T15:21:21.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | Maelstrom77 | null | Maelstrom77/roberta-large-mrpc | 4 | null | transformers | 17,963 | Entry not found |
Maha/OGBV-gender-indicbert-ta-fire20_fin | d68d7efa27ad574417d9cefe0a765f338393eda6 | 2022-02-20T06:51:04.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"transformers"
] | text-classification | false | Maha | null | Maha/OGBV-gender-indicbert-ta-fire20_fin | 4 | 1 | transformers | 17,964 | Entry not found |
Maha/hin-trac2 | a21e7b810415a5acf054a52587f8651f88883205 | 2022-02-22T04:20:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Maha | null | Maha/hin-trac2 | 4 | 1 | transformers | 17,965 | Entry not found |
MarcBrun/ixambert-finetuned-squad-eu-en | e77d274afd5b69d9f050fdf181176306efc21a31 | 2022-02-23T20:25:49.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"es",
"eu",
"dataset:squad",
"transformers",
"autotrain_compatible"
] | question-answering | false | MarcBrun | null | MarcBrun/ixambert-finetuned-squad-eu-en | 4 | null | transformers | 17,966 | ---
language:
- en
- es
- eu
datasets:
- squad
widget:
- text: "When was Florence Nightingale born?"
context: "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820."
example_title: "English"
- text: "¿Por qué provincias pasa el Tajo?"
context: "El Tajo es el río más largo de la península ibérica, a la que atraviesa en su parte central, siguiendo un rumbo este-oeste, con una leve inclinación hacia el suroeste, que se acentúa cuando llega a Portugal, donde recibe el nombre de Tejo.
Nace en los montes Universales, en la sierra de Albarracín, sobre la rama occidental del sistema Ibérico y, después de recorrer 1007 km, llega al océano Atlántico en la ciudad de Lisboa. En su desembocadura forma el estuario del mar de la Paja, en el que vierte un caudal medio de 456 m³/s. En sus primeros 816 km atraviesa España, donde discurre por cuatro comunidades autónomas (Aragón, Castilla-La Mancha, Madrid y Extremadura) y un total de seis provincias (Teruel, Guadalajara, Cuenca, Madrid, Toledo y Cáceres)."
example_title: "Español"
- text: "Zer beste izenak ditu Tartalo?"
context: "Tartalo euskal mitologiako izaki begibakar artzain erraldoia da. Tartalo izena zenbait euskal hizkeratan herskari-bustidurarekin ahoskatu ohi denez, horrelaxe ere idazten da batzuetan: Ttarttalo. Euskal Herriko zenbait tokitan, Torto edo Anxo ere esaten diote."
example_title: "Euskara"
---
# ixambert-base-cased finetuned for QA
This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on SQuAD v1.1 plus an experimental Basque version of SQuAD v1.1 (about one third the size of the original), that is able to answer basic factual questions in English, Spanish and Basque.
## Overview
* **Language model:** ixambert-base-cased
* **Languages:** English, Spanish and Basque
* **Downstream task:** Extractive QA
* **Training data:** SQuAD v1.1 + experimental SQuAD1.1 in Basque
* **Eval data:** SQuAD v1.1 + experimental SQuAD1.1 in Basque
* **Infrastructure:** 1x GeForce RTX 2080
## Outputs
The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score representing the probability that the extracted span is the correct answer. For example:
```python
{'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'}
```
## How to use
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "MarcBrun/ixambert-finetuned-squad-eu-en"
# To get predictions
context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820"
question = "When was Florence Nightingale born?"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)
pred = qa(question=question,context=context)
# To load the model and tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Hyperparameters
```
batch_size = 8
n_epochs = 3
learning_rate = 2e-5
optimizer = AdamW
lr_schedule = linear
max_seq_len = 384
doc_stride = 128
``` |
MarioPenguin/finetuned-model | ac1e6934e161ed1b2fd3983704425a33043a105f | 2022-01-29T11:18:13.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | MarioPenguin | null | MarioPenguin/finetuned-model | 4 | null | transformers | 17,967 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-model
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8601
- Accuracy: 0.6117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 84 | 0.8663 | 0.5914 |
| No log | 2.0 | 168 | 0.8601 | 0.6117 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
Maunish/kgrouping-roberta-large | e10c8cbb5dc28761ff968f05b5b80a4a17588fc4 | 2022-02-15T13:58:45.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | Maunish | null | Maunish/kgrouping-roberta-large | 4 | null | transformers | 17,968 | Entry not found |
MelissaTESSA/distilbert-base-uncased-finetuned-cola | f21110269d8fbe7e7c81eaa1ea349d5e2d59fa9e | 2022-01-22T17:01:17.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | MelissaTESSA | null | MelissaTESSA/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 17,969 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5206791471093309
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6324
- Matthews Correlation: 0.5207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5245 | 1.0 | 535 | 0.5155 | 0.4181 |
| 0.3446 | 2.0 | 1070 | 0.5623 | 0.4777 |
| 0.2331 | 3.0 | 1605 | 0.6324 | 0.5207 |
| 0.1678 | 4.0 | 2140 | 0.7706 | 0.5106 |
| 0.1255 | 5.0 | 2675 | 0.8852 | 0.4998 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
MickyMike/0-GPT2SP-duracloud | 09a64c1873fc7bf1bb2eb966c3fc11a23ab9f66c | 2021-08-19T02:00:58.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-duracloud | 4 | null | transformers | 17,970 | Entry not found |
MickyMike/0-GPT2SP-mesos | 6ba62901a54f0fe9b4bae5aeadcf27c1915264e4 | 2021-08-19T02:01:26.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-mesos | 4 | null | transformers | 17,971 | Entry not found |
MickyMike/0-GPT2SP-mule | 37c5e4734759c81626a1c0e4ef1e937633f55b39 | 2021-08-19T02:01:53.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-mule | 4 | null | transformers | 17,972 | Entry not found |
MickyMike/0-GPT2SP-springxd | e02baf39bdc93c9fb5551a10d99552b44a634ebe | 2021-08-19T02:02:19.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-springxd | 4 | null | transformers | 17,973 | Entry not found |
MickyMike/0-GPT2SP-titanium | 829fa29a3c96f19520e9f8b32a83300157cc188f | 2021-08-19T02:02:56.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-titanium | 4 | null | transformers | 17,974 | Entry not found |
MickyMike/00-GPT2SP-appceleratorstudio-aptanastudio | c72e62c776bc72747155cd5eea9f95c8656e6199 | 2021-08-15T06:51:13.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/00-GPT2SP-appceleratorstudio-aptanastudio | 4 | null | transformers | 17,975 | Entry not found |
MickyMike/00-GPT2SP-appceleratorstudio-titanium | 3defebd169f919d8554b55b712def36a58dbda62 | 2021-08-15T06:58:34.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/00-GPT2SP-appceleratorstudio-titanium | 4 | null | transformers | 17,976 | Entry not found |
MickyMike/00-GPT2SP-aptanastudio-titanium | 3ed2cb56c96b5da77f947f117ccc4e461b938f67 | 2021-08-15T07:26:55.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/00-GPT2SP-aptanastudio-titanium | 4 | null | transformers | 17,977 | Entry not found |
MickyMike/00-GPT2SP-titanium-appceleratorstudio | 2ada9aef662f632728cc0f99c745aea8f7aaebbb | 2021-08-15T07:11:41.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/00-GPT2SP-titanium-appceleratorstudio | 4 | null | transformers | 17,978 | Entry not found |
MickyMike/000-GPT2SP-appceleratorstudio-mule | f0713afb93b200a1e96d0617f5b99879a31115db | 2021-08-15T12:39:26.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/000-GPT2SP-appceleratorstudio-mule | 4 | null | transformers | 17,979 | Entry not found |
MickyMike/000-GPT2SP-appceleratorstudio-mulestudio | 58ed508e16849284ebcccaa9ecf6663cc50c4dde | 2021-08-15T12:32:28.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/000-GPT2SP-appceleratorstudio-mulestudio | 4 | null | transformers | 17,980 | Entry not found |
MickyMike/000-GPT2SP-mulestudio-titanium | 1f7f52dc8edaaf21e6534feb01b46e73854e5694 | 2021-08-15T12:26:05.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/000-GPT2SP-mulestudio-titanium | 4 | null | transformers | 17,981 | Entry not found |
MickyMike/000-GPT2SP-talenddataquality-appceleratorstudio | ad0bebc614858312ec4142b3370dfb1725e5cd7d | 2021-08-15T11:55:46.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/000-GPT2SP-talenddataquality-appceleratorstudio | 4 | null | transformers | 17,982 | Entry not found |
MickyMike/000-GPT2SP-talenddataquality-aptanastudio | 2fb0fac046ccd1d479c3930dcae3682c5b5a9c20 | 2021-08-15T11:30:36.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/000-GPT2SP-talenddataquality-aptanastudio | 4 | null | transformers | 17,983 | Entry not found |
MickyMike/1-GPT2SP-appceleratorstudio | eb49e972c41a3d4b3dd8300b8062f2e3ff27e74e | 2021-08-15T12:50:06.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-appceleratorstudio | 4 | null | transformers | 17,984 | Entry not found |
MickyMike/1-GPT2SP-aptanastudio | c2206d9b85e30acbebbdc47c9cc22574decfb091 | 2021-08-15T12:56:50.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-aptanastudio | 4 | null | transformers | 17,985 | Entry not found |
MickyMike/1-GPT2SP-bamboo | 2f284831b013d228f01f3333cd77803c79125301 | 2021-08-15T13:03:35.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-bamboo | 4 | null | transformers | 17,986 | Entry not found |
MickyMike/1-GPT2SP-clover | 8718c21c96767b9c65f8d6dd49a4615494f1b809 | 2021-08-15T13:09:47.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-clover | 4 | null | transformers | 17,987 | Entry not found |
MickyMike/1-GPT2SP-jirasoftware | a6e10b370003c2f322fc300c015d6d78f58646a6 | 2021-08-15T13:27:41.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-jirasoftware | 4 | null | transformers | 17,988 | Entry not found |
MickyMike/1-GPT2SP-mesos | 39db072949aad26e70855207c66f947623be8131 | 2021-08-15T13:34:01.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-mesos | 4 | null | transformers | 17,989 | Entry not found |
MickyMike/1-GPT2SP-mulestudio | 6f46afdbfc431b08d317528cf695565e2c2aed70 | 2021-08-15T13:52:00.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-mulestudio | 4 | null | transformers | 17,990 | Entry not found |
MickyMike/1-GPT2SP-talenddataquality | d5c3d1a0d9731f7ca2d7c55b83876a293b39b108 | 2021-08-15T14:04:57.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-talenddataquality | 4 | null | transformers | 17,991 | Entry not found |
MickyMike/1-GPT2SP-talendesb | 745780d1686327f4870f23e40b4ec08f62af0ae0 | 2021-08-15T14:11:36.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-talendesb | 4 | null | transformers | 17,992 | Entry not found |
MickyMike/1-GPT2SP-titanium | 4772c9b78fcd22019217f27ac6cc41e349e807a7 | 2021-08-15T14:17:23.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-titanium | 4 | null | transformers | 17,993 | Entry not found |
MickyMike/1-GPT2SP-usergrid | 67a5c517bad6fbfa18419725106675050947dce8 | 2021-08-15T14:23:31.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-usergrid | 4 | null | transformers | 17,994 | Entry not found |
MickyMike/11-GPT2SP-appceleratorstudio-aptanastudio | a695b0390a604489670268d4b87ebb30eab23452 | 2021-08-15T23:40:10.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/11-GPT2SP-appceleratorstudio-aptanastudio | 4 | null | transformers | 17,995 | Entry not found |
MickyMike/11-GPT2SP-aptanastudio-titanium | ad5820c50cc84962dcdbed049044c0bf3de50a6f | 2021-08-15T23:58:24.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/11-GPT2SP-aptanastudio-titanium | 4 | null | transformers | 17,996 | Entry not found |
MickyMike/11-GPT2SP-mule-mulestudio | 96b283efdf9eb68ba46cd1a94039263d87a0cfb1 | 2021-08-16T00:04:35.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/11-GPT2SP-mule-mulestudio | 4 | null | transformers | 17,997 | Entry not found |
MickyMike/11-GPT2SP-mulestudio-mule | c904b0a92af29c484766235e627f9ca882f02308 | 2021-08-16T00:10:11.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/11-GPT2SP-mulestudio-mule | 4 | null | transformers | 17,998 | Entry not found |
MickyMike/11-GPT2SP-titanium-appceleratorstudio | b0b8121a458eff4a51ecea9931e370acd2b4092c | 2021-08-15T23:53:03.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | MickyMike | null | MickyMike/11-GPT2SP-titanium-appceleratorstudio | 4 | null | transformers | 17,999 | Entry not found |