modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-sv-ny | 83832a4b732092ddd7b8a2cb8b416ce4bcce28c1 | 2021-09-10T14:08:36.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"ny",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-ny | 6 | null | transformers | 14,900 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-ny
* source languages: sv
* target languages: ny
* OPUS readme: [sv-ny](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ny/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ny/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ny/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ny/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ny | 25.9 | 0.523 |
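A minimal inference sketch with the Hugging Face Transformers `translation` pipeline (the model ID is taken from this card; the Swedish example sentence is illustrative):
```python
from transformers import pipeline

# Model ID from this card; the Swedish input sentence is illustrative.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-sv-ny")
print(translator("Jag läser en bok."))
```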
|
Helsinki-NLP/opus-mt-sv-sm | 5c4903194f355a7b29d55d1e67dbaaa7ff6d4397 | 2021-09-10T14:09:23.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"sm",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-sm | 6 | null | transformers | 14,901 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-sm
* source languages: sv
* target languages: sm
* OPUS readme: [sv-sm](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-sm/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-sm/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sm/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sm/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.sm | 30.1 | 0.500 |
|
Helsinki-NLP/opus-mt-sv-sn | ea313487acac72a19245edd4c843142c45971fbd | 2021-09-10T14:09:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"sn",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-sn | 6 | null | transformers | 14,902 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-sn
* source languages: sv
* target languages: sn
* OPUS readme: [sv-sn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-sn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-sn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.sn | 27.4 | 0.557 |
|
Helsinki-NLP/opus-mt-sv-srn | c936b35e2e5372f6874e7dc32437d64269ab6d94 | 2021-09-10T14:09:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"srn",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-srn | 6 | null | transformers | 14,903 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-srn
* source languages: sv
* target languages: srn
* OPUS readme: [sv-srn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-srn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-srn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-srn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-srn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.srn | 31.3 | 0.506 |
|
Helsinki-NLP/opus-mt-sv-umb | 6636d42b0ce8125dc464b04b4218779d2722eebd | 2021-09-10T14:10:36.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"umb",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-umb | 6 | null | transformers | 14,904 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-umb
* source languages: sv
* target languages: umb
* OPUS readme: [sv-umb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-umb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-umb/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-umb/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-umb/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.umb | 20.4 | 0.431 |
|
Helsinki-NLP/opus-mt-sv-war | 30eaac3c1c19fe87703043d0124663304a71bf8b | 2021-09-11T10:47:18.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"war",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-war | 6 | null | transformers | 14,905 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-war
* source languages: sv
* target languages: war
* OPUS readme: [sv-war](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-war/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.war | 36.7 | 0.576 |
|
Helsinki-NLP/opus-mt-swc-fi | 7e5770742cdef48b5e511269536efd8b23e01403 | 2021-09-11T10:47:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"swc",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-swc-fi | 6 | null | transformers | 14,906 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-swc-fi
* source languages: swc
* target languages: fi
* OPUS readme: [swc-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.swc.fi | 26.0 | 0.489 |
|
Helsinki-NLP/opus-mt-tiv-sv | de6630eda2a84f548d8447b0ed52ca0187153e5f | 2021-09-11T10:48:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tiv",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tiv-sv | 6 | null | transformers | 14,907 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tiv-sv
* source languages: tiv
* target languages: sv
* OPUS readme: [tiv-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tiv-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tiv-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tiv.sv | 23.7 | 0.416 |
|
Helsinki-NLP/opus-mt-tll-sv | a0761f178b408385362f11a4c03af3234d1e5c83 | 2021-09-11T10:48:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tll",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tll-sv | 6 | null | transformers | 14,908 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tll-sv
* source languages: tll
* target languages: sv
* OPUS readme: [tll-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tll.sv | 25.6 | 0.436 |
|
Helsinki-NLP/opus-mt-tn-es | 9344644ad06dc9e24545b7d2ce6f692f9bbda19c | 2021-09-11T10:48:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tn",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tn-es | 6 | null | transformers | 14,909 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tn-es
* source languages: tn
* target languages: es
* OPUS readme: [tn-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tn.es | 29.1 | 0.479 |
|
Helsinki-NLP/opus-mt-uk-no | d8acdc2b34020958795f1bb9a843e6c58d9eba3b | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"no",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-no | 6 | null | transformers | 14,910 | ---
language:
- uk
- no
tags:
- translation
license: apache-2.0
---
### ukr-nor
* source group: Ukrainian
* target group: Norwegian
* OPUS readme: [ukr-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-nor/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.nor | 21.3 | 0.397 |
### System Info:
- hf_name: ukr-nor
- source_languages: ukr
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'no']
- src_constituents: {'ukr'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: nor
- short_pair: uk-no
- chrF2_score: 0.397
- bleu: 21.3
- brevity_penalty: 0.966
- ref_len: 4378.0
- src_name: Ukrainian
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: no
- prefer_old: False
- long_pair: ukr-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-uk-sl | fcf188f9c2190bfd1c79ce6d7f383dd0524a155b | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"sl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-sl | 6 | null | transformers | 14,911 | ---
language:
- uk
- sl
tags:
- translation
license: apache-2.0
---
### ukr-slv
* source group: Ukrainian
* target group: Slovenian
* OPUS readme: [ukr-slv](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-slv/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): slv
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.slv | 11.8 | 0.280 |
### System Info:
- hf_name: ukr-slv
- source_languages: ukr
- target_languages: slv
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-slv/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'sl']
- src_constituents: {'ukr'}
- tgt_constituents: {'slv'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: slv
- short_pair: uk-sl
- chrF2_score: 0.28
- bleu: 11.8
- brevity_penalty: 1.0
- ref_len: 3823.0
- src_name: Ukrainian
- tgt_name: Slovenian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: sl
- prefer_old: False
- long_pair: ukr-slv
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-urj-en | fe92897a53bf5a49330b75270775a685fc621301 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"se",
"fi",
"hu",
"et",
"urj",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-urj-en | 6 | null | transformers | 14,912 | ---
language:
- se
- fi
- hu
- et
- urj
- en
tags:
- translation
license: apache-2.0
---
### urj-eng
* source group: Uralic languages
* target group: English
* OPUS readme: [urj-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urj-eng/README.md)
* model: transformer
* source language(s): est fin fkv_Latn hun izh kpv krl liv_Latn mdf mhr myv sma sme udm vro
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2015-enfi-fineng.fin.eng | 22.7 | 0.511 |
| newsdev2018-enet-esteng.est.eng | 26.6 | 0.545 |
| newssyscomb2009-huneng.hun.eng | 21.3 | 0.493 |
| newstest2009-huneng.hun.eng | 20.1 | 0.487 |
| newstest2015-enfi-fineng.fin.eng | 23.9 | 0.521 |
| newstest2016-enfi-fineng.fin.eng | 25.8 | 0.542 |
| newstest2017-enfi-fineng.fin.eng | 28.9 | 0.562 |
| newstest2018-enet-esteng.est.eng | 27.0 | 0.552 |
| newstest2018-enfi-fineng.fin.eng | 21.2 | 0.492 |
| newstest2019-fien-fineng.fin.eng | 25.3 | 0.531 |
| newstestB2016-enfi-fineng.fin.eng | 21.3 | 0.500 |
| newstestB2017-enfi-fineng.fin.eng | 24.4 | 0.528 |
| newstestB2017-fien-fineng.fin.eng | 24.4 | 0.528 |
| Tatoeba-test.chm-eng.chm.eng | 0.8 | 0.131 |
| Tatoeba-test.est-eng.est.eng | 34.5 | 0.526 |
| Tatoeba-test.fin-eng.fin.eng | 28.1 | 0.485 |
| Tatoeba-test.fkv-eng.fkv.eng | 6.8 | 0.335 |
| Tatoeba-test.hun-eng.hun.eng | 25.1 | 0.452 |
| Tatoeba-test.izh-eng.izh.eng | 11.6 | 0.224 |
| Tatoeba-test.kom-eng.kom.eng | 2.4 | 0.110 |
| Tatoeba-test.krl-eng.krl.eng | 18.6 | 0.365 |
| Tatoeba-test.liv-eng.liv.eng | 0.5 | 0.078 |
| Tatoeba-test.mdf-eng.mdf.eng | 1.5 | 0.117 |
| Tatoeba-test.multi.eng | 47.8 | 0.646 |
| Tatoeba-test.myv-eng.myv.eng | 0.5 | 0.101 |
| Tatoeba-test.sma-eng.sma.eng | 1.2 | 0.110 |
| Tatoeba-test.sme-eng.sme.eng | 1.5 | 0.147 |
| Tatoeba-test.udm-eng.udm.eng | 1.0 | 0.130 |
### System Info:
- hf_name: urj-eng
- source_languages: urj
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urj-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['se', 'fi', 'hu', 'et', 'urj', 'en']
- src_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.test.txt
- src_alpha3: urj
- tgt_alpha3: eng
- short_pair: urj-en
- chrF2_score: 0.6459999999999999
- bleu: 47.8
- brevity_penalty: 0.993
- ref_len: 70882.0
- src_name: Uralic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: urj
- tgt_alpha2: en
- prefer_old: False
- long_pair: urj-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-war-sv | 495e9d4cfbe74466c2acf59971382430c5d36f38 | 2021-09-11T10:52:05.000Z | [
"pytorch",
"marian",
"text2text-generation",
"war",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-war-sv | 6 | null | transformers | 14,913 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-war-sv
* source languages: war
* target languages: sv
* OPUS readme: [war-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/war-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/war-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.war.sv | 31.4 | 0.505 |
|
Helsinki-NLP/opus-mt-xh-sv | a99d2b8a379cc558a0cc71612eff0a2e5566eaec | 2021-09-11T10:52:31.000Z | [
"pytorch",
"marian",
"text2text-generation",
"xh",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-xh-sv | 6 | null | transformers | 14,914 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-xh-sv
* source languages: xh
* target languages: sv
* OPUS readme: [xh-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/xh-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/xh-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.xh.sv | 33.1 | 0.522 |
|
Helsinki-NLP/opus-mt-yo-es | f4c8447391f383f0d0ba134023c7048654d2ba52 | 2021-09-11T10:52:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"yo",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-yo-es | 6 | null | transformers | 14,915 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-yo-es
* source languages: yo
* target languages: es
* OPUS readme: [yo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.es | 22.0 | 0.393 |
|
Helsinki-NLP/opus-tatoeba-en-ro | 6c507feea44019431df9a4a52c4dbc587e30b409 | 2021-11-08T07:32:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ro",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-tatoeba-en-ro | 6 | null | transformers | 14,916 | ---
language:
- en
- ro
tags:
- translation
license: apache-2.0
---
### en-ro
* source group: English
* target group: Romanian
* OPUS readme: [eng-ron](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ron/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): mol ron
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* valid language labels:
* download original weights: [opus+bt-2021-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.zip)
* test set translations: [opus+bt-2021-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.test.txt)
* test set scores: [opus+bt-2021-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.eval.txt)
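A minimal usage sketch with Hugging Face Transformers is given below. The model ID comes from this card; `>>ron<<` is assumed to be the Romanian label (the list of valid labels is not filled in above), and the input sentence is illustrative.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-tatoeba-en-ro"  # model ID from this card
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# Select the target language with a sentence-initial token; ">>ron<<" is
# assumed to be the Romanian label. The English sentence is illustrative.
batch = tokenizer([">>ron<< How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```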
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| newsdev2016-enro.eng-ron | 33.5 | 0.610 | 1999 | 51566 | 0.984 |
| newstest2016-enro.eng-ron | 31.7 | 0.591 | 1999 | 49094 | 0.998 |
| Tatoeba-test.eng-ron | 46.9 | 0.678 | 5000 | 36851 | 0.983 |
### System Info:
- hf_name: en-ro
- source_languages: eng
- target_languages: ron
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ron/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ro']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Romanian', {'ron'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-ron
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.test.txt
- src_alpha3: eng
- tgt_alpha3: ron
- chrF2_score: 0.678
- bleu: 46.9
- src_name: English
- tgt_name: Romanian
- train_date: 2021-03-07 00:00:00
- src_alpha2: en
- tgt_alpha2: ro
- prefer_old: False
- short_pair: en-ro
- helsinki_git_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002
- transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
- port_machine: LM0-400-22516.local
- port_time: 2021-11-08-09:31 |
Helsinki-NLP/opus-tatoeba-fi-en | c81186146e48f374f8e02a7c0e0dc29b6f9649a3 | 2021-11-08T09:16:17.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-tatoeba-fi-en | 6 | 1 | transformers | 14,917 | ---
language:
- fi
- en
tags:
- translation
license: apache-2.0
---
### fi-en
* source group: Finnish
* target group: English
* OPUS readme: [fin-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md)
* model: transformer-align
* source language(s): fin
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opusTCv20210807+bt-2021-08-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.zip)
* test set translations: [opusTCv20210807+bt-2021-08-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.test.txt)
* test set scores: [opusTCv20210807+bt-2021-08-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| newsdev2015-enfi.fin-eng | 27.1 | 0.550 | 1500 | 32104 | 0.988 |
| newstest2015-enfi.fin-eng | 28.5 | 0.560 | 1370 | 27356 | 0.980 |
| newstest2016-enfi.fin-eng | 31.7 | 0.586 | 3000 | 63043 | 1.000 |
| newstest2017-enfi.fin-eng | 34.6 | 0.610 | 3002 | 61936 | 0.988 |
| newstest2018-enfi.fin-eng | 25.4 | 0.530 | 3000 | 62325 | 0.981 |
| newstest2019-fien.fin-eng | 30.6 | 0.577 | 1996 | 36227 | 0.994 |
| newstestB2016-enfi.fin-eng | 25.8 | 0.538 | 3000 | 63043 | 0.987 |
| newstestB2017-enfi.fin-eng | 29.6 | 0.572 | 3002 | 61936 | 0.999 |
| newstestB2017-fien.fin-eng | 29.6 | 0.572 | 3002 | 61936 | 0.999 |
| Tatoeba-test-v2021-08-07.fin-eng | 54.1 | 0.700 | 10000 | 75212 | 0.988 |
### System Info:
- hf_name: fi-en
- source_languages: fin
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fi', 'en']
- src_constituents: ('Finnish', {'fin'})
- tgt_constituents: ('English', {'eng'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: fin-eng
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.test.txt
- src_alpha3: fin
- tgt_alpha3: eng
- chrF2_score: 0.7
- bleu: 54.1
- src_name: Finnish
- tgt_name: English
- train_date: 2021-08-25 00:00:00
- src_alpha2: fi
- tgt_alpha2: en
- prefer_old: False
- short_pair: fi-en
- helsinki_git_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002
- transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
- port_machine: LM0-400-22516.local
- port_time: 2021-11-04-21:36 |
HenryHXR/t5-base-finetuned-scitldr | c475ada3b27599a7aa47f0a048707e0f217e1889 | 2022-02-05T05:48:10.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | HenryHXR | null | HenryHXR/t5-base-finetuned-scitldr | 6 | null | transformers | 14,918 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-scitldr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-scitldr
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0232
- Rouge1: 35.2134
- Rouge2: 16.8919
- Rougel: 30.8442
- Rougelsum: 30.9316
- Gen Len: 18.7981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.0533 | 1.0 | 996 | 2.0285 | 34.9774 | 16.6163 | 30.6177 | 30.7038 | 18.7981 |
| 2.0994 | 2.0 | 1992 | 2.0232 | 35.2134 | 16.8919 | 30.8442 | 30.9316 | 18.7981 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
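The model name suggests fine-tuning for SciTLDR-style TLDR generation, so a minimal inference sketch with the `text2text-generation` pipeline might look like the following. The model ID comes from this card; the abstract snippet, the `max_length` value, and the use of a `summarize:` prefix are assumptions, since the card does not document the expected input format.
```python
from transformers import pipeline

# Model ID from this card; input text, prefix and max_length are illustrative.
tldr = pipeline("text2text-generation", model="HenryHXR/t5-base-finetuned-scitldr")
abstract = ("We propose a simple attention variant that reduces memory usage "
            "while matching baseline accuracy on several benchmarks.")
print(tldr("summarize: " + abstract, max_length=40))
```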
|
HungChau/distilbert-base-cased-concept-extraction-kp20k-v1.0-concept-extraction-wikipedia-v1.3 | f39188e6ade4f4dc78041e381a683201bfc6dd91 | 2021-11-20T09:09:42.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-kp20k-v1.0-concept-extraction-wikipedia-v1.3 | 6 | null | transformers | 14,919 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.2-concept-extraction-wikipedia-v1.2 | e8bb9007b60886b72928bdcb473e835912da2896 | 2021-11-18T19:40:52.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.2-concept-extraction-wikipedia-v1.2 | 6 | null | transformers | 14,920 | Entry not found |
IMSyPP/hate_speech_targets_slo | 366f4e53b63595adc87f25f79a3d940dba1e9c86 | 2022-05-16T06:14:31.000Z | [
"pytorch",
"camembert",
"text-classification",
"sl",
"transformers",
"license:mit"
]
| text-classification | false | IMSyPP | null | IMSyPP/hate_speech_targets_slo | 6 | null | transformers | 14,921 | ---
language:
- sl
license: mit
--- |
InfoCoV/Senti-Cro-CoV-cseBERT | c6ddd6d8b929f838e2b6db0059ed5174edec0e38 | 2022-02-14T09:53:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | InfoCoV | null | InfoCoV/Senti-Cro-CoV-cseBERT | 6 | null | transformers | 14,922 | Entry not found |
ItuThesis2022MlviNikw/bert-base-uncased | 5fa5ab9f07d13e1d46d28e10df6febe2441a15ca | 2021-11-15T09:22:24.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | ItuThesis2022MlviNikw | null | ItuThesis2022MlviNikw/bert-base-uncased | 6 | null | transformers | 14,923 | Entry not found |
JBNLRY/distilbert-base-uncased-finetuned-cola | 9a163d990397209b4c4b853c9caaf583a4dc211c | 2022-02-17T19:56:47.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | JBNLRY | null | JBNLRY/distilbert-base-uncased-finetuned-cola | 6 | null | transformers | 14,924 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5471613867597194
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8366
- Matthews Correlation: 0.5472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
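For reference, a rough reconstruction of this configuration with the Transformers `TrainingArguments` API is sketched below; the `output_dir` and any setting not listed above are assumptions (the Adam betas/epsilon match the library defaults).
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```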
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5224 | 1.0 | 535 | 0.5432 | 0.4243 |
| 0.3447 | 2.0 | 1070 | 0.4968 | 0.5187 |
| 0.2347 | 3.0 | 1605 | 0.6540 | 0.5280 |
| 0.1747 | 4.0 | 2140 | 0.7547 | 0.5367 |
| 0.1255 | 5.0 | 2675 | 0.8366 | 0.5472 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4a | 1f01a81f39fac289d2d7d1864cd121362ac94a98 | 2021-11-19T20:43:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| multiple-choice | false | JazibEijaz | null | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4a | 6 | null | transformers | 14,925 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
name: bert-base-uncased-finetuned-semeval2020-task4a-e2-b32-l5e5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4a
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the ComVE dataset which was part of SemEval 2020 Task 4.
It achieves the following results on the test set:
- Loss: 0.2782
- Accuracy: 0.9040
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 344 | 0.2700 | 0.8940 |
| 0.349 | 2.0 | 688 | 0.2782 | 0.9040 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
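A minimal inference sketch with the multiple-choice head (the model ID comes from this card; the two candidate statements and the single-sentence input framing are assumptions, since the card does not describe the exact ComVE input format):
```python
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_name = "JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4a"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)
# Two candidate statements (illustrative); the input framing is an assumption.
candidates = ["He put a turkey into the fridge.", "He put an elephant into the fridge."]
enc = tokenizer(candidates, return_tensors="pt", padding=True)
# Multiple-choice models expect tensors of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # index of the preferred candidate
```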
|
JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4b-append-e3-b32-l4e5 | 7379c4e9fa952904e24cf8d9a81bb26ac355b3bf | 2021-11-06T01:17:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| multiple-choice | false | JazibEijaz | null | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4b-append-e3-b32-l4e5 | 6 | null | transformers | 14,926 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
name: bert-base-uncased-finetuned-semeval2020-task4b-append-e3-b32-l4e5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4b-append-e3-b32-l4e5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5121
- Accuracy: 0.8700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 344 | 0.3603 | 0.8550 |
| 0.3894 | 2.0 | 688 | 0.4011 | 0.8630 |
| 0.1088 | 3.0 | 1032 | 0.5121 | 0.8700 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
LysandreJik/testing | cfc35923cfb6c1e94d54296051e3dad3f3dcdad7 | 2021-09-22T19:19:12.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | LysandreJik | null | LysandreJik/testing | 6 | null | transformers | 14,927 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: testing
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6813725490196079
- name: F1
type: f1
value: 0.8104956268221574
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testing
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6644
- Accuracy: 0.6814
- F1: 0.8105
- Combined Score: 0.7459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.11.0
- Tokenizers 0.10.3
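MRPC is a sentence-pair (paraphrase) task, so inference passes two sentences at once. A minimal sketch is given below; the model ID comes from this card, the example sentences are illustrative, and the label order is read from the model's `id2label` config rather than assumed.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "LysandreJik/testing"  # model ID from this card
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# MRPC inputs are sentence pairs; these example sentences are illustrative.
inputs = tokenizer("The cat sat on the mat.", "A cat was sitting on the mat.",
                   return_tensors="pt")
probs = model(**inputs).logits.softmax(dim=-1)
print(probs, model.config.id2label)
```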
|
Jipski/gpt2-Flo-BasBoettcher | 18d8c667bcfa2896ee7cbbff65c25243ff5eafd8 | 2021-12-06T21:44:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Jipski | null | Jipski/gpt2-Flo-BasBoettcher | 6 | null | transformers | 14,928 | Entry not found |
JonatanGk/roberta-base-ca-finetuned-hate-speech-offensive-catalan | 4a17bacc10f6be55d75bfc4335bff204066b54b4 | 2021-10-18T17:10:50.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | JonatanGk | null | JonatanGk/roberta-base-ca-finetuned-hate-speech-offensive-catalan | 6 | 1 | transformers | 14,929 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-ca-finetuned-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ca-finetuned-mnli
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4137
- Accuracy: 0.8778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3699 | 1.0 | 1255 | 0.3712 | 0.8669 |
| 0.3082 | 2.0 | 2510 | 0.3401 | 0.8766 |
| 0.2375 | 3.0 | 3765 | 0.4137 | 0.8778 |
| 0.1889 | 4.0 | 5020 | 0.4671 | 0.8733 |
| 0.1486 | 5.0 | 6275 | 0.5205 | 0.8749 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k | 18d610687ab2e575524ef9ceadf08051533b8cce | 2021-09-23T15:49:01.000Z | [
"pytorch",
"dataset:Libri2Mix",
"dataset:sep_noisy",
"asteroid",
"audio",
"ConvTasNet",
"audio-to-audio",
"license:cc-by-sa-4.0"
]
| audio-to-audio | false | JorisCos | null | JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k | 6 | null | asteroid | 14,930 | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri2Mix
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k`
Imported from [Zenodo](https://zenodo.org/record/3874420#.X9I6NcLjJH4)
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri2Mix dataset.
Training config:
```yml
data:
n_src: 2
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 4
```
Results:
On Libri2Mix min test set:
```yml
si_sdr: 9.944424856077259
si_sdr_imp: 11.939395359731192
sdr: 10.701526190782072
sdr_imp: 12.481757547845662
sir: 22.633644975545575
sir_imp: 22.45666740833025
sar: 11.131644100944868
sar_imp: 4.248489589311784
stoi: 0.852048619949357
stoi_imp: 0.2071994899565506
```
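A minimal separation sketch, assuming Asteroid's `BaseModel.from_pretrained` hub loader (the model ID comes from this card; `mixture.wav` is an illustrative 8 kHz mono file):
```python
import soundfile as sf
import torch
from asteroid.models import BaseModel

# Model ID from this card; "mixture.wav" is an illustrative 8 kHz mono file.
model = BaseModel.from_pretrained("JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k")
mixture, rate = sf.read("mixture.wav", dtype="float32")
assert rate == 8000  # this checkpoint was trained on 8 kHz audio
est_sources = model(torch.from_numpy(mixture).unsqueeze(0))
print(est_sources.shape)  # expected (batch, n_src=2, time)
```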
License notice:
This work "ConvTasNet_Libri2Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri2Mix_sepnoisy_8k" is licensed under A[Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k | da1de55d48fd0f9ace052e79b942caac4ca1e564 | 2021-09-23T15:49:10.000Z | [
"pytorch",
"dataset:Libri3Mix",
"dataset:sep_noisy",
"asteroid",
"audio",
"ConvTasNet",
"audio-to-audio",
"license:cc-by-sa-4.0"
]
| audio-to-audio | false | JorisCos | null | JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k | 6 | null | asteroid | 14,931 | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri3Mix
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set:
```yml
si_sdr: 5.978836560066222
si_sdr_imp: 10.388889689413096
sdr: 6.8651365291740225
sdr_imp: 10.928018056925016
sir: 14.997089638783114
sir_imp: 18.08248357801549
sar: 8.127504792061933
sar_imp: -0.7869320540959925
stoi: 0.7669414686111115
stoi_imp: 0.20416563213078837
```
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri3Mix_sepnoisy_8k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
Jung/t5-base | 2e6bc110434343c45956579d811db95cce26073f | 2021-06-23T02:31:04.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Jung | null | Jung/t5-base | 6 | null | transformers | 14,932 | Entry not found |
Jungwoo/distilbert-base-uncased-finetuned-cola | d6a1df9bcd6ea60a847908046fff7e45ef6e8699 | 2021-11-01T19:03:45.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Jungwoo | null | Jungwoo/distilbert-base-uncased-finetuned-cola | 6 | null | transformers | 14,933 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.541356878970505
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7470
- Matthews Correlation: 0.5414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5237 | 1.0 | 535 | 0.5327 | 0.4248 |
| 0.347 | 2.0 | 1070 | 0.5105 | 0.5239 |
| 0.2344 | 3.0 | 1605 | 0.6639 | 0.5224 |
| 0.1672 | 4.0 | 2140 | 0.7470 | 0.5414 |
| 0.1228 | 5.0 | 2675 | 0.8352 | 0.5377 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
bush/autonlp-bp-29016523 | 09c2c085674b6fbea0665f9eb28033290d2a284a | 2021-11-03T09:30:13.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Jush/autonlp-data-bp",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | bush | null | bush/autonlp-bp-29016523 | 6 | null | transformers | 14,934 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Jush/autonlp-data-bp
co2_eq_emissions: 3.273303707756322
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 29016523
- CO2 Emissions (in grams): 3.273303707756322
## Validation Metrics
- Loss: 0.6093757748603821
- Accuracy: 0.8333333333333334
- Macro F1: 0.7937936978656889
- Micro F1: 0.8333333333333334
- Weighted F1: 0.8239843785760546
- Macro Precision: 0.8988882462566673
- Micro Precision: 0.8333333333333334
- Weighted Precision: 0.8404982541824647
- Macro Recall: 0.7805142534864643
- Micro Recall: 0.8333333333333334
- Weighted Recall: 0.8333333333333334
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Jush/autonlp-bp-29016523
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Jush/autonlp-bp-29016523", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Jush/autonlp-bp-29016523", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
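# Map the argmax of the logits to a label name via the model config
# (assuming AutoNLP populated id2label as usual).
predicted_class_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])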
``` |
KamSut/distilbert-base-uncased-finetuned-ner | 5f5f208f61b62dd3695dba0f60b8a87fee39233b | 2021-08-08T16:51:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | KamSut | null | KamSut/distilbert-base-uncased-finetuned-ner | 6 | null | transformers | 14,935 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9836370279759162
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0604
- Precision: 0.9271
- Recall: 0.9381
- F1: 0.9326
- Accuracy: 0.9836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2324 | 1.0 | 878 | 0.0688 | 0.9146 | 0.9264 | 0.9205 | 0.9816 |
| 0.0517 | 2.0 | 1756 | 0.0620 | 0.9207 | 0.9329 | 0.9268 | 0.9829 |
| 0.0301 | 3.0 | 2634 | 0.0604 | 0.9271 | 0.9381 | 0.9326 | 0.9836 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
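A minimal inference sketch with the token-classification pipeline (the model ID comes from this card; the example sentence and the `aggregation_strategy` setting are illustrative):
```python
from transformers import pipeline

# Model ID from this card; aggregation_strategy="simple" merges word pieces
# into entity spans. The example sentence is illustrative.
ner = pipeline("token-classification",
               model="KamSut/distilbert-base-uncased-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```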
|
Katsiaryna/distilbert-base-uncased-finetuned_9th_auc | aa5aab5f5aee08d0e9ea1ffde91eae08bdf4f86a | 2021-12-09T17:14:22.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | Katsiaryna | null | Katsiaryna/distilbert-base-uncased-finetuned_9th_auc | 6 | null | transformers | 14,936 | Entry not found |
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_40000-top3 | 652ebf39f196ac724a8e12ba4566134a878a491a | 2021-12-16T21:22:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Katsiaryna | null | Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_40000-top3 | 6 | null | transformers | 14,937 | Entry not found |
Kayvane/distilbert-undersampled-noweights | af96d880e033697ada5adcacc9efc8af6db2c59c | 2022-02-21T11:54:42.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | Kayvane | null | Kayvane/distilbert-undersampled-noweights | 6 | null | transformers | 14,938 | ---
tags:
- generated_from_trainer
model-index:
- name: distilbert-undersampled-noweights
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-undersampled-noweights
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Kayvane/distilbert-undersampled | 21580714c8a515804daefd68e77698ff2f3f1bef | 2022-02-20T22:37:06.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Kayvane | null | Kayvane/distilbert-undersampled | 6 | null | transformers | 14,939 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: distilbert-undersampled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-undersampled
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0826
- Accuracy: 0.9811
- F1: 0.9810
- Recall: 0.9811
- Precision: 0.9812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.0959 | 0.2 | 2000 | 0.0999 | 0.9651 | 0.9628 | 0.9651 | 0.9655 |
| 0.0618 | 0.41 | 4000 | 0.0886 | 0.9717 | 0.9717 | 0.9717 | 0.9731 |
| 0.159 | 0.61 | 6000 | 0.0884 | 0.9719 | 0.9720 | 0.9719 | 0.9728 |
| 0.0513 | 0.81 | 8000 | 0.0785 | 0.9782 | 0.9782 | 0.9782 | 0.9788 |
| 0.0219 | 1.01 | 10000 | 0.0680 | 0.9779 | 0.9779 | 0.9779 | 0.9783 |
| 0.036 | 1.22 | 12000 | 0.0745 | 0.9787 | 0.9787 | 0.9787 | 0.9792 |
| 0.0892 | 1.42 | 14000 | 0.0675 | 0.9786 | 0.9786 | 0.9786 | 0.9789 |
| 0.0214 | 1.62 | 16000 | 0.0760 | 0.9799 | 0.9798 | 0.9799 | 0.9801 |
| 0.0882 | 1.83 | 18000 | 0.0800 | 0.9800 | 0.9800 | 0.9800 | 0.9802 |
| 0.0234 | 2.03 | 20000 | 0.0720 | 0.9813 | 0.9813 | 0.9813 | 0.9815 |
| 0.0132 | 2.23 | 22000 | 0.0738 | 0.9803 | 0.9803 | 0.9803 | 0.9805 |
| 0.0136 | 2.43 | 24000 | 0.0847 | 0.9804 | 0.9804 | 0.9804 | 0.9806 |
| 0.0119 | 2.64 | 26000 | 0.0826 | 0.9811 | 0.9810 | 0.9811 | 0.9812 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Kieran/distilbert-base-uncased-finetuned-cola | fbbacaef6dea5282e1cb80ce175b229a89a58978 | 2021-08-22T18:53:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | false | Kieran | null | Kieran/distilbert-base-uncased-finetuned-cola | 6 | null | transformers | 14,940 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model_index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
metric:
name: Matthews Correlation
type: matthews_correlation
value: 0.9719066462260881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1037
- Matthews Correlation: 0.9719
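Matthews correlation is the usual metric for CoLA-style acceptability judgements; purely as an illustration (this is not the evaluation code behind the numbers above), it can be computed with scikit-learn:
```python
from sklearn.metrics import matthews_corrcoef

# Toy labels/predictions, purely illustrative — not outputs of this model.
labels = [1, 0, 1, 1, 0, 1, 0, 0]
preds  = [1, 0, 1, 0, 0, 1, 0, 1]
print(matthews_corrcoef(labels, preds))  # ranges from -1 to +1; 0.9719 is near-perfect agreement
```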
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.2094 | 1.0 | 525 | 0.1069 | 0.9607 |
| 0.0483 | 2.0 | 1050 | 0.0878 | 0.9719 |
| 0.0296 | 3.0 | 1575 | 0.1263 | 0.9664 |
| 0.0108 | 4.0 | 2100 | 0.1037 | 0.9719 |
| 0.0096 | 5.0 | 2625 | 0.1065 | 0.9719 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
Krassy/xlm-roberta-base-finetuned-marc-en | 8a4efe62548e2223fd6c87f099f0f65b424685d6 | 2021-10-22T16:06:45.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | Krassy | null | Krassy/xlm-roberta-base-finetuned-marc-en | 6 | 1 | transformers | 14,941 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9005
- Mae: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.108 | 1.0 | 235 | 0.9801 | 0.5610 |
| 0.9592 | 2.0 | 470 | 0.9005 | 0.5 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
LARACHNIDE/DialogGPT-small-sw | 491d8fd5ee6e700575587b4011ba3c26c7d052b4 | 2021-10-03T13:27:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | LARACHNIDE | null | LARACHNIDE/DialogGPT-small-sw | 6 | null | transformers | 14,942 | ---
tags:
- conversational
---
# VADER DialogGPT Model |
LaiJY/DialoGPTChatbot | 815f437606a1fd253bceb42b3ad90a6f0f223a23 | 2021-11-05T17:13:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | LaiJY | null | LaiJY/DialoGPTChatbot | 6 | null | transformers | 14,943 | ---
tags:
- conversational
---
# Dialogue From Persona 3 |
Lazaro97/results | 77875e92dd07d3e72fe2606d68b8b5bde6596ac9 | 2021-10-10T21:48:18.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Lazaro97 | null | Lazaro97/results | 6 | null | transformers | 14,944 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: results
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.8404
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3793
- Accuracy: 0.8404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3542 | 1.0 | 125 | 0.3611 | 0.839 |
| 0.2255 | 2.0 | 250 | 0.3793 | 0.8404 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
LegolasTheElf/Wav2Vec2_XLSR_Bengali_V2 | cb49f47519e2f96b95459657a30a20207d3bd260 | 2022-01-25T18:43:00.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | LegolasTheElf | null | LegolasTheElf/Wav2Vec2_XLSR_Bengali_V2 | 6 | null | transformers | 14,945 | Entry not found |
LilaBoualili/electra-sim-pair | fb8e8464a590e989f169639ee4f853f3e6f89f08 | 2021-05-18T14:13:57.000Z | [
"pytorch",
"tf",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | LilaBoualili | null | LilaBoualili/electra-sim-pair | 6 | null | transformers | 14,946 | At its core it uses an ELECTRA-Base model (google/electra-base-discriminator) fine-tuned on the MS MARCO passage classification task using the Sim-Pair marking strategy that highlights exact term matches between the query and the passage via marker tokens (#). It can be loaded using the TF/AutoModelForSequenceClassification classes but it follows the same classification layer defined for BERT similarly to the TFElectraRelevanceHead in the Capreolus BERT-MaxP implementation.
Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking. |
Lumos/imdb4 | 435ca23f662a1191f7cb3acc99e3b6447d6013a4 | 2021-12-14T04:41:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Lumos | null | Lumos/imdb4 | 6 | null | transformers | 14,947 | Entry not found |
M-FAC/bert-tiny-finetuned-mnli | 618f766f89b50853abc1bea92fd38e1973818f0b | 2021-12-13T08:14:33.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"transformers"
]
| text-classification | false | M-FAC | null | M-FAC/bert-tiny-finetuned-mnli | 6 | null | transformers | 14,948 | # BERT-tiny model finetuned with M-FAC
This model is finetuned on the MNLI dataset with the state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For a fair comparison against the default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and simply swap the Adam optimizer for M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on MNLI validation set:
```bash
matched_accuracy = 69.55
mismatched_accuracy = 70.58
```
Mean and standard deviation for 5 runs on MNLI validation set:
| | Matched Accuracy | Mismatched Accuracy |
|:----:|:-----------:|:----------:|
| Adam | 65.36 ± 0.13 | 66.78 ± 0.15 |
| M-FAC | 68.28 ± 3.29 | 68.98 ± 3.05 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 42 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
|
M47Labs/binary_classification_arabic | 0c4fbe417094b85b0b4508039787d898a1f028b4 | 2022-01-03T15:43:38.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | M47Labs | null | M47Labs/binary_classification_arabic | 6 | null | transformers | 14,949 | Entry not found |
MMG/bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad2-es | 58ff4a3113b3f212f45fdf42b65515949bc30b96 | 2021-12-20T08:10:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"es",
"dataset:squad_es",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | MMG | null | MMG/bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad2-es | 6 | null | transformers | 14,950 | ---
tags:
- generated_from_trainer
datasets:
- squad_es
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad2-es
results: []
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad2-es
This model is a fine-tuned version of [MMG/bert-base-spanish-wwm-cased-finetuned-sqac](https://huggingface.co/MMG/bert-base-spanish-wwm-cased-finetuned-sqac) on the squad_es dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2584
- Exact match: 63.3581
- F1: 70.2250
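The card does not include a usage snippet; a minimal sketch with the `question-answering` pipeline would look like the following (the context and question are made-up examples, not taken from squad_es):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="MMG/bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad2-es",
)

# Illustrative Spanish example ("The Prado Museum is in Madrid and was opened in 1819.").
context = "El Museo del Prado se encuentra en Madrid y fue inaugurado en 1819."
question = "¿Cuándo fue inaugurado el Museo del Prado?"
print(qa(question=question, context=context))  # dict with 'score', 'start', 'end', 'answer'
```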
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
MarcBrun/ixambert-finetuned-squad | c7d342a9e1e9766e870888511ee0d65dead364a3 | 2022-02-23T20:30:44.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"es",
"eu",
"dataset:squad",
"transformers",
"autotrain_compatible"
]
| question-answering | false | MarcBrun | null | MarcBrun/ixambert-finetuned-squad | 6 | 1 | transformers | 14,951 | ---
language:
- en
- es
- eu
datasets:
- squad
widget:
- text: "When was Florence Nightingale born?"
context: "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820."
example_title: "English"
- text: "¿Por qué provincias pasa el Tajo?"
context: "El Tajo es el río más largo de la península ibérica, a la que atraviesa en su parte central, siguiendo un rumbo este-oeste, con una leve inclinación hacia el suroeste, que se acentúa cuando llega a Portugal, donde recibe el nombre de Tejo.
Nace en los montes Universales, en la sierra de Albarracín, sobre la rama occidental del sistema Ibérico y, después de recorrer 1007 km, llega al océano Atlántico en la ciudad de Lisboa. En su desembocadura forma el estuario del mar de la Paja, en el que vierte un caudal medio de 456 m³/s. En sus primeros 816 km atraviesa España, donde discurre por cuatro comunidades autónomas (Aragón, Castilla-La Mancha, Madrid y Extremadura) y un total de seis provincias (Teruel, Guadalajara, Cuenca, Madrid, Toledo y Cáceres)."
example_title: "Español"
- text: "Zer beste izenak ditu Tartalo?"
context: "Tartalo euskal mitologiako izaki begibakar artzain erraldoia da. Tartalo izena zenbait euskal hizkeratan herskari-bustidurarekin ahoskatu ohi denez, horrelaxe ere idazten da batzuetan: Ttarttalo. Euskal Herriko zenbait tokitan, Torto edo Anxo ere esaten diote."
example_title: "Euskara"
---
# ixambert-base-cased finetuned for QA
This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on SQuAD v1.1, that is able to answer basic factual questions in English, Spanish and Basque.
## Overview
* **Language model:** ixambert-base-cased
* **Languages:** English, Spanish and Basque
* **Downstream task:** Extractive QA
* **Training data:** SQuAD v1.1
* **Eval data:** SQuAD v1.1
* **Infrastructure:** 1x GeForce RTX 2080
## Outputs
The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability that the span of text is the correct answer. For example:
```python
{'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'}
```
## How to use
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "MarcBrun/ixambert-finetuned-squad"
# To get predictions
context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820"
question = "When was Florence Nightingale born?"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)
pred = qa(question=question,context=context)
# To load the model and tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Hyperparameters
```
batch_size = 8
n_epochs = 3
learning_rate = 2e-5
optimizer = AdamW
lr_schedule = linear
max_seq_len = 384
doc_stride = 128
``` |
MarkusDressel/cord | 44cfb06ee38126de16b76bc4e21132868b12757c | 2021-12-04T15:58:52.000Z | [
"pytorch",
"layoutlmv2",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | MarkusDressel | null | MarkusDressel/cord | 6 | null | transformers | 14,952 | Entry not found |
Maxinstellar/outputs | a8074a9e182e1b54af4f8c9cd6bca66bb85c3516 | 2021-05-18T21:40:57.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Maxinstellar | null | Maxinstellar/outputs | 6 | null | transformers | 14,953 | Entry not found |
MiBo/RepML | 06b3f43fbdcfbbe8f4a8a3f85e53618f6e72c05e | 2022-04-27T18:19:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | MiBo | null | MiBo/RepML | 6 | null | transformers | 14,954 | Entry not found |
MiBo/SABERT | 7eb5b4dd35d1e7165265d9c637ed4a827efcbf57 | 2021-07-06T13:06:35.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | MiBo | null | MiBo/SABERT | 6 | null | transformers | 14,955 | Entry not found |
MiBo/SAGPT2 | 57d3916f4b2bb795799f83c2a083ae5ee9d15083 | 2021-07-07T18:16:38.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MiBo | null | MiBo/SAGPT2 | 6 | 2 | transformers | 14,956 | Entry not found |
MickyMike/0-GPT2SP-appceleratorstudio | 2f5673f36ffd4e6c25066a84e77c487a4c4fbf76 | 2021-08-19T01:48:13.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-appceleratorstudio | 6 | null | transformers | 14,957 | Entry not found |
MickyMike/00-GPT2SP-mesos-usergrid | af258389906e7004332b5f50bbf15c07b9993c43 | 2021-08-15T06:37:37.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/00-GPT2SP-mesos-usergrid | 6 | null | transformers | 14,958 | Entry not found |
MickyMike/00-GPT2SP-usergrid-mesos | 8cd77d552f5f5f83fa216953b0a2fe6640e44c02 | 2021-08-15T06:44:39.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/00-GPT2SP-usergrid-mesos | 6 | null | transformers | 14,959 | Entry not found |
MickyMike/11-GPT2SP-appceleratorstudio-titanium | fec71412067087d72087f5f5c676be37c1e82b82 | 2021-08-15T23:46:31.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/11-GPT2SP-appceleratorstudio-titanium | 6 | null | transformers | 14,960 | Entry not found |
MickyMike/2-GPT2SP-talenddataquality | 824b070f70ecc219b7dd25df32ea585b253b338a | 2021-08-29T21:49:18.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/2-GPT2SP-talenddataquality | 6 | null | transformers | 14,961 | Entry not found |
MickyMike/22-GPT2SP-usergrid-mesos | a735ad3ef9bf65ab627f3540ea19356155d593ab | 2021-08-29T22:26:58.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/22-GPT2SP-usergrid-mesos | 6 | null | transformers | 14,962 | Entry not found |
MickyMike/6-GPT2SP-springxd | 837d5f8ac7bf15b6ab88c6371d605fe2f7d5512a | 2021-08-30T03:11:31.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/6-GPT2SP-springxd | 6 | null | transformers | 14,963 | Entry not found |
MickyMike/6-GPT2SP-titanium | 91f615bc1afe3193047e6e4b44c00be4b806d08e | 2021-08-30T03:41:08.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/6-GPT2SP-titanium | 6 | null | transformers | 14,964 | Entry not found |
MickyMike/666-GPT2SP-talendesb-mesos | 3576e2c75a52956c34964e2dc5d4fc3902f5034d | 2021-08-30T05:15:14.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/666-GPT2SP-talendesb-mesos | 6 | null | transformers | 14,965 | Entry not found |
MickyMike/7-GPT2SP-clover | e720103e5ee03d946e1f5e88d1d258de0df19181 | 2021-08-30T17:57:59.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/7-GPT2SP-clover | 6 | null | transformers | 14,966 | Entry not found |
MickyMike/7-GPT2SP-datamanagement | 2fdf732cdd78af91887fba8d544ad32f0e58397b | 2021-08-30T18:09:18.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/7-GPT2SP-datamanagement | 6 | null | transformers | 14,967 | Entry not found |
MickyMike/7-GPT2SP-talenddataquality | c9ff858c1b88c739671297c97d6910f81206dbd7 | 2021-08-30T19:20:54.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/7-GPT2SP-talenddataquality | 6 | null | transformers | 14,968 | Entry not found |
MickyMike/777-GPT2SP-appceleratorstudio-mule | 6558fa309219b800ffe1f490e67cc9fa5eb8ec31 | 2021-08-30T22:03:54.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/777-GPT2SP-appceleratorstudio-mule | 6 | null | transformers | 14,969 | Entry not found |
MickyMike/777-GPT2SP-appceleratorstudio-mulestudio | a9ce501c460580e794b932260823c763a9e13f3d | 2021-08-30T21:53:54.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/777-GPT2SP-appceleratorstudio-mulestudio | 6 | null | transformers | 14,970 | Entry not found |
MickyMike/777-GPT2SP-mule-titanium | 76622c3ba66c1e6920122998ec62496321b063bc | 2021-08-30T21:27:13.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/777-GPT2SP-mule-titanium | 6 | null | transformers | 14,971 | Entry not found |
MickyMike/777-GPT2SP-mulestudio-titanium | 35caa9c47ca1fad0687f49760fbb3721010ca64c | 2021-08-30T21:43:50.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/777-GPT2SP-mulestudio-titanium | 6 | null | transformers | 14,972 | Entry not found |
MickyMike/777-GPT2SP-talenddataquality-appceleratorstudio | c61859a83fa57ebfae0c1315eb1a0388f71e4faf | 2021-08-30T21:34:30.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/777-GPT2SP-talenddataquality-appceleratorstudio | 6 | null | transformers | 14,973 | Entry not found |
MickyMike/777-GPT2SP-talenddataquality-aptanastudio | f65221dec62f0e22ecaea43718fd7f426190dfc8 | 2021-08-30T21:19:51.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/777-GPT2SP-talenddataquality-aptanastudio | 6 | null | transformers | 14,974 | Entry not found |
Monsia/autonlp-tweets-classification-23044997 | 753e5e6b8fcb6a187461c519975b6959fff9640a | 2021-10-20T14:38:58.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:Monsia/autonlp-data-tweets-classification",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | Monsia | null | Monsia/autonlp-tweets-classification-23044997 | 6 | null | transformers | 14,975 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Monsia/autonlp-data-tweets-classification
co2_eq_emissions: 4.819872182577655
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 23044997
- CO2 Emissions (in grams): 4.819872182577655
## Validation Metrics
- Loss: 0.001594889909029007
- Accuracy: 0.9997478885667465
- Macro F1: 0.9991190902836993
- Micro F1: 0.9997478885667465
- Weighted F1: 0.9997476735518704
- Macro Precision: 0.9998014460161265
- Micro Precision: 0.9997478885667465
- Weighted Precision: 0.9997479944069787
- Macro Recall: 0.9984426545713851
- Micro Recall: 0.9997478885667465
- Weighted Recall: 0.9997478885667465
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Monsia/autonlp-tweets-classification-23044997
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Monsia/autonlp-tweets-classification-23044997", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Monsia/autonlp-tweets-classification-23044997", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
MoseliMotsoehli/JoBerta | b6044cd2ebeffbff8de880f3962d8217cb0a80a7 | 2021-05-20T12:12:08.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | MoseliMotsoehli | null | MoseliMotsoehli/JoBerta | 6 | null | transformers | 14,976 | Entry not found |
Muennighoff/SGPT-125M-mean-nli | e3eae5208183fab1cd297be8f369b98654c77c02 | 2022-02-21T06:20:14.000Z | [
"pytorch",
"gpt_neo",
"feature-extraction",
"arxiv:2202.08904",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | Muennighoff | null | Muennighoff/SGPT-125M-mean-nli | 6 | null | sentence-transformers | 14,977 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# SGPT-125M-mean-nli
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
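Since the architecture below is a standard `sentence-transformers` stack with mean pooling, a minimal similarity sketch (assuming the checkpoint loads directly with `sentence-transformers`; the sentences are placeholders) is:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Muennighoff/SGPT-125M-mean-nli")

# Placeholder sentences, purely illustrative.
embeddings = model.encode([
    "A man is playing a guitar on stage.",
    "Someone performs music in front of an audience.",
])
print(util.cos_sim(embeddings[0], embeddings[1]))
```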
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
MultiBertGunjanPatrick/multiberts-seed-0-1400k | eb17a90d6a2f61f5e7d2796d4387186907d195cc | 2021-10-04T04:57:39.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-1400k | 6 | null | transformers | 14,978 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 1400k (uncased)
Seed 0 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT-2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-1400k')
model = BertModel.from_pretrained("multiberts-seed-0-1400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
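As a simplified, token-level illustration of the 80/10/10 rule above (the real preprocessing operates on WordPiece tokens and handles special tokens, which this sketch ignores):
```python
import random

def bert_style_mask(token_ids, vocab_size, mask_id, mask_prob=0.15):
    """Simplified sketch of the masking rule described above."""
    inputs, labels = list(token_ids), []
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:       # 15% of tokens are selected for prediction
            labels.append(tok)                # the model must recover the original token
            r = random.random()
            if r < 0.8:                       # 80% of selected tokens become [MASK]
                inputs[i] = mask_id
            elif r < 0.9:                     # 10% become a random token
                inputs[i] = random.randrange(vocab_size)
            # remaining 10% are left unchanged
        else:
            labels.append(-100)               # position ignored by the MLM loss
    return inputs, labels
```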
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-0-200k | 0aa09910dafe3a682729bffd6ef24a4abd7f19c9 | 2021-10-04T04:56:03.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-200k | 6 | null | transformers | 14,979 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 200k (uncased)
Seed 0 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT-2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-200k')
model = BertModel.from_pretrained("multiberts-seed-0-200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-0-40k | 32df75170fe529947a81bcf2d1b1b311d8089e33 | 2021-10-04T04:55:04.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-40k | 6 | null | transformers | 14,980 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 40k (uncased)
Seed 0 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT-2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-40k')
model = BertModel.from_pretrained("multiberts-seed-0-40k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-0-900k | 0630441d72637c824b6290952e54c373865698fd | 2021-10-04T04:57:01.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-900k | 6 | null | transformers | 14,981 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 900k (uncased)
Seed 0 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT-2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-900k')
model = BertModel.from_pretrained("multiberts-seed-0-900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-1-100k | b5f9fdaa545867f0e4ff15d9925ec34e040baf8f | 2021-10-04T04:59:08.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-1",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-1-100k | 6 | null | transformers | 14,982 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-1
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 1 Checkpoint 100k (uncased)
This is the seed-1 MultiBERTs (pretrained BERT) model at intermediate checkpoint 100k, pretrained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-100k')
model = BertModel.from_pretrained("multiberts-seed-1-100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
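For illustration (this snippet is not part of the original card), the same format can be reproduced by passing a sentence pair to the tokenizer; `bert-base-uncased` is used as a stand-in since it shares the WordPiece vocabulary:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer("The cat sat on the mat.", "It was very tired.")
# The token list starts with [CLS] and each of the two sentences is closed by [SEP]
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# token_type_ids distinguishes sentence A (0) from sentence B (1)
print(encoded["token_type_ids"])
```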
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-1-1300k | 194ef9290c4e79c123fc6248e411d4ecde787fc0 | 2021-10-04T05:01:09.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-1",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-1-1300k | 6 | null | transformers | 14,983 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-1
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 1 Checkpoint 1300k (uncased)
This is the seed-1 MultiBERTs (pretrained BERT) model at intermediate checkpoint 1300k, pretrained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1300k')
model = BertModel.from_pretrained("multiberts-seed-1-1300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
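A rough PyTorch equivalent of this optimization setup is sketched below (the original training used TensorFlow on TPUs, so the details differ; `AdamW` is used here as the closest stand-in for Adam with decoupled weight decay):
```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("bert-base-uncased")  # stand-in checkpoint
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup for the first 10k steps
    num_training_steps=2_000_000,  # then linear decay over the remaining steps
)
```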
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-400k | c42404add28bc1af1c1b3ba65d007b86f2e57da5 | 2021-10-04T05:07:25.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-3",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-3-400k | 6 | null | transformers | 14,984 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 400k (uncased)
This is the seed-3 MultiBERTs (pretrained BERT) model at intermediate checkpoint 400k, pretrained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multiberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-400k')
model = BertModel.from_pretrained("multiberts-seed-3-400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
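Because the checkpoint keeps its masked-language-modeling head, you can also probe it with the fill-mask pipeline. This is a hedged sketch: the full hub identifier of this repository is assumed, and depending on how the weights were exported you may need to load them explicitly with `BertForMaskedLM`.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-3-400k")
print(unmasker("The capital of France is [MASK]."))
```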
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
Mythiie/DialoGPT-small-Modeus | f2d8bfdd1a1367bb650fe3ddf11dc3d7c301c94c | 2022-02-16T03:17:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Mythiie | null | Mythiie/DialoGPT-small-Modeus | 6 | null | transformers | 14,985 | ---
tags:
- conversational
---
# Modeus DialoGPT Model |
Narsil/tiny-distilbert | 0cbfba28f2e5d98488b25755d8c849b67982516b | 2021-07-27T15:27:45.000Z | [
"pytorch",
"tf",
"distilbert",
"transformers"
]
| null | false | Narsil | null | Narsil/tiny-distilbert | 6 | null | transformers | 14,986 | Entry not found |
Nokia/nlgp-docstring | 895a4b8d6482ab595f9bdec4fd2dfca78b078ba8 | 2021-10-06T14:13:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"python",
"arxiv:2108.05198",
"transformers",
"code completion",
"code generation",
"license:apache-2.0"
]
| text-generation | false | Nokia | null | Nokia/nlgp-docstring | 6 | null | transformers | 14,987 | ---
language:
- en
- python
tags:
- code completion
- code generation
license: "apache-2.0"
---
# NLGP docstring model
The NLGP docstring model was introduced in the paper [Natural Language-Guided Programming](https://arxiv.org/abs/2108.05198). The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language **intent** in a certain code **context** (see the example below).
Also see the [NLGP natural](https://huggingface.co/Nokia/nlgp-natural) model.
This work was carried out by a research team in Nokia Bell Labs.
**Context**
```py
import matplotlib.pyplot as plt
values = [1, 2, 3, 4]
labels = ["a", "b", "c", "d"]
```
**Intent**
```py
# plot a bar chart
```
**Prediction**
```py
plt.bar(labels, values)
plt.show()
```
## Usage
```py
import re
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
# load the model
tok = GPT2TokenizerFast.from_pretrained("Nokia/nlgp-docstring")
model = GPT2LMHeadModel.from_pretrained("Nokia/nlgp-docstring")
# preprocessing functions
num_spaces = [2, 4, 6, 8, 10, 12, 14, 16, 18]
def preprocess(context, query):
    """
    Encodes context + query as a single string and
    replaces whitespace with special tokens <|2space|>, <|4space|>, ...
    """
    input_str = f"{context}\n{query} <|endofcomment|>\n"
    indentation_symbols = {n: f"<|{n}space|>" for n in num_spaces}
    m = re.match("^[ ]+", input_str)
    if not m:
        return input_str
    leading_whitespace = m.group(0)
    N = len(leading_whitespace)
    for n in num_spaces:
        # replace runs of n spaces with the matching indentation token
        leading_whitespace = leading_whitespace.replace(n * " ", indentation_symbols[n])
    return leading_whitespace + input_str[N:]

detokenize_pattern = re.compile(r"<\|(\d+)space\|>")

def postprocess(output):
    output = output.split("<|cell|>")[0]
    def insert_space(m):
        n = int(m.group(1))
        return n * " "
    return detokenize_pattern.sub(insert_space, output)
# inference
code_context = """
import matplotlib.pyplot as plt
values = [1, 2, 3, 4]
labels = ["a", "b", "c", "d"]
"""
query = "# plot a bar chart"
input_str = preprocess(code_context, query)
input_ids = tok(input_str, return_tensors="pt").input_ids
max_length = 150  # don't generate output longer than this length
total_max_length = min(1024, input_ids.shape[-1] + max_length)  # total = input + output, capped at the model's 1024-token context
input_and_output = model.generate(
input_ids=input_ids,
max_length=total_max_length,
min_length=10,
do_sample=False,
num_beams=4,
early_stopping=True,
eos_token_id=tok.encode("<|cell|>")[0]
)
output = input_and_output[:, input_ids.shape[-1]:] # remove the tokens that correspond to the input_str
output_str = tok.decode(output[0])
postprocess(output_str)
```
## License and copyright
Copyright 2021 Nokia
Licensed under the Apache License 2.0
SPDX-License-Identifier: Apache-2.0 |
Omar95farag/distilbert-base-uncased-distilled-clinc | 42bc75a6ee8947edbaf6aaa6a17ffea1da00d332 | 2022-02-24T01:25:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Omar95farag | null | Omar95farag/distilbert-base-uncased-distilled-clinc | 6 | null | transformers | 14,988 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9332258064516129
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1259
- Accuracy: 0.9332
## Model description
More information needed
## Intended uses & limitations
More information needed
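As a minimal usage sketch (not part of the original card), the checkpoint can be loaded with the standard text-classification pipeline; the CLINC intent label names are read from the model configuration:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Omar95farag/distilbert-base-uncased-distilled-clinc",
)
# Returns the predicted intent label and its score
print(classifier("How do I reset the PIN on my card?"))
```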
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 0.5952 | 0.7355 |
| 0.7663 | 2.0 | 636 | 0.3130 | 0.8742 |
| 0.7663 | 3.0 | 954 | 0.2024 | 0.9206 |
| 0.3043 | 4.0 | 1272 | 0.1590 | 0.9235 |
| 0.181 | 5.0 | 1590 | 0.1378 | 0.9303 |
| 0.181 | 6.0 | 1908 | 0.1287 | 0.9329 |
| 0.1468 | 7.0 | 2226 | 0.1259 | 0.9332 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Osiris/neutral_non_neutral_classifier | 234bde5bd078bc16a8346defbbc89dcf5f945a71 | 2021-11-13T21:54:29.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Osiris | null | Osiris/neutral_non_neutral_classifier | 6 | 2 | transformers | 14,989 | ### Introduction:
This is a text-classification model: you can use it to check whether a sentence contains any emotion.
### Label Explanation:
LABEL_1: Non-neutral (contains some emotion)
LABEL_0: Neutral (contains no emotion)
### Usage:
```python
>>> from transformers import pipeline
>>> nnc = pipeline('text-classification', model='Osiris/neutral_non_neutral_classifier')
>>> nnc("Hello, I'm a good model.")
```
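The pipeline returns a list of dictionaries with a `label` and a `score`. A small helper (hypothetical, not part of this repository) can map the raw labels to readable names:
```python
>>> label_names = {"LABEL_0": "neutral", "LABEL_1": "non-neutral"}
>>> result = nnc("Hello, I'm a good model.")[0]
>>> print(label_names[result["label"]], round(result["score"], 3))
```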
### Accuracy:
We reach 93.98% accuracy on the validation dataset and 91.92% on the test dataset. |
Pkrawczak/distilbert-base-uncased-finetuned-cola | 9a8075156a99e8b17845e69a34d3e240b92ab765 | 2021-11-24T10:28:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Pkrawczak | null | Pkrawczak/distilbert-base-uncased-finetuned-cola | 6 | null | transformers | 14,990 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5285049056800905
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6015
- Matthews Correlation: 0.5285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
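These values correspond roughly to the following `TrainingArguments` (a reconstruction for illustration, not the exact training script; `evaluation_strategy` is an assumption based on the per-epoch validation scores below):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # assumption: validation is reported once per epoch
)
```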
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5266 | 1.0 | 535 | 0.5474 | 0.4015 |
| 0.3561 | 2.0 | 1070 | 0.4830 | 0.5214 |
| 0.2416 | 3.0 | 1605 | 0.6015 | 0.5285 |
| 0.1695 | 4.0 | 2140 | 0.7748 | 0.5162 |
| 0.1302 | 5.0 | 2675 | 0.8369 | 0.5268 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Pyjay/bert-base-dutch-cased-finetuned-gv | 61febcb633a84583c94ae1d56043d3d81c4799ce | 2021-07-23T08:54:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| fill-mask | false | Pyjay | null | Pyjay/bert-base-dutch-cased-finetuned-gv | 6 | null | transformers | 14,991 | ---
tags:
- generated_from_trainer
model_index:
- name: bert-base-dutch-cased-finetuned-gv
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-dutch-cased-finetuned-gv
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4741 | 1.0 | 2603 | 1.8404 |
| 1.2384 | 2.0 | 5206 | 1.8457 |
| 1.2121 | 3.0 | 7809 | 1.7837 |
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
Pyke/DS-config-19 | 6d8e6baa92cab13adce7a266a1c90648fdd0db0d | 2021-08-22T18:35:38.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Pyke | null | Pyke/DS-config-19 | 6 | null | transformers | 14,992 | Entry not found |
Pyke/bart-finetuned-on-patent-Deepspeed-Test002 | f4a1b358f8a7c10b6fe0ce89d32ba6c9825ab074 | 2021-08-16T16:21:52.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-Test002 | 6 | null | transformers | 14,993 | Entry not found |
Pyke/bart-finetuned-with-patent | a3bb24a0fb5b37251018b19839b6735d083c68bc | 2021-08-06T18:55:10.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Pyke | null | Pyke/bart-finetuned-with-patent | 6 | null | transformers | 14,995 | This model was fine-tuned by Qichang Zheng (Pyke) from 'facebook/bart-base' on a patent-abstract dataset (7 million records), using the 'facebook/bart-base' tokenizer. The input is the same as the output: the patent abstract.
This model is fine-tuned to serve as a reference for the research project that Qichang is part of. |
QCRI/PropagandaTechniquesAnalysis-en-BERT | 1f096778870946b6200058c444f576e4e0eede97 | 2021-05-19T11:27:07.000Z | [
"pytorch",
"bert",
"en",
"transformers",
"propaganda",
"license:mit"
]
| null | false | QCRI | null | QCRI/PropagandaTechniquesAnalysis-en-BERT | 6 | 2 | transformers | 14,995 | ---
language: "en"
thumbnail: "https://pbs.twimg.com/profile_images/1092721745994440704/d6R-AHzj_400x400.jpg"
tags:
- propaganda
- bert
license: "MIT"
datasets:
-
metrics:
-
---
Propaganda Techniques Analysis BERT
----
This is a BERT-based model that predicts propaganda techniques in
English news articles. The model is described in
[this paper](https://propaganda.qcri.org/papers/EMNLP_2019__Fine_Grained_Propaganda_Detection.pdf).
## Model description
The definitions of the propaganda techniques can be found here:
https://propaganda.qcri.org/annotations/definitions.html
You can also try the model in action here: https://www.tanbih.org/prta
### How to use
```python
>>> import torch
>>> from transformers import BertTokenizerFast
>>> from .model import BertForTokenAndSequenceJointClassification
>>>
>>> tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
>>> model = BertForTokenAndSequenceJointClassification.from_pretrained(
>>> "QCRI/PropagandaTechniquesAnalysis-en-BERT",
>>> revision="v0.1.0",
>>> )
>>>
>>> inputs = tokenizer.encode_plus("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> sequence_class_index = torch.argmax(outputs.sequence_logits, dim=-1)
>>> sequence_class = model.sequence_tags[sequence_class_index[0]]
>>> token_class_index = torch.argmax(outputs.token_logits, dim=-1)
>>> tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0][1:-1])
>>> tags = [model.token_tags[i] for i in token_class_index[0].tolist()[1:-1]]
```
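To inspect the prediction, you can print the sequence-level class and pair each word-piece token with its predicted technique tag (a small follow-up to the snippet above):
```python
>>> print(sequence_class)
>>> for token, tag in zip(tokens, tags):
...     print(f"{token}\t{tag}")
```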
### BibTeX entry and citation info
```bibtex
@inproceedings{da-san-martino-etal-2019-fine,
title = "Fine-Grained Analysis of Propaganda in News Article",
author = "Da San Martino, Giovanni and
Yu, Seunghak and
Barr{\'o}n-Cede{\~n}o, Alberto and
Petrov, Rostislav and
Nakov, Preslav",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1565",
doi = "10.18653/v1/D19-1565",
pages = "5636--5646",
abstract = "Propaganda aims at influencing people{'}s mindset with the purpose of advancing a specific agenda. Previous work has addressed propaganda detection at document level, typically labelling all articles from a propagandistic news outlet as propaganda. Such noisy gold labels inevitably affect the quality of any learning system trained on them. A further issue with most existing systems is the lack of explainability. To overcome these limitations, we propose a novel task: performing fine-grained analysis of texts by detecting all fragments that contain propaganda techniques as well as their type. In particular, we create a corpus of news articles manually annotated at fragment level with eighteen propaganda techniques and propose a suitable evaluation measure. We further design a novel multi-granularity neural network, and we show that it outperforms several strong BERT-based baselines.",
}
```
|
QuickRead/fine-tune-Pegasus | 8bf8f5530f226a5c7214778ef4b11cc4fd315296 | 2022-02-25T12:13:39.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | QuickRead | null | QuickRead/fine-tune-Pegasus | 6 | null | transformers | 14,996 | ---
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: fine-tune-Pegasus
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 17.993
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tune-Pegasus
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3242
- Rouge1: 17.993
- Rouge2: 2.9392
- Rougel: 12.313
- Rougelsum: 13.3091
- Gen Len: 67.0552
## Model description
More information needed
## Intended uses & limitations
More information needed
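As a minimal usage sketch (not part of the original card), the fine-tuned checkpoint can be used for abstractive summarization:
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_id = "QuickRead/fine-tune-Pegasus"
tokenizer = PegasusTokenizer.from_pretrained(model_id)
model = PegasusForConditionalGeneration.from_pretrained(model_id)

article = "Replace me with the article you want to summarize."
batch = tokenizer(article, truncation=True, return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```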
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.35e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Ruizhou/bert-base-uncased-finetuned-cola | 841a6adce39fab659f0319caf427e73857849c09 | 2021-10-03T07:10:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Ruizhou | null | Ruizhou/bert-base-uncased-finetuned-cola | 6 | null | transformers | 14,997 | Entry not found |
RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt-lm | 5fe0a34b3bed78e605b54ba118c918cec24e6cb9 | 2022-03-24T11:57:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mt",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | RuudVelo | null | RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt-lm | 6 | null | transformers | 14,998 | ---
language:
- mt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- mt
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-1b-cv8-mt-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: mt
metrics:
- name: Test WER
type: wer
value: 15.88
- name: Test CER
type: cer
value: 3.65
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: mt
metrics:
- name: Test WER
type: wer
value: null
- name: Test CER
type: cer
value: null
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-cv8-mt-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the Common Voice 8 dataset.
It achieves the following results on the test set:
- Loss: 0.2210
- Wer: 0.1974
Note that the above test results come from the original model without LM (language model) which can be found at https://huggingface.co/RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt. The results with the LM model can be found on the right side of this model card.
## Model description
This is the RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt model augmented with a KenLM 3-gram language model.
## Intended uses & limitations
More information needed
## Training and evaluation data
The Maltese (mt) subset of the Common Voice 8 dataset was used to train the model.
## Training procedure
### Training hyperparameters
The following config and hyperparameters were used during training:
```python
from transformers import Wav2Vec2ForCTC, TrainingArguments

# `processor` is the Wav2Vec2Processor built from the Common Voice 8 Maltese vocabulary,
# defined earlier in the training script (not shown here).
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-1b",
    attention_dropout=0.05,
    hidden_dropout=0.05,
    feat_proj_dropout=0.05,
    mask_time_prob=0.55,
    mask_feature_prob=0.10,
    layerdrop=0.05,
    ctc_zero_infinity=True,
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)

# `repo_name` is the name of this model repository.
training_args = TrainingArguments(
    output_dir=repo_name,
    group_by_length=True,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=2,
    evaluation_strategy="steps",
    num_train_epochs=50,
    gradient_checkpointing=True,
    fp16=True,
    save_steps=400,
    eval_steps=400,
    logging_steps=400,
    learning_rate=5.5e-05,
    warmup_steps=500,
    save_total_limit=2,
    push_to_hub=True,
    report_to="tensorboard",
)
```
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0 |
SCORE/claim2-distilbert-base-uncased | 0bfdbfa2862a08085393714542bcf2126d877969 | 2021-12-14T16:45:12.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | SCORE | null | SCORE/claim2-distilbert-base-uncased | 6 | null | transformers | 14,999 | Entry not found |