modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-ru-he | b774dbdd50d1d357c48ad8c3f0d762ad53783c64 | 2020-10-26T14:35:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ru",
"he",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ru-he | 3 | null | transformers | 20,700 | ---
language:
- ru
- he
tags:
- translation
license: apache-2.0
---
### ru-he
* source group: Russian
* target group: Hebrew
* OPUS readme: [rus-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-heb/README.md)
* model: transformer
* source language(s): rus
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-10-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.zip)
* test set translations: [opus-2020-10-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.test.txt)
* test set scores: [opus-2020-10-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.eval.txt)
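A minimal usage sketch (not part of the original card), assuming the standard MarianMT API from the transformers library; the Russian input sentence is purely illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ru-he"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Any Russian text can go here; this sentence is just an example.
batch = tokenizer(["Привет, мир!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```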
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.heb | 36.1 | 0.569 |
### System Info:
- hf_name: ru-he
- source_languages: rus
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'he']
- src_constituents: ('Russian', {'rus'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: rus-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.test.txt
- src_alpha3: rus
- tgt_alpha3: heb
- chrF2_score: 0.569
- bleu: 36.1
- brevity_penalty: 0.9990000000000001
- ref_len: 15028.0
- src_name: Russian
- tgt_name: Hebrew
- train_date: 2020-10-04 00:00:00
- src_alpha2: ru
- tgt_alpha2: he
- prefer_old: False
- short_pair: ru-he
- helsinki_git_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561
- transformers_git_sha: b0a907615aca0d728a9bc90f16caef0848f6a435
- port_machine: LM0-400-22516.local
- port_time: 2020-10-26-16:16 |
Helsinki-NLP/opus-mt-sv-bzs | 57316a9ece186b6ab1f05aec0d666d1ce42d61d1 | 2021-09-10T14:05:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"bzs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-bzs | 3 | null | transformers | 20,701 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-bzs
* source languages: sv
* target languages: bzs
* OPUS readme: [sv-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-bzs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-bzs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-bzs/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.bzs | 29.4 | 0.484 |
|
Helsinki-NLP/opus-mt-sv-gil | f33ae26b5f66f3df6b1c874f5773c819a033937c | 2021-09-10T14:06:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"gil",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-gil | 3 | null | transformers | 20,702 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-gil
* source languages: sv
* target languages: gil
* OPUS readme: [sv-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-gil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-gil/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-gil/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-gil/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.gil | 28.9 | 0.520 |
|
Helsinki-NLP/opus-mt-sv-pag | fe81b6758992b9d8a4a1cb059b22eb8cb457eb4a | 2021-09-10T14:08:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"pag",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-pag | 3 | null | transformers | 20,703 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-pag
* source languages: sv
* target languages: pag
* OPUS readme: [sv-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-pag/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-pag/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-pag/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.pag | 29.3 | 0.522 |
|
Helsinki-NLP/opus-mt-sv-rnd | efbcb2ca58c23d33d3904b24f237b3740e98f7a3 | 2021-09-10T14:08:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"rnd",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-rnd | 3 | null | transformers | 20,704 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-rnd
* source languages: sv
* target languages: rnd
* OPUS readme: [sv-rnd](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-rnd/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-rnd/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-rnd/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-rnd/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.rnd | 20.3 | 0.433 |
|
Helsinki-NLP/opus-mt-sv-sv | 58b0fcea2bbc4be0da61aa888e86333f50423736 | 2021-09-10T14:09:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-sv | 3 | null | transformers | 20,705 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-sv
* source languages: sv
* target languages: sv
* OPUS readme: [sv-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-sv/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sv/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sv/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.sv.sv | 49.2 | 0.741 |
|
Helsinki-NLP/opus-mt-sv-ts | 8f447ae0774f741b954ed8615bdc6047fa0051e6 | 2021-09-10T14:10:14.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"ts",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-ts | 3 | null | transformers | 20,706 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-ts
* source languages: sv
* target languages: ts
* OPUS readme: [sv-ts](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ts/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ts/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ts/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ts/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ts | 34.4 | 0.567 |
|
Helsinki-NLP/opus-mt-tl-pt | a2781bdf8bdd0da6b85c0b0e1c70813ae688826c | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tl",
"pt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tl-pt | 3 | null | transformers | 20,707 | ---
language:
- tl
- pt
tags:
- translation
license: apache-2.0
---
### tgl-por
* source group: Tagalog
* target group: Portuguese
* OPUS readme: [tgl-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-por/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): por
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.por | 28.8 | 0.522 |
### System Info:
- hf_name: tgl-por
- source_languages: tgl
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'pt']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: por
- short_pair: tl-pt
- chrF2_score: 0.522
- bleu: 28.8
- brevity_penalty: 0.981
- ref_len: 12826.0
- src_name: Tagalog
- tgt_name: Portuguese
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: pt
- prefer_old: False
- long_pair: tgl-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-tr-lt | 4775ec98db87812658e34adf4ca2f05e20303a61 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tr",
"lt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tr-lt | 3 | null | transformers | 20,708 | ---
language:
- tr
- lt
tags:
- translation
license: apache-2.0
---
### tur-lit
* source group: Turkish
* target group: Lithuanian
* OPUS readme: [tur-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-lit/README.md)
* model: transformer-align
* source language(s): tur
* target language(s): lit
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tur.lit | 35.6 | 0.631 |
### System Info:
- hf_name: tur-lit
- source_languages: tur
- target_languages: lit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-lit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tr', 'lt']
- src_constituents: {'tur'}
- tgt_constituents: {'lit'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.test.txt
- src_alpha3: tur
- tgt_alpha3: lit
- short_pair: tr-lt
- chrF2_score: 0.631
- bleu: 35.6
- brevity_penalty: 0.9490000000000001
- ref_len: 8285.0
- src_name: Turkish
- tgt_name: Lithuanian
- train_date: 2020-06-17
- src_alpha2: tr
- tgt_alpha2: lt
- prefer_old: False
- long_pair: tur-lit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-tum-sv | afd5a8c27c991f569042aabf09947cdc08abb7b6 | 2021-09-11T10:50:18.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tum",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tum-sv | 3 | null | transformers | 20,709 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tum-sv
* source languages: tum
* target languages: sv
* OPUS readme: [tum-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tum-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tum-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tum.sv | 23.3 | 0.410 |
|
Helsinki-NLP/opus-mt-uk-bg | 6e01dde1b16917e97377ba639b1871b440660b35 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"bg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-bg | 3 | null | transformers | 20,710 | ---
language:
- uk
- bg
tags:
- translation
license: apache-2.0
---
### ukr-bul
* source group: Ukrainian
* target group: Bulgarian
* OPUS readme: [ukr-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-bul/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): bul
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.bul | 55.7 | 0.734 |
### System Info:
- hf_name: ukr-bul
- source_languages: ukr
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'bg']
- src_constituents: {'ukr'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: bul
- short_pair: uk-bg
- chrF2_score: 0.7340000000000001
- bleu: 55.7
- brevity_penalty: 0.976
- ref_len: 5181.0
- src_name: Ukrainian
- tgt_name: Bulgarian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: bg
- prefer_old: False
- long_pair: ukr-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-uk-ca | 9fcfca52698b28f464bd0daf015c7920071880af | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"ca",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-ca | 3 | null | transformers | 20,711 | ---
language:
- uk
- ca
tags:
- translation
license: apache-2.0
---
### ukr-cat
* source group: Ukrainian
* target group: Catalan
* OPUS readme: [ukr-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-cat/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.cat | 33.7 | 0.538 |
### System Info:
- hf_name: ukr-cat
- source_languages: ukr
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'ca']
- src_constituents: {'ukr'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.test.txt
- src_alpha3: ukr
- tgt_alpha3: cat
- short_pair: uk-ca
- chrF2_score: 0.5379999999999999
- bleu: 33.7
- brevity_penalty: 0.972
- ref_len: 2670.0
- src_name: Ukrainian
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: uk
- tgt_alpha2: ca
- prefer_old: False
- long_pair: ukr-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-war-fr | f8ab22aa6772151777b5a599be0c7b43b3e6e061 | 2021-09-11T10:52:01.000Z | [
"pytorch",
"marian",
"text2text-generation",
"war",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-war-fr | 3 | null | transformers | 20,712 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-war-fr
* source languages: war
* target languages: fr
* OPUS readme: [war-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/war-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/war-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.war.fr | 30.2 | 0.482 |
|
Helsinki-NLP/opus-mt-yo-sv | a7359326d804a72808d486dff245359070ebfb4e | 2021-09-11T10:53:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"yo",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-yo-sv | 3 | null | transformers | 20,713 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-yo-sv
* source languages: yo
* target languages: sv
* OPUS readme: [yo-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.sv | 25.2 | 0.434 |
|
Helsinki-NLP/opus-mt-zai-es | 80a82a186c31e94d411c6c2bd5ef3a8906e1f69c | 2021-09-11T10:53:03.000Z | [
"pytorch",
"marian",
"text2text-generation",
"zai",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zai-es | 3 | null | transformers | 20,714 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-zai-es
* source languages: zai
* target languages: es
* OPUS readme: [zai-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zai-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zai.es | 20.8 | 0.372 |
|
Helsinki-NLP/opus-mt-zne-fr | 9409705f9da9d04981ec044d8b32c3d46775c5e5 | 2021-09-11T10:53:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"zne",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zne-fr | 3 | null | transformers | 20,715 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-zne-fr
* source languages: zne
* target languages: fr
* OPUS readme: [zne-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zne-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zne-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zne.fr | 25.3 | 0.416 |
|
Hoang/distilbert-base-uncased-finetuned-squad | ff3caff2d935f33346fad042d72d3b2cfec6b540 | 2021-09-02T07:32:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Hoang | null | Hoang/distilbert-base-uncased-finetuned-squad | 3 | null | transformers | 20,716 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: squad
type: squad
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1582
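As an illustration (this snippet is not part of the original card), the checkpoint should be usable through the standard `question-answering` pipeline; the question and context below are made up:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Hoang/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased trained on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```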
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2176 | 1.0 | 5533 | 1.1429 |
| 0.9425 | 2.0 | 11066 | 1.1196 |
| 0.7586 | 3.0 | 16599 | 1.1582 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
HoeioUser/kod | 517fec8ff7d6e4e209f254397f8719c293a99dbd | 2022-01-23T23:23:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | HoeioUser | null | HoeioUser/kod | 3 | null | transformers | 20,717 | KOD file |
Humair/all-mpnet-base-v2-finetuned-v2 | 8bb08950736f608a4dba7c0bbf8047e255dbb459 | 2022-01-11T12:26:56.000Z | [
"pytorch",
"mpnet",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | Humair | null | Humair/all-mpnet-base-v2-finetuned-v2 | 3 | null | sentence-transformers | 20,718 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Humair/all-mpnet-base-v2-finetuned-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Humair/all-mpnet-base-v2-finetuned-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Humair/all-mpnet-base-v2-finetuned-v2')
model = AutoModel.from_pretrained('Humair/all-mpnet-base-v2-finetuned-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Humair/all-mpnet-base-v2-finetuned-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
HungChau/distilbert-base-cased-concept-extraction-kp20k-v1.0-concept-extraction-wikipedia-v1.0 | fb536ca32d80946a6fb5043b34b63ba33dbe61d2 | 2021-11-14T07:47:10.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-kp20k-v1.0-concept-extraction-wikipedia-v1.0 | 3 | null | transformers | 20,719 | Entry not found |
HungChau/distilbert-base-cased-concept-extraction-wikipedia-v1.2-concept-extraction-iir-v1.2 | 5ccf876861dfddd53758f661d08d6214bd6de74c | 2021-11-18T02:46:24.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-wikipedia-v1.2-concept-extraction-iir-v1.2 | 3 | null | transformers | 20,720 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-iir-v1.0-concept-extraction-wikipedia-v1.0 | 7f80ff28c1f6cef267f85b30740267fe0950607f | 2021-11-01T17:23:38.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-iir-v1.0-concept-extraction-wikipedia-v1.0 | 3 | null | transformers | 20,721 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-iir-v1.0 | 906eff08dd8c225415130c08fd759dc42dd9136c | 2021-09-04T20:57:35.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-iir-v1.0 | 3 | null | transformers | 20,722 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-iir-v1.3 | 5ee52f01284f671d282774029e9188479b38b8f4 | 2021-11-17T00:45:02.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-iir-v1.3 | 3 | null | transformers | 20,723 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.0 | 5f9c281ec0a1d04752b04ece82d6cac2bdaad790 | 2021-10-27T19:05:59.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.0 | 3 | null | transformers | 20,724 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.1 | 6eeea7e1dadba92e010c882f8c362c89878ce51c | 2021-11-12T05:19:08.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.1 | 3 | null | transformers | 20,725 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.2 | b9a3c18c6e01b98c121444506367cbb65d2bf74f | 2021-11-16T13:11:00.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.2 | 3 | null | transformers | 20,726 | Entry not found |
HypNyx/DialoGPT-small-DwightBot | e33fde7166eb59a22edfeb5d7662284ceb29397f | 2021-09-05T21:22:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | HypNyx | null | HypNyx/DialoGPT-small-DwightBot | 3 | null | transformers | 20,727 | ---
tags:
- conversational
---
# DwightSchrute DialoGPT Model
#TheOffice |
IDEA-CCNL/Yuyuan-GPT2-3.5B | f5d254253b34ecf2dd9cd728fc8dd93ba9de28ad | 2022-04-12T02:06:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"license:apache-2.0"
] | text-generation | false | IDEA-CCNL | null | IDEA-CCNL/Yuyuan-GPT2-3.5B | 3 | 2 | transformers | 20,728 | ---
language:
- en
inference: false
license: apache-2.0
---
# Yuyuan-GPT2-3.5B model (Medical), one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
Unidirectional, decoder-only language models such as GPT are known for their strong generation ability.
The 3.5-billion-parameter Yuyuan-GPT2-3.5B model, **trained on 50GB of medical (PubMed) data for 7 days on 32 A100 GPUs**, is the **largest open-source GPT2 model for the medical domain.**
Our model reaches nearly **90% accuracy on fact judgment in the medical field**.
We use the perplexity (PPL) produced by Yuyuan-GPT2-3.5B for fact judgment (see the sketch under Usage below), and turn interrogative sentences into declarative ones for medical question answering.
More possibilities are waiting for you to find out.
## Usage
### load model
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('IDEA-CCNL/Yuyuan-GPT2-3.5B')
model = GPT2Model.from_pretrained('IDEA-CCNL/Yuyuan-GPT2-3.5B')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### generation
```python
from transformers import pipeline, set_seed
set_seed(55)
generator = pipeline('text-generation', model='IDEA-CCNL/Yuyuan-GPT2-3.5B')
generator("Diabetics should not eat", max_length=30, num_return_sequences=1)
```
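### fact judgment (illustrative sketch)
The following sketch is not from the original card; it illustrates the perplexity-based fact judgment described above, assuming the checkpoint loads with `GPT2LMHeadModel`. The two example statements are made up.
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('IDEA-CCNL/Yuyuan-GPT2-3.5B')
model = GPT2LMHeadModel.from_pretrained('IDEA-CCNL/Yuyuan-GPT2-3.5B')
model.eval()

def perplexity(sentence):
    # PPL = exp(mean negative log-likelihood of the tokens)
    inputs = tokenizer(sentence, return_tensors='pt')
    with torch.no_grad():
        loss = model(**inputs, labels=inputs['input_ids']).loss
    return torch.exp(loss).item()

# The statement with the lower perplexity is taken to be the more plausible fact.
print(perplexity("Diabetics should avoid eating large amounts of sugar."))
print(perplexity("Diabetics should eat large amounts of sugar."))
```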
## example

## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
Ife/ES-PT | 6e099ee0f8c28e8d769b39b23411e514d8214f56 | 2021-09-16T04:42:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ife | null | Ife/ES-PT | 3 | null | transformers | 20,729 | Entry not found |
IlyaGusev/gen_title_tg_bottleneck | 6d4cda9c067ad17dbec24879d005d986351a9853 | 2020-11-28T11:45:25.000Z | [
"pytorch",
"encoder-decoder",
"transformers"
] | null | false | IlyaGusev | null | IlyaGusev/gen_title_tg_bottleneck | 3 | null | transformers | 20,730 | Entry not found |
Irina/trans_cyoa_rollouted | f5b05eb6a3d7e8f4ab8ec043df5ffc1da3c85d5e | 2021-12-20T11:36:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Irina | null | Irina/trans_cyoa_rollouted | 3 | null | transformers | 20,731 | Entry not found |
Iskaj/w2v-xlsr-dutch-lm | ad11e64040f5162700957d9050810632efa43b59 | 2022-01-27T13:41:13.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Iskaj | null | Iskaj/w2v-xlsr-dutch-lm | 3 | null | transformers | 20,732 | Model cloned from https://huggingface.co/facebook/wav2vec2-large-xlsr-53-dutch
Currently bugged: Logits size 48, vocab size 50 |
Iskaj/xlsr300m_cv_8.0_nl | f9a12bb388ed2f65fb55012cf2f9b0d2a56fec2a | 2022-03-24T11:53:05.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Iskaj | null | Iskaj/xlsr300m_cv_8.0_nl | 3 | null | transformers | 20,733 | ---
language:
- nl
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- mozilla-foundation/common_voice_7_0
- nl
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Dutch
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8 NL
type: mozilla-foundation/common_voice_8_0
args: nl
metrics:
- name: Test WER
type: wer
value: 46.94
- name: Test CER
type: cer
value: 21.65
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Test WER
type: wer
value: ???
- name: Test CER
type: cer
value: ???
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 42.56
---
# xlsr300m_cv_8.0_nl
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Iskaj/xlsr300m_cv_8.0_nl --dataset mozilla-foundation/common_voice_8_0 --config nl --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id Iskaj/xlsr300m_cv_8.0_nl --dataset speech-recognition-community-v2/dev_data --config nl --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "Iskaj/xlsr300m_cv_8.0_nl"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "nl", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
inputs = processor(resampled_audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
transcription[0].lower()
#'het kontine schip lag aangemeert in de aven'
```
|
JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4b | b83a95301abd7c6b01436fde1228945e6f343584 | 2021-11-19T20:43:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | JazibEijaz | null | JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4b | 3 | null | transformers | 20,734 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-semeval2020-task4b-append
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4b
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the ComVE dataset which was part of SemEval 2020 Task 4.
It achieves the following results on the test set:
- Loss: 0.6760
- Accuracy: 0.8760
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5016 | 1.0 | 688 | 0.3502 | 0.8600 |
| 0.2528 | 2.0 | 1376 | 0.5769 | 0.8620 |
| 0.0598 | 3.0 | 2064 | 0.6720 | 0.8700 |
| 0.0197 | 4.0 | 2752 | 0.6760 | 0.8760 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
JazibEijaz/bert-base-uncased-finetuned-swag-e1-b16-l5e5 | 33d4f215533361f1cc8095b5087b44a806efd9d9 | 2021-10-30T15:50:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"dataset:swag",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | JazibEijaz | null | JazibEijaz/bert-base-uncased-finetuned-swag-e1-b16-l5e5 | 3 | null | transformers | 20,735 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag-e1-b16-l5e5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag-e1-b16-l5e5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5202
- Accuracy: 0.7997
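For illustration only (this example is not part of the original card), the checkpoint should work with the standard multiple-choice head; the context and candidate endings below are invented:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "JazibEijaz/bert-base-uncased-finetuned-swag-e1-b16-l5e5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "She opened the fridge and"
endings = ["took out a carton of milk.", "drove the fridge to the airport."]

# Pair the context with each candidate ending, then add a batch dimension:
# the model expects tensors of shape (batch, num_choices, seq_len).
encoding = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)
print(endings[logits.argmax(dim=-1).item()])
```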
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.701 | 1.0 | 4597 | 0.5202 | 0.7997 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Jeska/BertjeWDialDataALLQonly03 | c88e7e3f33d647b862cd9b076a2b9d11dc71bc80 | 2021-12-09T19:42:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialDataALLQonly03 | 3 | null | transformers | 20,736 | ---
tags:
- generated_from_trainer
model-index:
- name: BertjeWDialDataALLQonly03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALLQonly03
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9995
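As a hypothetical usage sketch (not part of the original card), the model can presumably be queried with the standard `fill-mask` pipeline; the Dutch prompt is invented:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Jeska/BertjeWDialDataALLQonly03")
for pred in fill("Ik heb een vraag over mijn [MASK]."):
    print(pred["token_str"], pred["score"])
```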
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 24.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 435 | 2.0751 |
| 2.1982 | 2.0 | 870 | 2.0465 |
| 2.0841 | 3.0 | 1305 | 2.0420 |
| 2.0374 | 4.0 | 1740 | 2.0325 |
| 1.9731 | 5.0 | 2175 | 2.0075 |
| 1.9248 | 6.0 | 2610 | 2.0219 |
| 1.8848 | 7.0 | 3045 | 1.9770 |
| 1.8848 | 8.0 | 3480 | 2.0093 |
| 1.8419 | 9.0 | 3915 | 2.0298 |
| 1.804 | 10.0 | 4350 | 1.9681 |
| 1.7817 | 11.0 | 4785 | 1.9938 |
| 1.7472 | 12.0 | 5220 | 1.9654 |
| 1.7075 | 13.0 | 5655 | 1.9797 |
| 1.6976 | 14.0 | 6090 | 1.9691 |
| 1.6748 | 15.0 | 6525 | 1.9568 |
| 1.6748 | 16.0 | 6960 | 1.9618 |
| 1.6528 | 17.0 | 7395 | 1.9843 |
| 1.6335 | 18.0 | 7830 | 1.9265 |
| 1.6179 | 19.0 | 8265 | 1.9598 |
| 1.5992 | 20.0 | 8700 | 1.9331 |
| 1.583 | 21.0 | 9135 | 1.9795 |
| 1.5699 | 22.0 | 9570 | 2.0073 |
| 1.5703 | 23.0 | 10005 | 1.9308 |
| 1.5703 | 24.0 | 10440 | 1.9285 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Jeska/BertjeWDialDataALLQonly04 | 2563507345f35fcf323502127924e705ec73a15e | 2021-12-09T20:40:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialDataALLQonly04 | 3 | null | transformers | 20,737 | Entry not found |
Jeska/BertjeWDialDataALLQonly07 | 1a9fd2a0f4586831578230eacede17335f4195ef | 2021-12-11T05:43:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialDataALLQonly07 | 3 | null | transformers | 20,738 | ---
tags:
- generated_from_trainer
model-index:
- name: BertjeWDialDataALLQonly07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALLQonly07
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 18.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3589 | 1.0 | 871 | 2.2805 |
| 2.2563 | 2.0 | 1742 | 2.2501 |
| 2.1936 | 3.0 | 2613 | 2.2419 |
| 2.11 | 4.0 | 3484 | 2.2301 |
| 2.0311 | 5.0 | 4355 | 2.2320 |
| 1.969 | 6.0 | 5226 | 2.2276 |
| 1.9148 | 7.0 | 6097 | 2.1621 |
| 1.8569 | 8.0 | 6968 | 2.1876 |
| 1.7978 | 9.0 | 7839 | 2.2011 |
| 1.7602 | 10.0 | 8710 | 2.1280 |
| 1.7166 | 11.0 | 9581 | 2.1644 |
| 1.6651 | 12.0 | 10452 | 2.1246 |
| 1.6141 | 13.0 | 11323 | 2.1264 |
| 1.5759 | 14.0 | 12194 | 2.1143 |
| 1.5478 | 15.0 | 13065 | 2.0982 |
| 1.5311 | 16.0 | 13936 | 2.0993 |
| 1.5187 | 17.0 | 14807 | 2.0979 |
| 1.4809 | 18.0 | 15678 | 2.0338 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
LysandreJik/torch-model-2 | 136b9af8ab587b896374b45a2e26784d3356df2b | 2021-06-28T13:57:09.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | LysandreJik | null | LysandreJik/torch-model-2 | 3 | null | transformers | 20,739 | Entry not found |
JimmyHodl/DialoGPT-medium | eb288b53a6f7298fb153da0c95aec284fa991fc9 | 2022-01-31T18:45:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | JimmyHodl | null | JimmyHodl/DialoGPT-medium | 3 | null | transformers | 20,740 | ---
tags:
- conversational
---
# Jimmy's character DialoGPT model |
Jipski/Flos_gpt-2_erw-02 | a6a485e2db5f707271f115cd8db3b8c2832a7373 | 2021-12-05T13:52:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Jipski | null | Jipski/Flos_gpt-2_erw-02 | 3 | null | transformers | 20,741 | Entry not found |
Jipski/Flos_gpt-2_erw | 3bd4949d3679a7aa9d7d86790a0a5557d4f36ade | 2021-11-27T13:15:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Jipski | null | Jipski/Flos_gpt-2_erw | 3 | null | transformers | 20,742 | Entry not found |
Jipski/MegStuart_gpt-2 | 99b41e2234d6580a28c05a0bd690b0b855233eb7 | 2021-12-05T14:56:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Jipski | null | Jipski/MegStuart_gpt-2 | 3 | null | transformers | 20,743 | Entry not found |
Jonesy/FG_OLD | db2eecb005d564af6090379445101132e9ee8d21 | 2022-04-25T23:50:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Jonesy | null | Jonesy/FG_OLD | 3 | null | transformers | 20,744 | ---
tags:
- conversational
---
# Family Guy DialoGPT Model |
Jongwon/t5-tiny-it | 0aea32e3fdf4ba40913ade3eb8296160c8283897 | 2021-09-23T07:03:31.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Jongwon | null | Jongwon/t5-tiny-it | 3 | null | transformers | 20,745 | Entry not found |
JorisCos/ConvTasNet_Libri3Mix_sepclean_8k | d3182db7cc3ba70709f1d9c186bbc0df98e6c033 | 2021-09-23T15:49:06.000Z | [
"pytorch",
"dataset:Libri3Mix",
"dataset:sep_clean",
"asteroid",
"audio",
"ConvTasNet",
"audio-to-audio",
"license:cc-by-sa-4.0"
] | audio-to-audio | false | JorisCos | null | JorisCos/ConvTasNet_Libri3Mix_sepclean_8k | 3 | null | asteroid | 20,746 | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri3Mix
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepclean_8k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results :
On Libri3Mix min test set :
```yaml
si_sdr: 8.581797049575108
si_sdr_imp: 11.977037288467368
sdr: 9.305885208641385
sdr_imp: 12.3943409734845
sir: 16.42030534048559
sir_imp: 19.508759460400984
sar: 10.641943911079238
sar_imp: -56.4345187842095
stoi: 0.8365148408724333
stoi_imp: 0.24401766199806396
```
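For illustration only (not part of the original card): a minimal separation sketch, assuming the checkpoint can be loaded through Asteroid's `BaseModel.from_pretrained` and that the forward pass takes an 8 kHz waveform tensor of shape (batch, time):

```python
import torch
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/ConvTasNet_Libri3Mix_sepclean_8k")

# Three seconds of a dummy 8 kHz mono mixture, shape (batch, time).
mixture = torch.randn(1, 8000 * 3)
with torch.no_grad():
    est_sources = model(mixture)  # expected shape: (batch, n_src=3, time)
print(est_sources.shape)
```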
License notice:
This work "ConvTasNet_Libri3Mix_sepclean_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepclean_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris. |
Jzz/FidicBERT | 6fe124ec7bc99c21b0b88ed3de0733c80fef77a1 | 2021-09-16T03:15:55.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jzz | null | Jzz/FidicBERT | 3 | null | transformers | 20,747 | FidicBERT is a pre-trained language model to analyze legal text. It is built by further training the Roberta language model in the legal domain, using an extensive legal and contract corpus and thereby fine-tuning for classifying and clustering contractual documents.
|
KAIHATSU/DialoGPT-small-rick | a853790b874304ce8b10348b3b64e0cc68445684 | 2021-09-08T12:53:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | KAIHATSU | null | KAIHATSU/DialoGPT-small-rick | 3 | null | transformers | 20,748 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
KBLab/electra-base-swedish-cased-discriminator | 684bb70e503707559a168482c05bde1c2dbf75c9 | 2021-01-20T13:15:09.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
] | null | false | KBLab | null | KBLab/electra-base-swedish-cased-discriminator | 3 | null | transformers | 20,749 | Entry not found |
KBLab/wav2vec2-base-voxpopuli-sv-swedish | c86f21444eaf49ef77b477d14760880c7c60b464 | 2021-07-05T14:29:11.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:common_voice",
"dataset:NST Swedish ASR Database",
"transformers",
"audio",
"speech",
"voxpopuli",
"license:cc-by-nc-4.0",
"model-index"
] | automatic-speech-recognition | false | KBLab | null | KBLab/wav2vec2-base-voxpopuli-sv-swedish | 3 | null | transformers | 20,750 | ---
language: sv-SE
datasets:
- common_voice
- NST Swedish ASR Database
metrics:
- wer
#- cer
tags:
- audio
- automatic-speech-recognition
- speech
- voxpopuli
license: cc-by-nc-4.0
model-index:
- name: Wav2vec 2.0 base VoxPopuli-sv swedish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: NST Swedish ASR Database
metrics:
- name: Test WER
type: wer
value: 5.619804368919309
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 19.145252414798616
---
# Wav2vec 2.0 base-voxpopuli-sv-swedish
Fine-tuned version of Facebook's [VoxPopuli-sv base](https://huggingface.co/facebook/wav2vec2-base-sv-voxpopuli) model, trained on NST and Common Voice data. Evaluation without a language model gives the following: WER on the NST + Common Voice test set (2% of total sentences) is **5.62%**; WER on the Common Voice test set is **19.15%**.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-base-voxpopuli-sv-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-base-voxpopuli-sv-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
``` |
Kail91/DialoGPT-small-PeraltaBot | b74d64639a7658a92cd3d65a5e5b3d22067dfe8f | 2021-09-22T14:49:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Kail91 | null | Kail91/DialoGPT-small-PeraltaBot | 3 | null | transformers | 20,751 | ---
tags:
- conversational
---
# Peralta DialoGPT Model |
KakoSi/opaazzi | 0c378d599198f67d8211c3977e119221051792af | 2021-07-16T09:00:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | KakoSi | null | KakoSi/opaazzi | 3 | null | transformers | 20,752 | ---
tags:
- conversational
---
# My Awesome Model |
KamrusSamad/bnbert | f760e85bfbf14cedce82535cc8cbafea8779c04d | 2022-03-15T20:13:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | KamrusSamad | null | KamrusSamad/bnbert | 3 | null | transformers | 20,753 | Entry not found |
KamrusSamad/tiny_A-2_H-2 | 9378d6d9a7f346f27e375be0f9b1d3eeebcee59a | 2022-03-24T19:41:41.000Z | [
"pytorch",
"tf",
"bert",
"fill-mask",
"transformers",
"license:other",
"autotrain_compatible"
] | fill-mask | false | KamrusSamad | null | KamrusSamad/tiny_A-2_H-2 | 3 | null | transformers | 20,754 | ---
license: other
---
|
Karimfayed/pegasus-SAMSum | d5d71d926ace081828ef8e53633ed083a2b64400 | 2021-07-08T00:46:31.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Karimfayed | null | Karimfayed/pegasus-SAMSum | 3 | null | transformers | 20,755 | Entry not found |
Keqing/Keqing-Siesta | 36c38a11bb607e219dd448240d474287a45fc601 | 2022-01-23T06:16:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Keqing | null | Keqing/Keqing-Siesta | 3 | null | transformers | 20,756 | ---
tags:
- conversational
---
# Siesta |
Khanh/xlm-roberta-base-finetuned-squad | bcb1b08354653806c5e3e23c91172c4db54c3fff | 2022-01-04T17:49:35.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | Khanh | null | Khanh/xlm-roberta-base-finetuned-squad | 3 | null | transformers | 20,757 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-squad
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7665 | 1.0 | 2295 | 0.5231 |
| 0.5236 | 2.0 | 4590 | 0.5539 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Khu1998/clog-clo-model | fcff0b25cc13e666768b5e583e69e9e480f03e6d | 2021-06-13T17:22:02.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Khu1998 | null | Khu1998/clog-clo-model | 3 | null | transformers | 20,758 | Entry not found |
KoichiYasuoka/roberta-base-thai-char | b9666835e51fc88ee633916c0df255e4e1bd9191 | 2022-02-19T07:37:57.000Z | [
"pytorch",
"roberta",
"fill-mask",
"th",
"transformers",
"thai",
"masked-lm",
"wikipedia",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-thai-char | 3 | null | transformers | 20,759 | ---
language:
- "th"
tags:
- "thai"
- "masked-lm"
- "wikipedia"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
---
# roberta-base-thai-char
## Model Description
This is a RoBERTa model pre-trained on Thai Wikipedia texts with character-wise embeddings to use BertTokenizerFast. You can fine-tune `roberta-base-thai-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-char")
```
|
Konstantinos/BERTaTweetGR | 5c863ed9e0afd19aa5309c3828cc09e01805bc5b | 2021-07-05T09:19:12.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"el",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Konstantinos | null | Konstantinos/BERTaTweetGR | 3 | null | transformers | 20,760 | ---
language: el
widget:
- text: "μπαινω στο <mask> και τι να δω."
---
# A lite RoBERTa fill-mask model trained mostly on Greek tweets
The training dataset of this model consists of 23 million tweets in Greek, from approximately 5000 users in total, spanning from 2008 to 2018.
The model has been trained to support the work for the paper [Multimodal Hate Speech Detection in Greek Social Media](https://www.mdpi.com/2414-4088/5/7/34)
## Load the pretrained model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Konstantinos/BERTaTweetGR")
model = AutoModel.from_pretrained("Konstantinos/BERTaTweetGR")
```
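For a quick check of the masked-language-modelling head, a hedged sketch using the fill-mask pipeline with the widget sentence from this card:
```python
from transformers import pipeline

# Fill in the masked token of the example tweet (Greek, from the widget above)
fill_mask = pipeline("fill-mask", model="Konstantinos/BERTaTweetGR")
print(fill_mask("μπαινω στο <mask> και τι να δω."))
```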
|
Kyuyoung11/haremotions-v2 | 478738527b6936d35da7512777342e2ae24a82f0 | 2021-06-14T06:50:37.000Z | [
"pytorch",
"electra",
"transformers"
] | null | false | Kyuyoung11 | null | Kyuyoung11/haremotions-v2 | 3 | null | transformers | 20,761 | |
Leisa/distilbert-base-uncased-finetuned-imdb | 1b7a7ad16a0a54f5aa909d3f2802a3c92ab900ff | 2021-11-20T12:12:24.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Leisa | null | Leisa/distilbert-base-uncased-finetuned-imdb | 3 | null | transformers | 20,762 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5561 | 1.0 | 782 | 2.3738 |
| 2.4474 | 2.0 | 1564 | 2.3108 |
| 2.4037 | 3.0 | 2346 | 2.3017 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Leisa/dummy-model | b5185ed3daffb2a7d0b996986a984cba792be3d1 | 2021-11-08T08:42:16.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Leisa | null | Leisa/dummy-model | 3 | null | transformers | 20,763 | Entry not found |
LenaSchmidt/distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv | 5f7aeb93d7979204d0b921a6df6ec1604c9d71b8 | 2022-02-18T16:02:10.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | LenaSchmidt | null | LenaSchmidt/distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv | 3 | null | transformers | 20,764 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.25 | 1.0 | 1273 | 0.8052 |
| 1.1199 | 2.0 | 2546 | 0.7950 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
LenaT/distilgpt2-finetuned-wikitext2 | 1555eade3332b015f6a0210a8e070618c2f5549a | 2021-10-05T12:32:43.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | LenaT | null | LenaT/distilgpt2-finetuned-wikitext2 | 3 | null | transformers | 20,765 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
Leostronkest/DialoGPT | bd17d5433a2781985751bf8f4ed059917d18d647 | 2022-02-15T21:59:14.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"arxiv:1911.00536",
"transformers",
"conversational",
"license:mit"
] | conversational | false | Leostronkest | null | Leostronkest/DialoGPT | 3 | null | transformers | 20,766 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multiturn conversations.
The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogue from Reddit discussion thread.
* Multi-turn generation examples from an interactive environment:
|Role | Response |
|---------|--------|
|User | Does money buy happiness? |
| Bot | Depends how much money you spend on it .|
|User | What is the best way to buy happiness ? |
| Bot | You just have to be a millionaire by your early 20s, then you can be happy . |
|User |This is so difficult ! |
| Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print the last output tokens from the bot
print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
LeverageX/finbert-wechsel-korean | d386d583bc623aa8557b663aa37540acad25dfbf | 2022-01-18T17:40:02.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | LeverageX | null | LeverageX/finbert-wechsel-korean | 3 | null | transformers | 20,767 | Entry not found |
LeverageX/scibert-wechsel-korean | 7b949b913aab3426dd2a7616da9ad1e3b47c4648 | 2022-01-08T12:14:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | LeverageX | null | LeverageX/scibert-wechsel-korean | 3 | null | transformers | 20,768 | # scibert-wechsel-korean
SciBERT (🇺🇸) converted into Korean (🇰🇷) using the WECHSEL technique.
### Description
- SciBERT is trained on papers from the corpus of semanticscholar.org. Corpus size is 1.14M papers, 3.1B tokens.
- WECHSEL converts the embedding layer's subword tokens from the source language to the target language.
- SciBERT, trained on English, is converted into Korean using the WECHSEL technique.
- The Korean tokenizer is taken from the KLUE PLMs' tokenizers due to its similar vocabulary size (32,000) and performance.
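A minimal loading sketch, assuming the standard `transformers` auto classes and the BERT-style `[MASK]` token; the Korean sentence is illustrative only.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("LeverageX/scibert-wechsel-korean")
model = AutoModelForMaskedLM.from_pretrained("LeverageX/scibert-wechsel-korean")

# "This paper proposes a new [MASK] method." -- illustrative Korean sentence
inputs = tokenizer("이 논문은 새로운 [MASK] 방법을 제안한다.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Top prediction for the masked position
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```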
### Reference
- [Scibert](https://github.com/allenai/scibert)
- [WECHSEL](https://github.com/CPJKU/wechsel)
- [Korean Language Understanding Evaluation](https://github.com/KLUE-benchmark/KLUE) |
LucasS/bigbirdABSA | 0b7e574ec92e00c02887cc92ee4804f4151158e2 | 2021-09-03T00:34:10.000Z | [
"pytorch",
"big_bird",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | LucasS | null | LucasS/bigbirdABSA | 3 | null | transformers | 20,769 | Entry not found |
M-FAC/bert-tiny-finetuned-squadv2 | 75ea4dc51c61107e50485e8d94d7724883b0808f | 2021-12-13T08:14:11.000Z | [
"pytorch",
"bert",
"question-answering",
"arxiv:2107.03356",
"transformers",
"autotrain_compatible"
] | question-answering | false | M-FAC | null | M-FAC/bert-tiny-finetuned-squadv2 | 3 | null | transformers | 20,770 | # BERT-tiny model finetuned with M-FAC
This model is finetuned on SQuAD version 2 dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering](https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on SQuAD version 2 validation set:
```bash
exact_match = 50.29
f1 = 52.43
```
Mean and standard deviation for 5 runs on SQuAD version 2 validation set:
| | Exact Match | F1 |
|:----:|:-----------:|:----:|
| Adam | 48.41 ± 0.57 | 49.99 ± 0.54 |
| M-FAC | 49.80 ± 0.43 | 52.18 ± 0.20 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_qa.py \
--seed 42 \
--model_name_or_path prajjwal1/bert-tiny \
--dataset_name squad_v2 \
--version_2_with_negative \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 1e-4 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
|
ML-ass/german_encoder | 91131f8a9e6156fd232d16ca3f068b974e9130c3 | 2021-07-02T15:54:38.000Z | [
"pytorch",
"bert",
"pretraining",
"transformers"
] | null | false | ML-ass | null | ML-ass/german_encoder | 3 | null | transformers | 20,771 | Entry not found |
MMG/bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad | bc11a4b367f4bd580912620120a18afdf8e925bc | 2021-12-15T12:03:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"es",
"dataset:squad_es",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | MMG | null | MMG/bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad | 3 | null | transformers | 20,772 | ---
tags:
- generated_from_trainer
datasets:
- squad_es
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad
results: []
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad
This model is a fine-tuned version of [MMG/bert-base-spanish-wwm-cased-finetuned-sqac](https://huggingface.co/MMG/bert-base-spanish-wwm-cased-finetuned-sqac) on the squad_es dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5325
- {'exact_match': 60.30274361400189, 'f1': 77.01962587890856}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
MMG/bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac | b6913cf9d995b41054718b7e9a9f5f9984f334fa | 2021-12-27T17:33:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"es",
"dataset:sqac",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | MMG | null | MMG/bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac | 3 | null | transformers | 20,773 | ---
tags:
- generated_from_trainer
datasets:
- sqac
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac
results: []
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac
This model is a fine-tuned version of [ockapuh/bert-base-spanish-wwm-cased-finetuned-squad2-es](https://huggingface.co/ockapuh/bert-base-spanish-wwm-cased-finetuned-squad2-es) on the sqac dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9263
- {'exact_match': 65.55793991416309, 'f1': 82.72322701572416}
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
MYX4567/dummy-model | 39748a079ea57a70301237b9e7f58b874d4d0db6 | 2021-07-13T07:05:41.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | MYX4567 | null | MYX4567/dummy-model | 3 | null | transformers | 20,774 | Entry not found |
Mads/xlsr-demo | 32440ca528cf196d11dd44c428f483dcd45708a0 | 2021-07-05T15:30:59.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Mads | null | Mads/xlsr-demo | 3 | null | transformers | 20,775 | Entry not found |
Maniac/wav2vec2-xls-r-60-urdu | 33a4f1ebdaff3f6c5ad28e91ea4d52df71fabe30 | 2022-01-28T13:03:37.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ur",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Maniac | null | Maniac/wav2vec2-xls-r-60-urdu | 3 | null | transformers | 20,776 | ---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8433
- Wer: 0.9852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.468 | 166.67 | 500 | 3.0262 | 1.0035 |
| 0.0572 | 333.33 | 1000 | 3.5352 | 0.9721 |
| 0.0209 | 500.0 | 1500 | 3.7266 | 0.9834 |
| 0.0092 | 666.67 | 2000 | 3.8433 | 0.9852 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
MarcBrun/ixambert-finetuned-squad-eu | 9f81c71a74ac6dbb92bb1f18c55bb018bc50d4c5 | 2022-02-23T20:21:21.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"es",
"eu",
"transformers",
"autotrain_compatible"
] | question-answering | false | MarcBrun | null | MarcBrun/ixambert-finetuned-squad-eu | 3 | null | transformers | 20,777 | ---
language:
- en
- es
- eu
widget:
- text: "When was Florence Nightingale born?"
context: "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820."
example_title: "English"
- text: "¿Por qué provincias pasa el Tajo?"
context: "El Tajo es el río más largo de la península ibérica, a la que atraviesa en su parte central, siguiendo un rumbo este-oeste, con una leve inclinación hacia el suroeste, que se acentúa cuando llega a Portugal, donde recibe el nombre de Tejo.
Nace en los montes Universales, en la sierra de Albarracín, sobre la rama occidental del sistema Ibérico y, después de recorrer 1007 km, llega al océano Atlántico en la ciudad de Lisboa. En su desembocadura forma el estuario del mar de la Paja, en el que vierte un caudal medio de 456 m³/s. En sus primeros 816 km atraviesa España, donde discurre por cuatro comunidades autónomas (Aragón, Castilla-La Mancha, Madrid y Extremadura) y un total de seis provincias (Teruel, Guadalajara, Cuenca, Madrid, Toledo y Cáceres)."
example_title: "Español"
- text: "Zer beste izenak ditu Tartalo?"
context: "Tartalo euskal mitologiako izaki begibakar artzain erraldoia da. Tartalo izena zenbait euskal hizkeratan herskari-bustidurarekin ahoskatu ohi denez, horrelaxe ere idazten da batzuetan: Ttarttalo. Euskal Herriko zenbait tokitan, Torto edo Anxo ere esaten diote."
example_title: "Euskara"
---
# ixambert-base-cased finetuned for QA
This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on an experimental version of SQuAD1.1 in Basque (1/3 the size of the original SQuAD1.1), which is able to answer basic factual questions.
## Overview
* **Language model:** ixambert-base-cased
* **Languages:** English, Spanish and Basque
* **Downstream task:** Extractive QA
* **Training data:** Experimental SQuAD1.1 in Basque
* **Eval data:** Experimental SQuAD1.1 in Basque
* **Infrastructure:** 1x GeForce RTX 2080
## Outputs
The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score representing the probability that that span of text is the correct answer. For example:
```python
{'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'}
```
## How to use
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "MarcBrun/ixambert-finetuned-squad-eu"
# To get predictions
context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820"
question = "When was Florence Nightingale born?"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)
pred = qa(question=question,context=context)
# To load the model and tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Hyperparameters
```
batch_size = 8
n_epochs = 3
learning_rate = 2e-5
optimizer = AdamW
lr_schedule = linear
max_seq_len = 384
doc_stride = 128
``` |
Marxav/wav2vec2-large-xlsr-53-breton | 9783f00d56032a35a741b0dedbd12e91dcd868db | 2021-07-05T15:34:21.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"br",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Marxav | null | Marxav/wav2vec2-large-xlsr-53-breton | 3 | null | transformers | 20,778 | ---
language: br
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Breton by Marxav
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice br
type: common_voice
args: br
metrics:
- name: Test WER
type: wer
value: 43.43
---
# wav2vec2-large-xlsr-53-breton
The model can be used directly (without a language model) as follows:
```python
import re
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
lang = "br"
test_dataset = load_dataset("common_voice", lang, split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Marxav/wav2vec2-large-xlsr-53-breton")
model = Wav2Vec2ForCTC.from_pretrained("Marxav/wav2vec2-large-xlsr-53-breton")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\\,\,\?\.\!\;\:\"\“\%\”\�\(\)\/\«\»\½\…]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
batch["sentence"] = re.sub("ʼ", "'", batch["sentence"])
batch["sentence"] = re.sub("’", "'", batch["sentence"])
batch["sentence"] = re.sub('‘', "'", batch["sentence"])
return batch
nb_samples = 2
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:nb_samples], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:nb_samples])
```
The above code leads to the following prediction for the first two samples:
* Prediction: ["neller ket dont a-benn eus netra la vez ser merc'hed evel sich", 'an eil hag egile']
* Reference: ["N'haller ket dont a-benn eus netra pa vezer nec'het evel-se.", 'An eil hag egile.']
The model can be evaluated as follows on the Breton test data of Common Voice.
```python
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
lang = 'br'
test_dataset = load_dataset("common_voice", lang, split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained('Marxav/wav2vec2-large-xlsr-53-breton')
model = Wav2Vec2ForCTC.from_pretrained('Marxav/wav2vec2-large-xlsr-53-breton')
model.to("cuda")
chars_to_ignore_regex = '[\\,\,\?\.\!\;\:\"\“\%\”\�\(\)\/\«\»\½\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
batch["sentence"] = re.sub("ʼ", "'", batch["sentence"])
batch["sentence"] = re.sub("’", "'", batch["sentence"])
batch["sentence"] = re.sub('‘', "'", batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 43.43%
## Training
The Common Voice `train`, `validation` datasets were used for training. |
Matthijsvanhof/bert-base-dutch-cased-finetuned-mBERT | 5b6569c1c87a1a1d7bb330a18c73eafa4e9cc65c | 2021-11-28T18:03:02.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Matthijsvanhof | null | Matthijsvanhof/bert-base-dutch-cased-finetuned-mBERT | 3 | null | transformers | 20,779 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-dutch-cased-finetuned-mBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-dutch-cased-finetuned-mBERT
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0898
- Precision: 0.7255
- Recall: 0.7255
- F1: 0.7255
- Accuracy: 0.9758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1603 | 1.0 | 533 | 0.0928 | 0.6896 | 0.6962 | 0.6929 | 0.9742 |
| 0.0832 | 2.0 | 1066 | 0.0898 | 0.7255 | 0.7255 | 0.7255 | 0.9758 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
|
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian | aed75732e1cc15b7bc3a91821f1c1624966d0bcd | 2021-07-05T16:05:44.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ka",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | MehdiHosseiniMoghadam | null | MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian | 3 | null | transformers | 20,780 | ---
language: ka
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-large-xlsr-53-Georgian by Mehdi Hosseini Moghadam
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ka
type: common_voice
args: ka
metrics:
- name: Test WER
type: wer
value: 60.504024
---
# wav2vec2-large-xlsr-53-Georgian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Georgian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ka", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Georgian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ka", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 60.504024 %
## Training
The Common Voice `train`, `validation` datasets were used for training. |
MickyMike/codebert-c | 029928b2be6428d46c69c43f0dcd0f991ae36da9 | 2021-11-01T02:04:30.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | MickyMike | null | MickyMike/codebert-c | 3 | null | transformers | 20,781 | Entry not found |
Midhunkrishna/DialoGPT-small-bjk | d6c47dd72a94516de777af15bf1a9de31d857551 | 2021-09-03T11:58:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Midhunkrishna | null | Midhunkrishna/DialoGPT-small-bjk | 3 | null | transformers | 20,782 | ---
tags:
- conversational
--- |
Mierln/SmartHarry | bcf61e9202d93b3f3da4c251c094b60314063f40 | 2021-08-27T04:10:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Mierln | null | Mierln/SmartHarry | 3 | null | transformers | 20,783 | ---
tags:
- conversational
---
# harry |
Mirjam/test-finetuned | e4075350f1f853ba6b7a73d12aadc975519f0afe | 2022-01-20T15:14:18.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Mirjam | null | Mirjam/test-finetuned | 3 | null | transformers | 20,784 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: test-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-finetuned
This model is a fine-tuned version of [yhavinga/t5-v1.1-base-dutch-cnn-test](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cnn-test) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 1 | nan | 33.8462 | 31.746 | 30.7692 | 30.7692 | 86.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
MistahCase/distilroberta-base-testingSB | db0fea84dc13eb888b9923d4643bd42c48148e28 | 2021-11-20T18:25:06.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | MistahCase | null | MistahCase/distilroberta-base-testingSB | 3 | null | transformers | 20,785 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-testingSB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-testingSB
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on a company-specific Danish dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0403
## Model description
Customer-specific model used to embed asset management work orders in Danish
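A minimal sketch of one way to obtain such embeddings; mean pooling over the last hidden state is an assumption here, not a documented choice, and the Danish work-order texts are illustrative only.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("MistahCase/distilroberta-base-testingSB")
model = AutoModel.from_pretrained("MistahCase/distilroberta-base-testingSB")

# Illustrative Danish work-order texts
texts = ["Udskift pakning på pumpe", "Smør lejer på transportbånd"]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, dim)

# Mean-pool over non-padding tokens to get one embedding per work order
mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)
```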
## Intended uses & limitations
Customer-specific and trained for unsupervised categorization tasks
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Epoch | Training Loss | Validation Loss |
|:-----:|:-------------:|:---------------:|
| 1 | 0.988500 | 1.056376 |
| 2 | 0.996300 | 1.027803 |
| 3 | 0.990300 | 1.040270 |
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.98850 | 1.0 | 1461 | 1.5211 |
| 1.3179 | 2.0 | 2922 | 1.3314 |
| 1.1931 | 3.0 | 4383 | 1.2530 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Motahar/bert-base-cased-mahtab | 0d32e6c04771f128fbbe7403ca56ca252f82fb97 | 2021-12-30T16:24:19.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Motahar | null | Motahar/bert-base-cased-mahtab | 3 | null | transformers | 20,786 | Entry not found |
MrE/DialoGPT-medium-SARGE | cbe148bcd61c9311fcc906105949ebad846b10f5 | 2021-10-04T22:19:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MrE | null | MrE/DialoGPT-medium-SARGE | 3 | null | transformers | 20,787 | ---
tags:
- conversational
---
# Sarge |
Muennighoff/SGPT-1.3B-mean-nli | ca9c84a839fd4f59e6ef70265cd83e9d3af50c01 | 2022-02-21T06:17:16.000Z | [
"pytorch",
"gpt_neo",
"feature-extraction",
"arxiv:2202.08904",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | Muennighoff | null | Muennighoff/SGPT-1.3B-mean-nli | 3 | 1 | sentence-transformers | 20,788 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# SGPT-1.3B-mean-nli
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
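As a quick complement to the codebase, a minimal sketch using the `sentence-transformers` library (the repository above remains the reference implementation; the sentences are illustrative only):
```python
from sentence_transformers import SentenceTransformer, util

# Load the model with its mean-pooling configuration (see the architecture below)
model = SentenceTransformer("Muennighoff/SGPT-1.3B-mean-nli")

# Illustrative sentence pair
embeddings = model.encode(["A man is eating food.", "Someone is having a meal."])
print(util.cos_sim(embeddings[0], embeddings[1]))
```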
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 93941 with parameters:
```
{'batch_size': 6}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 9394,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 9395,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
Muennighoff/SGPT-125M-lasttoken-nli | 5f48a2059f3684f5deaa752dca56694d63a154e7 | 2022-02-21T06:18:46.000Z | [
"pytorch",
"gpt_neo",
"feature-extraction",
"arxiv:2202.08904",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | Muennighoff | null | Muennighoff/SGPT-125M-lasttoken-nli | 3 | null | sentence-transformers | 20,789 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# SGPT-125M-lasttoken-nli
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
MultiBertGunjanPatrick/multiberts-seed-0-1100k | b1d25bf7c74c6fabf47713e77a48002d1fe83765 | 2021-10-04T04:57:16.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
] | null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-1100k | 3 | null | transformers | 20,790 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 1100k (uncased)
Intermediate checkpoint 1100k of the MultiBERTs Seed 0 run: a BERT model pretrained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-1100k')
model = BertModel.from_pretrained("multiberts-seed-0-1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
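A schematic sketch of this masking rule, for illustration only (it is not the original preprocessing code and ignores subword details):
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """BERT-style masking: 15% of tokens, split 80% [MASK] / 10% random / 10% unchanged."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)                       # this position must be predicted
            r = random.random()
            if r < 0.8:
                masked.append("[MASK]")              # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))  # 10%: replace with a random token
            else:
                masked.append(tok)                   # 10%: keep the original token
        else:
            labels.append(None)                      # not a prediction target
            masked.append(tok)
    return masked, labels

print(mask_tokens("the quick brown fox jumps over the lazy dog".split(),
                  vocab=["the", "cat", "runs", "blue"]))
```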
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-0-160k | 704d675dc040744dfb8a4132f33df05ba71b0feb | 2021-10-04T04:55:48.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
] | null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-160k | 3 | null | transformers | 20,791 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 160k (uncased)
Seed 0 intermediate checkpoint 160k of the MultiBERTs (pretrained BERT) model, pretrained on English language data using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means the model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# Load this intermediate checkpoint from the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-160k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-160k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
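You can also query the MLM objective directly through the `fill-mask` pipeline. The following is a minimal sketch, assuming the checkpoint exposes its masked-language-modeling head and using this model's repository id; predictions from an intermediate checkpoint may be noisier than those of the final model:
```python
from transformers import pipeline

# Load the intermediate checkpoint together with its MLM head.
unmasker = pipeline('fill-mask', model='MultiBertGunjanPatrick/multiberts-seed-0-160k')

# The pipeline ranks vocabulary entries for the [MASK] position.
for prediction in unmasker("Paris is the [MASK] of France."):
    print(prediction['token_str'], round(prediction['score'], 3))
```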
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-0-2000k | 48fd7cea92185792a7eced6404ea1f9b11dc861f | 2021-10-04T04:58:25.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
] | null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-2000k | 3 | null | transformers | 20,792 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 2000k (uncased)
Seed 0 intermediate checkpoint 2000k of the MultiBERTs (pretrained BERT) model, pretrained on English language data using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means the model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# Load this intermediate checkpoint from the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-2000k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-2000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
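For example, passing a sentence pair to the tokenizer reproduces this layout. This is a sketch that uses this model's repository id; the exact token ids depend on the learned vocabulary:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-2000k")

# The tokenizer inserts [CLS] and [SEP] around the two segments automatically.
encoding = tokenizer("The cat sat on the mat.", "It fell asleep there.")
print(tokenizer.decode(encoding["input_ids"]))

# token_type_ids mark whether a token belongs to sentence A (0) or sentence B (1).
print(encoding["token_type_ids"])
```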
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-0-60k | 3ee7953568304a8e5bac51c36d601988b8b0c857 | 2021-10-04T04:55:12.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
] | null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-60k | 3 | null | transformers | 20,793 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 60k (uncased)
Seed 0 intermediate checkpoint 60k of the MultiBERTs (pretrained BERT) model, pretrained on English language data using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means the model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# Load this intermediate checkpoint from the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-60k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-60k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
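As a rough sketch, the `DataCollatorForLanguageModeling` utility in `transformers` implements the same 15% selection with the 80/10/10 replacement scheme, so the corruption can be reproduced on a toy sentence (this model's repository id is assumed; the sampled masks change on every call):
```python
from transformers import BertTokenizer, DataCollatorForLanguageModeling

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-60k")

# 15% of tokens are selected; 80% become [MASK], 10% a random token, 10% stay unchanged.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

batch = collator([tokenizer("The quick brown fox jumps over the lazy dog.")])
print(tokenizer.decode(batch["input_ids"][0]))  # corrupted input sentence
print(batch["labels"][0])                       # -100 except at the selected positions
```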
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-0-800k | ce5a4a013364efae107fe7b5eee9b531f6cb3957 | 2021-10-04T04:56:53.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
] | null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-800k | 3 | null | transformers | 20,794 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 800k (uncased)
Seed 0 intermediate checkpoint 800k of the MultiBERTs (pretrained BERT) model, pretrained on English language data using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means the model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
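A minimal sketch of the NSP objective at inference time, assuming the checkpoint exposes its pretraining heads and using this model's repository id:
```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

repo = "MultiBertGunjanPatrick/multiberts-seed-0-800k"
tokenizer = BertTokenizer.from_pretrained(repo)
model = BertForNextSentencePrediction.from_pretrained(repo)

prompt = "The children walked to the park."
next_sentence = "They played on the swings for an hour."

encoding = tokenizer(prompt, next_sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits

# Index 0 scores "sentence B follows sentence A", index 1 scores "random sentence".
print(torch.softmax(logits, dim=-1))
```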
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# Load this intermediate checkpoint from the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-800k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-0-80k | 5432f4a7f3e001014405ba7068f372c5c0637a43 | 2021-10-04T04:55:19.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
] | null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-80k | 3 | null | transformers | 20,795 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 80k (uncased)
Seed 0 intermediate checkpoint 80k of the MultiBERTs (pretrained BERT) model, pretrained on English language data using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means the model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
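As a sketch of that feature-based recipe, the pooled `[CLS]` representation can be fed to an off-the-shelf classifier. The toy sentences and labels below are purely illustrative, and this model's repository id is assumed:
```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertTokenizer, BertModel

repo = "MultiBertGunjanPatrick/multiberts-seed-0-80k"
tokenizer = BertTokenizer.from_pretrained(repo)
model = BertModel.from_pretrained(repo)
model.eval()

texts = ["great movie", "terrible movie"]  # toy labelled dataset
labels = [1, 0]

with torch.no_grad():
    enc = tokenizer(texts, padding=True, return_tensors="pt")
    features = model(**enc).pooler_output.numpy()  # one pooled [CLS] vector per sentence

clf = LogisticRegression().fit(features, labels)   # standard classifier on frozen features
print(clf.predict(features))
```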
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# Load this intermediate checkpoint from the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-80k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-80k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-1-0k | dd0780f28417edfd551adab14c03a59004abf960 | 2021-10-04T04:58:32.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-1",
"license:apache-2.0"
] | null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-1-0k | 3 | null | transformers | 20,796 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-1
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 1 Checkpoint 0k (uncased)
Seed 1 intermediate checkpoint 0k of the MultiBERTs (pretrained BERT) model, pretrained on English language data using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means the model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# Load this intermediate checkpoint from the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-0k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-0k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
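The returned object bundles the representations that downstream tasks typically consume. The following lines continue the snippet above (they reuse its `output` variable); the 768 dimension assumes the base-size configuration used by MultiBERTs:
```python
# Token-level representations: one vector per input token.
token_embeddings = output.last_hidden_state  # shape (batch_size, sequence_length, 768)

# Pooled representation derived from the [CLS] token, often used for sentence-level tasks.
sentence_embedding = output.pooler_output    # shape (batch_size, 768)

print(token_embeddings.shape, sentence_embedding.shape)
```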
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-1-1100k | a547cc0d556572093f0f884c1c5006d1fe449be6 | 2021-10-04T05:00:55.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-1",
"license:apache-2.0"
] | null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-1-1100k | 3 | null | transformers | 20,797 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-1
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 1 Checkpoint 1100k (uncased)
Seed 1 intermediate checkpoint 1100k of the MultiBERTs (pretrained BERT) model, pretrained on English language data using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means the model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# Load this intermediate checkpoint from the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1100k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
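The original runs used TPUs, but an equivalent optimizer and schedule can be sketched in PyTorch. This is illustrative only, not the original training code; AdamW stands in for Adam with decoupled weight decay, and this model's repository id is assumed:
```python
import torch
from transformers import BertForMaskedLM, get_linear_schedule_with_warmup

model = BertForMaskedLM.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1100k")

# Learning rate 1e-4, betas (0.9, 0.999) and weight decay 0.01, as described above.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps followed by linear decay over the two million training steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```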
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-1-1600k | 091f2a471d2bd8a0a275888a1f54bdec9cfdcd18 | 2021-10-04T05:01:31.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-1",
"license:apache-2.0"
] | null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-1-1600k | 3 | null | transformers | 20,798 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-1
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 1 Checkpoint 1600k (uncased)
Seed 1 intermediate checkpoint 1600k of the MultiBERTs (pretrained BERT) model, pretrained on English language data using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means the model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# Load this intermediate checkpoint from the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1600k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
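To query the MLM head directly rather than the bare encoder, the checkpoint can also be loaded with `BertForMaskedLM`. This is a sketch, assuming the MLM head weights are available in this model's repository; an intermediate checkpoint's guesses may be rough:
```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

repo = "MultiBertGunjanPatrick/multiberts-seed-1-1600k"
tokenizer = BertTokenizer.from_pretrained(repo)
model = BertForMaskedLM.from_pretrained(repo)

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and take the highest-scoring vocabulary entry.
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```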
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-1-1700k | d2ff14add4acfe0c6e86db91c3673e8ddfa8e1d1 | 2021-10-04T05:01:38.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-1",
"license:apache-2.0"
] | null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-1-1700k | 3 | null | transformers | 20,799 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-1
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 1 Checkpoint 1700k (uncased)
Seed 1 intermediate checkpoint 1700k of the MultiBERTs (pretrained BERT) model, pretrained on English language data using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means the model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and this intermediate checkpoint from the Hugging Face Hub.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1700k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1700k")

text = "Replace me by any text you'd like."
# Tokenize the text into PyTorch tensors and run a forward pass to get the hidden states.
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```
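As an illustration of the raw masked language modeling use mentioned above, the `fill-mask` pipeline can be pointed at this checkpoint (a sketch; the MLM head weights are part of the pretraining checkpoint, though you may see a warning about unused next-sentence-prediction weights):
```python
from transformers import pipeline

# Predict the most likely tokens for the [MASK] position.
unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-1-1700k")
print(unmasker("Paris is the [MASK] of France."))
```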
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
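A minimal, self-contained sketch of this 80%/10%/10% masking scheme (a simplified illustration, not the original pretraining code; `mask_token_id` and `vocab_size` are generic parameters):
```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """Apply BERT-style masking: 80% [MASK], 10% random token, 10% unchanged."""
    masked = list(token_ids)
    labels = [-100] * len(token_ids)  # -100 is conventionally ignored by the MLM loss
    for i, token in enumerate(token_ids):
        if random.random() < mlm_prob:
            labels[i] = token  # the model has to predict the original token
            roll = random.random()
            if roll < 0.8:
                masked[i] = mask_token_id                 # 80%: replace with [MASK]
            elif roll < 0.9:
                masked[i] = random.randrange(vocab_size)  # 10%: replace with a random token
            # remaining 10%: leave the token unchanged
    return masked, labels
```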
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used was Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps, and linear decay of the learning rate afterwards.
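A rough sketch of an equivalent optimizer and schedule configuration in PyTorch (an approximation of the setup described above, not the original training code; AdamW is used here as a stand-in for Adam with decoupled weight decay):
```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1700k")

# Hyperparameters taken from the description above.
total_steps = 2_000_000
warmup_steps = 10_000

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)

# In a training loop, call optimizer.step() and scheduler.step() after each batch.
```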
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|