modelId (string) | sha (string) | lastModified (string) | tags (list) | pipeline_tag (string) | private (bool) | author (string) | config (null) | id (string) | downloads (float64) | likes (float64) | library_name (string) | __index_level_0__ (int64) | readme (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-es-eo | 328750dfd8c4f2e8e6d7479e87f05ae2f4f95ba8 | 2021-09-09T21:42:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-eo | 19 | null | transformers | 8,500 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-eo
* source languages: es
* target languages: eo
* OPUS readme: [es-eo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-eo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-eo/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-eo/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-eo/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.eo | 44.7 | 0.657 |
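For reference, a minimal usage sketch (not part of the original card) with the standard MarianMT classes from the transformers library; the example sentence is illustrative only:
```python
# Minimal sketch: translate Spanish to Esperanto with Helsinki-NLP/opus-mt-es-eo.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-eo"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["El gato duerme en la cama."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```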
|
Helsinki-NLP/opus-mt-es-pap | 9bf9bd9f49dcd2195d87c7ed9a23f77757b2aa5f | 2021-09-09T21:44:05.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"pap",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-pap | 19 | null | transformers | 8,501 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-pap
* source languages: es
* target languages: pap
* OPUS readme: [es-pap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-pap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-pap/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pap/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pap/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.pap | 28.2 | 0.486 |
|
Helsinki-NLP/opus-mt-kg-en | 2ae3fc0fcb26dd12365e7f258811e2e428eb4dcc | 2021-09-10T13:53:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"kg",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-kg-en | 19 | null | transformers | 8,502 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-kg-en
* source languages: kg
* target languages: en
* OPUS readme: [kg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kg-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kg-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kg-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kg-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.kg.en | 35.4 | 0.508 |
|
Helsinki-NLP/opus-mt-kj-en | 45173abc2325ee785dba5f13d0b2187821c5dbba | 2021-09-10T13:53:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"kj",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-kj-en | 19 | null | transformers | 8,503 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-kj-en
* source languages: kj
* target languages: en
* OPUS readme: [kj-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kj-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/kj-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kj-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kj-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.kj.en | 30.3 | 0.477 |
|
Helsinki-NLP/opus-mt-lt-es | 3b6375db9c99783dcf81185d7ec195e1c042287a | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lt",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lt-es | 19 | 1 | transformers | 8,504 | ---
language:
- lt
- es
tags:
- translation
license: apache-2.0
---
### lit-spa
* source group: Lithuanian
* target group: Spanish
* OPUS readme: [lit-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-spa/README.md)
* model: transformer-align
* source language(s): lit
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.lit.spa | 50.5 | 0.680 |
### System Info:
- hf_name: lit-spa
- source_languages: lit
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lt', 'es']
- src_constituents: {'lit'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-spa/opus-2020-06-17.test.txt
- src_alpha3: lit
- tgt_alpha3: spa
- short_pair: lt-es
- chrF2_score: 0.68
- bleu: 50.5
- brevity_penalty: 0.963
- ref_len: 2738.0
- src_name: Lithuanian
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: lt
- tgt_alpha2: es
- prefer_old: False
- long_pair: lit-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-no-es | 86129a9d93281a20e0b866f623b16069db6de89c | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"no",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-no-es | 19 | null | transformers | 8,505 | ---
language:
- no
- es
tags:
- translation
license: apache-2.0
---
### nor-spa
* source group: Norwegian
* target group: Spanish
* OPUS readme: [nor-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-spa/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.spa | 34.2 | 0.565 |
### System Info:
- hf_name: nor-spa
- source_languages: nor
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'es']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: spa
- short_pair: no-es
- chrF2_score: 0.565
- bleu: 34.2
- brevity_penalty: 0.997
- ref_len: 7311.0
- src_name: Norwegian
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: es
- prefer_old: False
- long_pair: nor-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-pis-en | f52fc9014a82ce8e4bd5fedc3999f81d21ec348a | 2021-09-10T14:00:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pis",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pis-en | 19 | null | transformers | 8,506 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pis-en
* source languages: pis
* target languages: en
* OPUS readme: [pis-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.en | 33.3 | 0.493 |
|
Helsinki-NLP/opus-mt-pqe-en | 2a3bb445918ac990acf6f8e396ad64596ffed886 | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fj",
"mi",
"ty",
"to",
"na",
"sm",
"mh",
"pqe",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pqe-en | 19 | null | transformers | 8,507 | ---
language:
- fj
- mi
- ty
- to
- na
- sm
- mh
- pqe
- en
tags:
- translation
license: apache-2.0
---
### pqe-eng
* source group: Eastern Malayo-Polynesian languages
* target group: English
* OPUS readme: [pqe-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pqe-eng/README.md)
* model: transformer
* source language(s): fij gil haw mah mri nau niu rap smo tah ton tvl
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.zip)
* test set translations: [opus-2020-06-28.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.test.txt)
* test set scores: [opus-2020-06-28.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fij-eng.fij.eng | 26.9 | 0.361 |
| Tatoeba-test.gil-eng.gil.eng | 49.0 | 0.618 |
| Tatoeba-test.haw-eng.haw.eng | 1.6 | 0.126 |
| Tatoeba-test.mah-eng.mah.eng | 13.7 | 0.257 |
| Tatoeba-test.mri-eng.mri.eng | 7.4 | 0.250 |
| Tatoeba-test.multi.eng | 12.6 | 0.268 |
| Tatoeba-test.nau-eng.nau.eng | 2.3 | 0.125 |
| Tatoeba-test.niu-eng.niu.eng | 34.4 | 0.471 |
| Tatoeba-test.rap-eng.rap.eng | 10.3 | 0.215 |
| Tatoeba-test.smo-eng.smo.eng | 28.5 | 0.413 |
| Tatoeba-test.tah-eng.tah.eng | 12.1 | 0.199 |
| Tatoeba-test.ton-eng.ton.eng | 41.8 | 0.517 |
| Tatoeba-test.tvl-eng.tvl.eng | 42.9 | 0.540 |
### System Info:
- hf_name: pqe-eng
- source_languages: pqe
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pqe-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe', 'en']
- src_constituents: {'haw', 'gil', 'rap', 'fij', 'tvl', 'mri', 'tah', 'niu', 'ton', 'nau', 'smo', 'mah'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.test.txt
- src_alpha3: pqe
- tgt_alpha3: eng
- short_pair: pqe-en
- chrF2_score: 0.268
- bleu: 12.6
- brevity_penalty: 1.0
- ref_len: 4568.0
- src_name: Eastern Malayo-Polynesian languages
- tgt_name: English
- train_date: 2020-06-28
- src_alpha2: pqe
- tgt_alpha2: en
- prefer_old: False
- long_pair: pqe-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-rnd-en | a0592c9da10200300f038ee7e916eed2e0fbd246 | 2021-09-10T14:01:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"rnd",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-rnd-en | 19 | null | transformers | 8,508 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-rnd-en
* source languages: rnd
* target languages: en
* OPUS readme: [rnd-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rnd-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rnd.en | 37.8 | 0.531 |
|
Helsinki-NLP/opus-mt-tl-es | a802dd67efb503718fa025a2c4e91fd026a5c1e9 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tl",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tl-es | 19 | null | transformers | 8,509 | ---
language:
- tl
- es
tags:
- translation
license: apache-2.0
---
### tgl-spa
* source group: Tagalog
* target group: Spanish
* OPUS readme: [tgl-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-spa/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.spa | 31.6 | 0.531 |
### System Info:
- hf_name: tgl-spa
- source_languages: tgl
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'es']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: spa
- short_pair: tl-es
- chrF2_score: 0.531
- bleu: 31.6
- brevity_penalty: 0.997
- ref_len: 4327.0
- src_name: Tagalog
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: es
- prefer_old: False
- long_pair: tgl-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-tll-en | a8f4fe293754493a9385669a126f0f737efa5cf8 | 2021-09-11T10:48:19.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tll",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tll-en | 19 | null | transformers | 8,510 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tll-en
* source languages: tll
* target languages: en
* OPUS readme: [tll-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tll.en | 34.5 | 0.500 |
|
Helsinki-NLP/opus-mt-tpi-en | 9d106deeef1145ca9e034cb4ebae8d0545e98e7d | 2021-09-11T10:49:28.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tpi",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tpi-en | 19 | null | transformers | 8,511 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tpi-en
* source languages: tpi
* target languages: en
* OPUS readme: [tpi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tpi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tpi.en | 29.1 | 0.448 |
|
Holako/NER_CAMELBERT | b48ec7ee4d4655ef43cb611dcdd61a60db7411e7 | 2022-02-23T17:22:41.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Holako | null | Holako/NER_CAMELBERT | 19 | null | transformers | 8,512 | Testing NER |
JorgeSarry/est5base-simplify | cea9bbfa31d3993e411ac058d39b8eeede2c5997 | 2021-09-20T08:42:39.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"es",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | JorgeSarry | null | JorgeSarry/est5base-simplify | 19 | null | transformers | 8,513 | ---
language: es
---
This is a smaller version of the google/mt5-base model, keeping only the Spanish and some English embeddings, trained on 60k Spanish WikiEdits for sentence simplification.
You can use it by prefixing the input with "simplify:", for example:
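Below is a minimal sketch (not part of the original card); the exact prefix formatting is an assumption based on the note above:
```python
# Minimal sketch: Spanish sentence simplification with the "simplify:" prefix.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "JorgeSarry/est5base-simplify"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "simplify: La reunión fue pospuesta debido a circunstancias imprevistas."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```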
|
Jorgeutd/sagemaker-roberta-base-emotion | 08d5c624b85f453bdf779fa2ebff3029d63c11c5 | 2021-12-06T16:57:21.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:emotion",
"transformers",
"sagemaker",
"roberta-base",
"text classification",
"license:apache-2.0",
"model-index"
] | text-classification | false | Jorgeutd | null | Jorgeutd/sagemaker-roberta-base-emotion | 19 | null | transformers | 8,514 |
---
language: en
widget:
- text: "I am really upset that I have to call up to three times to the number on the back of my insurance card for my call to be answer"
tags:
- sagemaker
- roberta-base
- text classification
license: apache-2.0
datasets:
- emotion
model-index:
- name: sagemaker-roberta-base-emotion
results:
- task:
name: Multi Class Text Classification
type: text-classification
dataset:
name: "emotion"
type: emotion
metrics:
- name: Validation Accuracy
type: accuracy
value: 94.1
- name: Validation F1
type: f1
value: 94.13
---
## roberta-base
This model is a fine-tuned model that was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
- Problem type: Multi Class Text Classification (emotion detection).
It achieves the following results on the evaluation set:
- Loss: 0.1613253802061081
- f1: 0.9413321705151999
## Hyperparameters
```json
{
"epochs": 10,
"train_batch_size": 16,
"learning_rate": 3e-5,
"weight_decay":0.01,
"load_best_model_at_end": true,
"model_name":"roberta-base",
"do_eval": True,
"load_best_model_at_end":True
}
```
## Validation Metrics
| key | value |
| --- | ----- |
| eval_accuracy | 0.941 |
| eval_f1 | 0.9413321705151999 |
| eval_loss | 0.1613253802061081|
| eval_recall | 0.941 |
| eval_precision | 0.9419519436781406 |
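For reference, a minimal inference sketch (not part of the original card); the emotion label names are taken from the model's own config at load time:
```python
# Minimal sketch: emotion classification with the text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Jorgeutd/sagemaker-roberta-base-emotion",
)
print(classifier("I am really upset that I have to call up to three times."))
```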
|
Littlemilk/autobiography-generator | 9342dd04520234a2502b65e3cc74f23d9fd59d3a | 2022-01-09T17:15:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"zh",
"transformers",
"generated_from_trainer",
"license:gpl-3.0",
"model-index"
] | text-generation | false | Littlemilk | null | Littlemilk/autobiography-generator | 19 | 2 | transformers | 8,515 | ---
language:
- zh
license: gpl-3.0
tags:
- generated_from_trainer
model-index:
- name: clm-total
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clm-total
This model is a fine-tuned version of [ckiplab/gpt2-base-chinese](https://huggingface.co/ckiplab/gpt2-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cpu
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Luciano/bertimbau-large-lener_br | 867c6fe10f58d8394213e349917e6aaaf5baa85c | 2022-06-28T11:42:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"pt",
"dataset:lener_br",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | Luciano | null | Luciano/bertimbau-large-lener_br | 19 | 1 | transformers | 8,516 | ---
language:
- pt
license: mit
tags:
- generated_from_trainer
datasets:
- lener_br
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: bertimbau-large-lener_br
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: lener_br
type: lener_br
args: lener_br
metric:
name: Accuracy
type: accuracy
value: 0.9801301293674859
model-index:
- name: Luciano/bertimbau-large-lener_br
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: lener_br
type: lener_br
config: lener_br
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9840898731012984
verified: true
- name: Precision
type: precision
value: 0.9895415357344292
verified: true
- name: Recall
type: recall
value: 0.9885856878370763
verified: true
- name: F1
type: f1
value: 0.9890633808488363
verified: true
- name: loss
type: loss
value: 0.10151929408311844
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertimbau-large-lener_br
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on the lener_br dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1271
- Precision: 0.8965
- Recall: 0.9198
- F1: 0.9080
- Accuracy: 0.9801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0674 | 1.0 | 1957 | 0.1349 | 0.7617 | 0.8710 | 0.8127 | 0.9594 |
| 0.0443 | 2.0 | 3914 | 0.1867 | 0.6862 | 0.9194 | 0.7858 | 0.9575 |
| 0.0283 | 3.0 | 5871 | 0.1185 | 0.8206 | 0.8766 | 0.8477 | 0.9678 |
| 0.0226 | 4.0 | 7828 | 0.1405 | 0.8072 | 0.8978 | 0.8501 | 0.9708 |
| 0.0141 | 5.0 | 9785 | 0.1898 | 0.7224 | 0.9194 | 0.8090 | 0.9629 |
| 0.01 | 6.0 | 11742 | 0.1655 | 0.9062 | 0.8856 | 0.8958 | 0.9741 |
| 0.012 | 7.0 | 13699 | 0.1271 | 0.8965 | 0.9198 | 0.9080 | 0.9801 |
| 0.0091 | 8.0 | 15656 | 0.1919 | 0.8890 | 0.8886 | 0.8888 | 0.9719 |
| 0.0042 | 9.0 | 17613 | 0.1725 | 0.8977 | 0.8985 | 0.8981 | 0.9744 |
| 0.0043 | 10.0 | 19570 | 0.1530 | 0.8878 | 0.9034 | 0.8955 | 0.9761 |
| 0.0042 | 11.0 | 21527 | 0.1635 | 0.8792 | 0.9108 | 0.8947 | 0.9774 |
| 0.0033 | 12.0 | 23484 | 0.2009 | 0.8155 | 0.9138 | 0.8619 | 0.9719 |
| 0.0008 | 13.0 | 25441 | 0.1766 | 0.8737 | 0.9135 | 0.8932 | 0.9755 |
| 0.0005 | 14.0 | 27398 | 0.1868 | 0.8616 | 0.9129 | 0.8865 | 0.9743 |
| 0.0014 | 15.0 | 29355 | 0.1910 | 0.8694 | 0.9101 | 0.8893 | 0.9746 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
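For reference, a minimal inference sketch (not part of the original card); the example sentence is illustrative and `aggregation_strategy` assumes a reasonably recent transformers version:
```python
# Minimal sketch: NER on Portuguese legal text with the token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Luciano/bertimbau-large-lener_br",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Trata-se de apelação interposta contra sentença proferida pelo Tribunal de Justiça."))
```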
|
Mary222/SBERBANK_RUS | c085a557fc5d571a451ea68f25af2d2233d7436d | 2021-11-04T16:30:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"transformers"
] | text-generation | false | Mary222 | null | Mary222/SBERBANK_RUS | 19 | 1 | transformers | 8,517 | ---
language: ru
tags:
- text-generation
---
# GPT2 - RUS |
MohamedZaitoon/T5-CNN | bbcacff360925f67c7cf991f25ee7d0268cfcc6c | 2021-06-12T14:56:25.000Z | [
"pytorch",
"dataset:CNN/Daily-mail",
"summarization"
] | summarization | false | MohamedZaitoon | null | MohamedZaitoon/T5-CNN | 19 | null | null | 8,518 | ---
tags:
- summarization
datasets:
- CNN/Daily-mail
metrics:
- ROUGE
---
|
MrBananaHuman/kogpt_6b_fp16 | 6838fe6947a0f18817273922bd61280ac33f4e33 | 2021-11-19T06:23:58.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | MrBananaHuman | null | MrBananaHuman/kogpt_6b_fp16 | 19 | 4 | transformers | 8,519 | kakao brain에서 공개한 kogpt 6b model('kakaobrain/kogpt')을 fp16으로 저장한 모델입니다.
### 카카오브레인 모델을 fp16으로 로드하는 방법
```python
import torch
from transformers import GPTJForCausalLM
model = GPTJForCausalLM.from_pretrained('kakaobrain/kogpt', cache_dir='./my_dir', revision='KoGPT6B-ryan1.5b', torch_dtype=torch.float16)
```
### Generating text after loading the fp16 model
[](https://colab.research.google.com/drive/1_rLDzhGohJPbOD5I_eTIOdx4aOTp43uK?usp=sharing)
```python
import torch
from transformers import GPTJForCausalLM, AutoTokenizer
model = GPTJForCausalLM.from_pretrained('MrBananaHuman/kogpt_6b_fp16', low_cpu_mem_usage=True)
model.to('cuda')
tokenizer = AutoTokenizer.from_pretrained('MrBananaHuman/kogpt_6b_fp16')
input_text = '이순신은'
input_ids = tokenizer(input_text, return_tensors='pt').input_ids.to('cuda')
output = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output[0]))
>>> 이순신은 우리에게 무엇인가? 1. 머리말 이글은 임진왜란 당시 이순인이 보여준
```
### Reference links
https://github.com/kakaobrain/kogpt/issues/6?fbclid=IwAR1KpWhuHnevQvEWV18o16k2z9TLgrXkbWTkKqzL-NDXHfDnWcIq7I4SJXM |
NYTK/sentiment-hts2-xlm-roberta-hungarian | 9e78df9fa3fe207531cd8eaf27a80a23fcf3d9e4 | 2022-01-26T13:20:37.000Z | [
"pytorch",
"roberta",
"text-classification",
"hu",
"transformers",
"license:gpl"
] | text-classification | false | NYTK | null | NYTK/sentiment-hts2-xlm-roberta-hungarian | 19 | null | transformers | 8,520 | ---
language:
- hu
tags:
- text-classification
license: gpl
metrics:
- accuracy
widget:
- text: "Jó reggelt! majd küldöm az élményhozókat :)."
---
# Hungarian Sentence-level Sentiment Analysis model with XLM-RoBERTa
For further models, scripts and details, see [our repository](https://github.com/nytud/sentiment-analysis) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- Pretrained model used: XLM-RoBERTa base
- Finetuned on Hungarian Twitter Sentiment (HTS) Corpus
- Labels: 1, 2
## Limitations
- max_seq_length = 128
## Results
| Model | HTS2 | HTS5 |
| ------------- | ------------- | ------------- |
| huBERT | 85.55 | 68.99 |
| XLM-RoBERTa | **85.56** | 85.56 |
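For reference, a minimal inference sketch (not part of the original card), using the widget sentence from the metadata above; label order follows the model config:
```python
# Minimal sketch: Hungarian sentence-level sentiment classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "NYTK/sentiment-hts2-xlm-roberta-hungarian"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Jó reggelt! majd küldöm az élményhozókat :).", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```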
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-bart,
title = {Improving Performance of Sentence-level Sentiment Analysis with Data Augmentation Methods},
booktitle = {Proceedings of 12th IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2021)},
year = {2021},
publisher = {IEEE},
address = {Online},
author = {{Laki, László and Yang, Zijian Győző}},
pages = {417--422}
}
``` |
Nehc/gpt2_lovecraft_ru | 28364a75e604ebfcebdc7d6aa595b0a476c96262 | 2021-10-27T11:30:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"transformers"
] | text-generation | false | Nehc | null | Nehc/gpt2_lovecraft_ru | 19 | 1 | transformers | 8,521 | ---
language:
- ru
widget:
- text: "Немыслимо, "
metrics:
- loss: 3.3
- perplexity: 25.7528
---
Started from sberbank-ai/rugpt3small_based_on_gpt2 and fine-tuned on Howard Phillips Lovecraft texts (in Russian).
At the moment only 1 epoch has been completed (perplexity is still falling).
Work in progress...
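A minimal generation sketch (not part of the original card), using the widget prompt from the metadata above; sampling settings are illustrative:
```python
# Minimal sketch: generate Lovecraft-style Russian text.
from transformers import pipeline

generator = pipeline("text-generation", model="Nehc/gpt2_lovecraft_ru")
print(generator("Немыслимо, ", max_length=64, do_sample=True, top_p=0.95)[0]["generated_text"])
```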
|
Nehc/gpt2_priest_ru | cb134d4f4c652f0ec2f36b1dada8c8acbba5b364 | 2022-06-20T18:13:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"transformers"
] | text-generation | false | Nehc | null | Nehc/gpt2_priest_ru | 19 | null | transformers | 8,522 | ---
language:
- ru
widget:
- text: "Бог, это "
metrics:
- loss: 3.3
- perplexity: 25.7528
---
Started from sberbank-ai/rugpt3small_based_on_gpt2 and fine-tuned on the Bible & preaching texts (in Russian).
At the moment only 1 epoch, with a sequence length of 1650.
Work in progress...
Shahm/bert-german | 688e0f9406e51bf801cc3aef317c74b8d2874ac9 | 2021-12-21T12:18:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"dataset:mlsum",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Shahm | null | Shahm/bert-german | 19 | null | transformers | 8,523 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- mlsum
model-index:
- name: plus-bert-german
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plus-bert-german
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the mlsum de dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
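For reference, a minimal inference sketch (not part of the original card); the German example sentence is illustrative only:
```python
# Minimal sketch: masked-token prediction with the fill-mask pipeline.
from transformers import pipeline

fill = pipeline("fill-mask", model="Shahm/bert-german")
for pred in fill("Berlin ist die [MASK] von Deutschland."):
    print(pred["token_str"], round(pred["score"], 3))
```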
|
Smone55/autonlp-au_topics-452311620 | 2fe19dd4076459eaf5d0260086b54233229da2bb | 2021-12-28T01:56:22.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Smone55/autonlp-data-au_topics",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | Smone55 | null | Smone55/autonlp-au_topics-452311620 | 19 | null | transformers | 8,524 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Smone55/autonlp-data-au_topics
co2_eq_emissions: 208.0823957145878
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 452311620
- CO2 Emissions (in grams): 208.0823957145878
## Validation Metrics
- Loss: 0.5259971022605896
- Accuracy: 0.8767479025169796
- Macro F1: 0.8618813750734912
- Micro F1: 0.8767479025169796
- Weighted F1: 0.8742964006840133
- Macro Precision: 0.8627700506991158
- Micro Precision: 0.8767479025169796
- Weighted Precision: 0.8755603985289852
- Macro Recall: 0.8662183006750934
- Micro Recall: 0.8767479025169796
- Weighted Recall: 0.8767479025169796
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Smone55/autonlp-au_topics-452311620
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Smone55/autonlp-au_topics-452311620", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Smone55/autonlp-au_topics-452311620", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
Tahsin/BERT-finetuned-conll2003-POS | 6a560e8c993723738099bcc52c86eb12c059a5da | 2022-01-05T21:04:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Tahsin | null | Tahsin/BERT-finetuned-conll2003-POS | 19 | null | transformers | 8,525 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-pos
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9276736387541917
- name: Recall
type: recall
value: 0.9329402916272412
- name: F1
type: f1
value: 0.9302995112982049
- name: Accuracy
type: accuracy
value: 0.933154765408842
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-pos
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3009
- Precision: 0.9277
- Recall: 0.9329
- F1: 0.9303
- Accuracy: 0.9332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2791 | 1.0 | 1756 | 0.3125 | 0.9212 | 0.9263 | 0.9237 | 0.9272 |
| 0.1853 | 2.0 | 3512 | 0.3038 | 0.9241 | 0.9309 | 0.9275 | 0.9307 |
| 0.1501 | 3.0 | 5268 | 0.3009 | 0.9277 | 0.9329 | 0.9303 | 0.9332 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Yehor/wav2vec2-xls-r-300m-uk-with-lm | 1cb4e3d5bc12e65deb8e9f0d38a6266b581048dc | 2022-07-30T07:01:36.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"uk",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Yehor | null | Yehor/wav2vec2-xls-r-300m-uk-with-lm | 19 | 3 | transformers | 8,526 | ---
language:
- uk
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- uk
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xls-r-300m-uk-with-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: uk
metrics:
- name: Test WER
type: wer
value: 26.47
- name: Test CER
type: cer
value: 2.90
---
# Ukrainian STT model (with Language Model)
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
- Have a look at an updated 300m model: https://huggingface.co/Yehor/wav2vec2-xls-r-300m-uk-with-small-lm
- Have a look at a better model with more parameters: https://huggingface.co/Yehor/wav2vec2-xls-r-1b-uk-with-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3015
- Wer: 0.3377
- Cer: 0.0708
The above results present evaluation without the language model.
## Model description
On 100 test examples the model shows the following results:
Without LM:
- WER: 0.2647
- CER: 0.0469
With LM:
- WER: 0.1568
- CER: 0.0289
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.0255 | 7.93 | 500 | 2.5514 | 0.9921 | 0.9047 |
| 1.3809 | 15.86 | 1000 | 0.4065 | 0.5361 | 0.1201 |
| 1.2355 | 23.8 | 1500 | 0.3474 | 0.4618 | 0.1033 |
| 1.1956 | 31.74 | 2000 | 0.3617 | 0.4580 | 0.1005 |
| 1.1416 | 39.67 | 2500 | 0.3182 | 0.4074 | 0.0891 |
| 1.0996 | 47.61 | 3000 | 0.3166 | 0.3985 | 0.0875 |
| 1.0427 | 55.55 | 3500 | 0.3116 | 0.3835 | 0.0828 |
| 0.9961 | 63.49 | 4000 | 0.3137 | 0.3757 | 0.0807 |
| 0.9575 | 71.42 | 4500 | 0.2992 | 0.3632 | 0.0771 |
| 0.9154 | 79.36 | 5000 | 0.3015 | 0.3502 | 0.0740 |
| 0.8994 | 87.3 | 5500 | 0.3004 | 0.3425 | 0.0723 |
| 0.871 | 95.24 | 6000 | 0.3016 | 0.3394 | 0.0713 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
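For reference, a minimal inference sketch (not part of the original card); decoding with the bundled language model additionally requires `pyctcdecode` and `kenlm`, and the audio file name is a placeholder:
```python
# Minimal sketch: Ukrainian speech recognition with the ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Yehor/wav2vec2-xls-r-300m-uk-with-lm",
)
print(asr("sample_uk.wav"))  # 16 kHz mono recording (placeholder path)
```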
|
aditi2222/t5-paraphrase | 378e0760e04a8361ea3cf68314f5bc73c083ef5f | 2021-11-28T07:35:16.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | aditi2222 | null | aditi2222/t5-paraphrase | 19 | null | transformers | 8,527 | T5 model
This is a sentence-transformers model |
airKlizz/mt5-base-wikinewssum-italian | fc245cdb64505430b8a898fb274b8461d71845f4 | 2021-12-29T10:55:47.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | airKlizz | null | airKlizz/mt5-base-wikinewssum-italian | 19 | null | transformers | 8,528 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-italian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-italian
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.5739
- Rouge1: 2.1728
- Rouge2: 0.1516
- Rougel: 2.0846
- Rougelsum: 2.0515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 8 | 16.6193 | 2.4011 | 0.3829 | 2.1505 | 2.2161 |
| No log | 2.0 | 16 | 15.8909 | 2.5165 | 0.2799 | 2.3403 | 2.3523 |
| No log | 3.0 | 24 | 15.4843 | 2.2794 | 0.2252 | 2.1849 | 2.1382 |
| 17.2559 | 4.0 | 32 | 13.0850 | 2.2448 | 0.1516 | 2.1426 | 2.0859 |
| 17.2559 | 5.0 | 40 | 11.7838 | 2.2448 | 0.1516 | 2.1426 | 2.0859 |
| 17.2559 | 6.0 | 48 | 11.3207 | 2.2424 | 0.1516 | 2.1423 | 2.1171 |
| 17.2559 | 7.0 | 56 | 10.7871 | 2.1081 | 0.1516 | 2.0227 | 1.9838 |
| 14.6026 | 8.0 | 64 | 10.5739 | 2.1728 | 0.1516 | 2.0846 | 2.0515 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
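For reference, a minimal inference sketch (not part of the original card); the generation settings are illustrative only:
```python
# Minimal sketch: Italian news summarization with the summarization pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="airKlizz/mt5-base-wikinewssum-italian")
article = "..."  # an Italian news article (placeholder)
print(summarizer(article, max_length=128, min_length=20)[0]["summary_text"])
```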
|
airKlizz/mt5-base-wikinewssum-polish | 910dd53cd7227da0c3bb03087b0686dbe0e9eacb | 2021-12-27T00:24:41.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | airKlizz | null | airKlizz/mt5-base-wikinewssum-polish | 19 | null | transformers | 8,529 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-polish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-polish
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3179
- Rouge1: 7.911
- Rouge2: 3.2189
- Rougel: 6.7856
- Rougelsum: 7.4485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 315 | 2.5391 | 5.9874 | 2.3594 | 5.1303 | 5.6116 |
| No log | 2.0 | 630 | 2.4446 | 7.7294 | 3.0152 | 6.6024 | 7.2757 |
| No log | 3.0 | 945 | 2.3912 | 7.6451 | 2.9785 | 6.5714 | 7.2011 |
| 3.5311 | 4.0 | 1260 | 2.3720 | 7.8007 | 3.0913 | 6.7067 | 7.3451 |
| 3.5311 | 5.0 | 1575 | 2.3411 | 7.8374 | 3.1208 | 6.7288 | 7.3459 |
| 3.5311 | 6.0 | 1890 | 2.3354 | 7.8664 | 3.1655 | 6.762 | 7.4364 |
| 3.5311 | 7.0 | 2205 | 2.3175 | 7.9529 | 3.2225 | 6.8438 | 7.4904 |
| 2.692 | 8.0 | 2520 | 2.3179 | 7.911 | 3.2189 | 6.7856 | 7.4485 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
algoprog/mimics-query-facet-encoder-mpnet-base | d818d848bf14777655687dc8dedfa522e4df78b5 | 2022-02-24T02:03:36.000Z | [
"pytorch",
"mpnet",
"feature-extraction",
"transformers"
] | feature-extraction | false | algoprog | null | algoprog/mimics-query-facet-encoder-mpnet-base | 19 | null | transformers | 8,530 | Entry not found |
aliosm/ComVE-gpt2 | 488b7b14eeb44ddcce8098356c698dc89b928da9 | 2021-05-21T13:19:25.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:ComVE",
"transformers",
"exbert",
"commonsense",
"semeval2020",
"comve",
"license:mit"
] | text-generation | false | aliosm | null | aliosm/ComVE-gpt2 | 19 | null | transformers | 8,531 | ---
language: "en"
tags:
- exbert
- commonsense
- semeval2020
- comve
license: "mit"
datasets:
- ComVE
metrics:
- bleu
widget:
- text: "Chicken can swim in water. <|continue|>"
---
# ComVE-gpt2
## Model description
A model finetuned on the Commonsense Validation and Explanation (ComVE) dataset introduced in [SemEval2020 Task4](https://competitions.codalab.org/competitions/21080) using a causal language modeling (CLM) objective.
The model is able to generate a reason why a given natural language statement is against commonsense.
## Intended uses & limitations
You can use the raw model for text generation to generate reasons why natural language statements are against commonsense.
#### How to use
You can use this model directly to generate reasons why the given statement is against commonsense using the [`generate.sh`](https://github.com/AliOsm/SemEval2020-Task4-ComVE/tree/master/TaskC-Generation) script.
*Note:* make sure that you are using version `2.4.1` of the `transformers` package. Newer versions have an issue in text generation that makes the model repeat the last generated token again and again.
#### Limitations and bias
The model is usually biased toward simply negating the input sentence instead of producing a factual reason.
## Training data
The model is initialized from the [gpt2](https://github.com/huggingface/transformers/blob/master/model_cards/gpt2-README.md) model and finetuned using the [ComVE](https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation) dataset, which contains 10K against-commonsense sentences, each paired with three reference reasons.
## Training procedure
Each natural language statement that is against commonsense is concatenated with its reference reason, with `<|continue|>` as a separator, then the model is finetuned using the CLM objective.
The model was trained on an Nvidia Tesla P100 GPU from the Google Colab platform with a 5e-5 learning rate, 5 epochs, a maximum sequence length of 128 and a batch size of 64.
<center>
<img src="https://i.imgur.com/xKbrwBC.png">
</center>
## Eval results
The model achieved 14.0547/13.6534 BLEU scores on SemEval2020 Task4: Commonsense Validation and Explanation development and testing dataset.
### BibTeX entry and citation info
```bibtex
@article{fadel2020justers,
title={JUSTers at SemEval-2020 Task 4: Evaluating Transformer Models Against Commonsense Validation and Explanation},
author={Fadel, Ali and Al-Ayyoub, Mahmoud and Cambria, Erik},
year={2020}
}
```
<a href="https://huggingface.co/exbert/?model=aliosm/ComVE-gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
allenai/dsp_roberta_base_dapt_biomed_tapt_chemprot_4169 | 38a508ccc10ecf87b96e4daa0e14dcbb9aacf642 | 2021-05-20T13:04:19.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
] | null | false | allenai | null | allenai/dsp_roberta_base_dapt_biomed_tapt_chemprot_4169 | 19 | null | transformers | 8,532 | Entry not found |
amazon-sagemaker-community/encoder_decoder_es | ec64a48d1cdca70ba4cee82bd39873b73caf1fe6 | 2021-11-20T05:44:01.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"dataset:cc_news_es_titles",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | amazon-sagemaker-community | null | amazon-sagemaker-community/encoder_decoder_es | 19 | null | transformers | 8,533 | ---
tags:
- generated_from_trainer
datasets:
- cc_news_es_titles
model-index:
- name: encoder_decoder_es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# encoder_decoder_es
This model is a fine-tuned version of [](https://huggingface.co/) on the cc_news_es_titles dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8773
- Rouge2 Precision: 0.002
- Rouge2 Recall: 0.0116
- Rouge2 Fmeasure: 0.0034
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 7.8807 | 1.0 | 5784 | 7.8976 | 0.0023 | 0.012 | 0.0038 |
| 7.8771 | 2.0 | 11568 | 7.8873 | 0.0018 | 0.0099 | 0.003 |
| 7.8588 | 3.0 | 17352 | 7.8819 | 0.0015 | 0.0085 | 0.0025 |
| 7.8507 | 4.0 | 23136 | 7.8773 | 0.002 | 0.0116 | 0.0034 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
andi611/bert-base-cased-ner-conll2003 | 6eabad03cbfe119d6ad72ef45fb38dd4f419718a | 2021-07-03T15:02:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | andi611 | null | andi611/bert-base-cased-ner-conll2003 | 19 | null | transformers | 8,534 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: bert-base-cased-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9860628716077
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0620
- Precision: 0.9406
- Recall: 0.9463
- F1: 0.9434
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5855 | 1.0 | 878 | 0.0848 | 0.8965 | 0.8980 | 0.8973 | 0.9760 |
| 0.058 | 2.0 | 1756 | 0.0607 | 0.9357 | 0.9379 | 0.9368 | 0.9840 |
| 0.0282 | 3.0 | 2634 | 0.0604 | 0.9354 | 0.9420 | 0.9387 | 0.9852 |
| 0.0148 | 4.0 | 3512 | 0.0606 | 0.9386 | 0.9485 | 0.9435 | 0.9861 |
| 0.0101 | 5.0 | 4390 | 0.0620 | 0.9406 | 0.9463 | 0.9434 | 0.9861 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
anirudh21/albert-large-v2-finetuned-rte | 584617ae506f0d620b7393cb7eab4b7961663bf6 | 2022-01-27T18:29:58.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | anirudh21 | null | anirudh21/albert-large-v2-finetuned-rte | 19 | null | transformers | 8,535 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-large-v2-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5487364620938628
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v2-finetuned-rte
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6827
- Accuracy: 0.5487
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 18 | 0.6954 | 0.5271 |
| No log | 2.0 | 36 | 0.6860 | 0.5379 |
| No log | 3.0 | 54 | 0.6827 | 0.5487 |
| No log | 4.0 | 72 | 0.7179 | 0.5235 |
| No log | 5.0 | 90 | 0.7504 | 0.5379 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
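For reference, a minimal inference sketch (not part of the original card): RTE is a sentence-pair task, so the premise and hypothesis are encoded together; the example pair is illustrative:
```python
# Minimal sketch: textual entailment with the fine-tuned ALBERT model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "anirudh21/albert-large-v2-finetuned-rte"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("A man is playing a guitar on stage.", "Someone is performing music.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # label meanings come from the model config
```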
|
anton-l/wav2vec2-large-xlsr-53-lithuanian | d3bb59b7d33cda19411f924baa399994bc1a2aa9 | 2021-07-05T20:06:38.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"lt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/wav2vec2-large-xlsr-53-lithuanian | 19 | null | transformers | 8,536 | ---
language: lt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Lithuanian XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lt
type: common_voice
args: lt
metrics:
- name: Test WER
type: wer
value: 49.00
---
# Wav2Vec2-Large-XLSR-53-Lithuanian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Lithuanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-lithuanian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Lithuanian test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/lt.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-lithuanian")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/lt/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/lt/clips/"
def clean_sentence(sent):
sent = sent.lower()
# normalize apostrophes
sent = sent.replace("’", "'")
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() or ch == "'" else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 49.00 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
appleternity/scibert-uncased-finetuned-coda19 | df32a287d131248505494244dd35ba2354984751 | 2021-05-19T00:01:52.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | appleternity | null | appleternity/scibert-uncased-finetuned-coda19 | 19 | null | transformers | 8,537 | Entry not found |
tner/xlm-roberta-large-panx-dataset-ja | b902fe0d6f1293e0a656eea6348e31d0b27cbc91 | 2021-02-13T00:11:28.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-large-panx-dataset-ja | 19 | null | transformers | 8,538 | # XLM-RoBERTa for NER
XLM-RoBERTa finetuned on NER. Check more detail at [TNER repository](https://github.com/asahi417/tner).
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ja")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ja")
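# A possible inference sketch (not part of the original card): wrap the loaded model in the
# token-classification pipeline. The Japanese example sentence and the aggregation setting
# are illustrative, not taken from the TNER documentation.
from transformers import pipeline
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(nlp("株式会社トヨタは愛知県に本社を置く。"))  # "Toyota is headquartered in Aichi Prefecture."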
``` |
bertin-project/bertin-base-xnli-es | 8b7c57c0e25e18a04411a98083924b07609779bd | 2021-09-23T13:42:09.000Z | [
"pytorch",
"roberta",
"text-classification",
"es",
"transformers",
"spanish",
"xnli",
"license:cc-by-4.0"
] | text-classification | false | bertin-project | null | bertin-project/bertin-base-xnli-es | 19 | 1 | transformers | 8,539 | ---
language: es
license: cc-by-4.0
tags:
- spanish
- roberta
- xnli
---
This checkpoint has been trained for the XNLI dataset.
This checkpoint was created from **Bertin Gaussian 512**, which is a **RoBERTa-base** model trained from scratch in Spanish. Information on this base model may be found on [its own card](https://huggingface.co/bertin-project/bertin-base-gaussian-exp-512seqlen) and, in greater detail, on [the main project card](https://huggingface.co/bertin-project/bertin-roberta-base-spanish).
The training dataset for the base model is [mc4](https://huggingface.co/datasets/bertin-project/mc4-es-sampled), subsampled to a total of about 50 million documents. Sampling is biased towards average perplexity values (using a Gaussian function), more often discarding documents with very large values (poor quality) or very small values (short, repetitive texts).
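A minimal sketch of querying the fine-tuned checkpoint for natural language inference (the premise/hypothesis pair is illustrative, and the label mapping is read from the checkpoint config rather than assumed):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_id = "bertin-project/bertin-base-xnli-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
premise = "El equipo ganó el partido por tres goles."  # "The team won the match by three goals."
hypothesis = "El equipo perdió el partido."            # "The team lost the match."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
for i, p in enumerate(probs):
    print(model.config.id2label[i], round(float(p), 3))
```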
This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
## Team members
- Eduardo González ([edugp](https://huggingface.co/edugp))
- Javier de la Rosa ([versae](https://huggingface.co/versae))
- Manu Romero ([mrm8488](https://huggingface.co/))
- María Grandury ([mariagrandury](https://huggingface.co/))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Paulo Villegas ([paulo](https://huggingface.co/paulo)) |
bettertextapp/bart_large_paraphrase_generator_en_de_v2 | eb1b263b3a60f45b73af239d002faba8f918fc00 | 2022-02-21T21:11:51.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | bettertextapp | null | bettertextapp/bart_large_paraphrase_generator_en_de_v2 | 19 | null | transformers | 8,540 | ---
tags:
- generated_from_trainer
model-index:
- name: bart_large_paraphrase_generator_en_de_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_large_paraphrase_generator_en_de_v2
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
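Pending more documentation, a minimal generation sketch follows (the input sentence and decoding settings are illustrative; since this is an mBART-family checkpoint, source/target language control tokens may additionally need to be configured on the tokenizer):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_id = "bettertextapp/bart_large_paraphrase_generator_en_de_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
text = "The weather in Berlin is wonderful today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_length=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```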
## Training and evaluation data
More information needed. The evaluation run reported the following metrics:
- eval_loss: 0.9200
- eval_score: 49.97
- eval_precisions: 75.25 / 55.77 / 44.63 / 33.30 (eval_counts: 100712 / 72963 / 57055 / 41578; eval_totals: 133837 / 130839 / 127841 / 124843)
- eval_bp: 1.0; eval_sys_len: 133837; eval_ref_len: 130883
- eval_runtime: 138.69 s (21.617 samples/s, 0.678 steps/s)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
beyhan/bert-base-turkish-ner-cased-pretrained | a463a0cb156e2c384f851a4667a559bb701e9070 | 2021-05-19T12:37:40.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | beyhan | null | beyhan/bert-base-turkish-ner-cased-pretrained | 19 | null | transformers | 8,541 | Entry not found |
boychaboy/MNLI_albert-base-v2 | 116b85250bbdbd945ab7bd486a252f05c84617e5 | 2021-05-14T01:54:43.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | boychaboy | null | boychaboy/MNLI_albert-base-v2 | 19 | null | transformers | 8,542 | Entry not found |
celtics1863/env-bert-cls-chinese | 24dc5f6707bf2fe3b33005949a89c7058a775bec | 2021-10-30T09:27:10.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"transformers",
"environment",
"multi-class",
"classification"
] | text-classification | false | celtics1863 | null | celtics1863/env-bert-cls-chinese | 19 | null | transformers | 8,543 | ---
language:
- zh
tags:
- bert
- pytorch
- environment
- multi-class
- classification
---
A Chinese environmental-domain text classification model, fine-tuned from env-bert-chinese on a 1.6M dataset.
Texts are classified into 10 categories: environmental impact assessment and control, carbon emission control, water pollution control, air pollution control, soil pollution control, environmental ecology, solid waste, environmental toxicology and health, environmental microbiology, and environmental policy and economics.
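A minimal usage sketch (the example sentence is illustrative; the predicted label names come from the checkpoint config):
```python
from transformers import pipeline
classifier = pipeline(
    "text-classification",
    model="celtics1863/env-bert-cls-chinese",
    tokenizer="celtics1863/env-bert-cls-chinese",
)
# Illustrative input: "Industrial wastewater discharge has degraded river water quality."
print(classifier("工业废水排放导致河流水质恶化。"))
```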
The project is ongoing, and related content will continue to be updated.
Research group, School of Environment, Tsinghua University
For related needs or suggestions, contact [email protected] |
chitra/finetune-paraphrase-model | ed3e3bf7811f33bdf5237013d235483f924fe34c | 2022-01-19T04:40:57.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | chitra | null | chitra/finetune-paraphrase-model | 19 | null | transformers | 8,544 | ---
tags:
- generated_from_trainer
model-index:
- name: finetune-paraphrase-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-paraphrase-model
This model is a fine-tuned version of [coderpotter/adversarial-paraphrasing-detector](https://huggingface.co/coderpotter/adversarial-paraphrasing-detector) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.1 | 200 | 3.0116 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
chujiezheng/blenderbot-400M-distill-ESC | ac35a26ae42087e7f0ccc5cdcc97d8cda6fa4b69 | 2022-05-22T23:44:57.000Z | [
"pytorch",
"blenderbot",
"text2text-generation",
"arxiv:2106.01144",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | chujiezheng | null | chujiezheng/blenderbot-400M-distill-ESC | 19 | 1 | transformers | 8,545 | [blenderbot-400M-distill](https://huggingface.co/facebook/blenderbot-400M-distill) fine-tuned on [Emotional Support Conversation](https://arxiv.org/pdf/2106.01144.pdf) dataset |
danyaljj/opengpt2_pytorch_backward | 4c14fe78590bfc5f4358cc7c29a6ee8b63a6b96a | 2021-06-16T20:29:52.000Z | [
"pytorch",
"transformers"
] | null | false | danyaljj | null | danyaljj/opengpt2_pytorch_backward | 19 | null | transformers | 8,546 | West et al.'s model from their "reflective decoding" paper.
Sample usage:
```python
import torch
from modeling_opengpt2 import OpenGPT2LMHeadModel
from padded_encoder import Encoder
path_to_backward = 'danyaljj/opengpt2_pytorch_backward'
encoder = Encoder()
model_backward = OpenGPT2LMHeadModel.from_pretrained(path_to_backward)
input = "until she finally won."
input_ids = encoder.encode(input)
input_ids = torch.tensor([input_ids[::-1] ], dtype=torch.int)
print(input_ids)
output = model_backward.generate(input_ids)
output_text = encoder.decode(output.tolist()[0][::-1])
print(output_text)
```
Download the additional files from here: https://github.com/peterwestuw/GPT2ForwardBackward
|
dpalominop/bert-large-cased-finetuned-ner | d5315d714523389b206e30fdbc9457a131bd6aba | 2021-05-19T16:06:38.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | dpalominop | null | dpalominop/bert-large-cased-finetuned-ner | 19 | null | transformers | 8,547 | Entry not found |
edugp/data2vec-nlp-base | 07514a15d71f8cb624fd36aa22300061e27c9677 | 2022-02-03T23:23:15.000Z | [
"pytorch",
"data2vec",
"fill-mask",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | edugp | null | edugp/data2vec-nlp-base | 19 | null | transformers | 8,548 | ---
license: apache-2.0
tags:
model-index:
- name: data2vec-nlp-base
results: []
---
# Data2Vec NLP Base
This model was converted from `fairseq`.
The original weights can be found in https://dl.fbaipublicfiles.com/fairseq/data2vec/nlp_base.pt
Example usage:
```python
from transformers import RobertaTokenizer, Data2VecForSequenceClassification, Data2VecConfig
import torch
tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
config = Data2VecConfig.from_pretrained("edugp/data2vec-nlp-base")
model = Data2VecForSequenceClassification.from_pretrained("edugp/data2vec-nlp-base", config=config)
# Fine-tune this model
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
```
|
fav-kky/FERNET-News | dd5d3ec15f0ab34b9bbf1c8f9f67447524b3d362 | 2021-07-26T21:05:10.000Z | [
"pytorch",
"tf",
"roberta",
"fill-mask",
"cs",
"arxiv:2107.10042",
"transformers",
"Czech",
"KKY",
"FAV",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | fav-kky | null | fav-kky/FERNET-News | 19 | null | transformers | 8,549 | ---
language: "cs"
tags:
- Czech
- KKY
- FAV
license: "cc-by-nc-sa-4.0"
---
# FERNET-News
FERNET-News is a monolingual Czech RoBERTa-base model pre-trained on 20.5 GB of a thoroughly cleaned Czech news corpus.
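A minimal masked-language-modelling sketch (the Czech example sentence is illustrative; as a RoBERTa-style model it uses the `<mask>` token):
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="fav-kky/FERNET-News")
# Illustrative sentence: "Prague is the capital <mask> of the Czech Republic."
print(fill_mask("Praha je hlavní <mask> České republiky."))
```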
Preprint of our paper is available at https://arxiv.org/abs/2107.10042. |
gchhablani/bert-base-cased-finetuned-rte | 7e5be8e895f03887545da0172f91beffa92442c1 | 2021-09-20T09:08:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"en",
"dataset:glue",
"arxiv:2105.03824",
"transformers",
"generated_from_trainer",
"fnet-bert-base-comparison",
"license:apache-2.0",
"model-index"
] | text-classification | false | gchhablani | null | gchhablani/bert-base-cased-finetuned-rte | 19 | null | transformers | 8,550 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6714801444043321
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-rte
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7260
- Accuracy: 0.6715
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name rte \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-rte \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6915 | 1.0 | 156 | 0.6491 | 0.6606 |
| 0.55 | 2.0 | 312 | 0.6737 | 0.6570 |
| 0.3955 | 3.0 | 468 | 0.7260 | 0.6715 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
giganticode/StackOBERTflow-comments-small-v1 | fab1947828858dd1ac1a69cb422b47a9444c7500 | 2021-05-20T16:33:56.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | giganticode | null | giganticode/StackOBERTflow-comments-small-v1 | 19 | null | transformers | 8,551 | # StackOBERTflow-comments-small
StackOBERTflow is a RoBERTa model trained on StackOverflow comments.
A Byte-level BPE tokenizer with dropout was used (using the `tokenizers` package).
The model is *small*, i.e. it has only 6 layers, and the maximum sequence length was restricted to 256 tokens.
The model was trained for 6 epochs on several GBs of comments from the StackOverflow corpus.
## Quick start: masked language modeling prediction
```python
from transformers import pipeline
from pprint import pprint
COMMENT = "You really should not do it this way, I would use <mask> instead."
fill_mask = pipeline(
"fill-mask",
model="giganticode/StackOBERTflow-comments-small-v1",
tokenizer="giganticode/StackOBERTflow-comments-small-v1"
)
pprint(fill_mask(COMMENT))
# [{'score': 0.019997311756014824,
# 'sequence': '<s> You really should not do it this way, I would use jQuery instead.</s>',
# 'token': 1738},
# {'score': 0.01693696901202202,
# 'sequence': '<s> You really should not do it this way, I would use arrays instead.</s>',
# 'token': 2844},
# {'score': 0.013411642983555794,
# 'sequence': '<s> You really should not do it this way, I would use CSS instead.</s>',
# 'token': 2254},
# {'score': 0.013224546797573566,
# 'sequence': '<s> You really should not do it this way, I would use it instead.</s>',
# 'token': 300},
# {'score': 0.011984303593635559,
# 'sequence': '<s> You really should not do it this way, I would use classes instead.</s>',
# 'token': 1779}]
```
|
google/tapas-medium-finetuned-tabfact | d75f8445e8df10f8b3bc6dd54d819acadecd9551 | 2021-11-29T13:09:54.000Z | [
"pytorch",
"tf",
"tapas",
"text-classification",
"en",
"dataset:tab_fact",
"arxiv:2010.00571",
"arxiv:2004.02349",
"transformers",
"sequence-classification",
"license:apache-2.0"
] | text-classification | false | google | null | google/tapas-medium-finetuned-tabfact | 19 | null | transformers | 8,552 | ---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
datasets:
- tab_fact
---
# TAPAS medium model fine-tuned on Tabular Fact Checking (TabFact)
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_medium_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_medium`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then
jointly training this randomly initialized classification head with the base model on TabFact.
## Intended uses & limitations
You can use this model for classifying whether a sentence is supported or refuted by the contents of a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
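As a rough illustration (not taken from the official documentation), the snippet below shows one way such a table-verification query could look; the table, the sentence, and the padding choice are illustrative, and depending on your `transformers` version the TAPAS implementation may additionally require the `torch-scatter` package:
```python
import pandas as pd
import torch
from transformers import TapasTokenizer, TapasForSequenceClassification
model_id = "google/tapas-medium-finetuned-tabfact"
tokenizer = TapasTokenizer.from_pretrained(model_id)
model = TapasForSequenceClassification.from_pretrained(model_id)
# TAPAS expects every table cell as a string.
table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population (millions)": ["2.1", "3.6"]})
sentence = "Berlin has a larger population than Paris."
inputs = tokenizer(table=table, queries=[sentence], padding="max_length", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# The binary label mapping (supported vs. refuted) comes from the checkpoint config.
print(model.config.id2label[int(logits.argmax(dim=-1))])
```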
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup
ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@inproceedings{2019TabFactA,
title={TabFact : A Large-scale Dataset for Table-based Fact Verification},
author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang},
booktitle = {International Conference on Learning Representations (ICLR)},
address = {Addis Ababa, Ethiopia},
month = {April},
year = {2020}
}
``` |
hakurei/lit-6B-8bit | e2e9d5beafb3dddd58409d9b6288cec36bad6673 | 2022-02-19T01:30:48.000Z | [
"pytorch",
"en",
"causal-lm",
"license:mit"
] | null | false | hakurei | null | hakurei/lit-6B-8bit | 19 | 2 | null | 8,553 | ---
language:
- en
tags:
- pytorch
- causal-lm
license: mit
---
# Lit-6B - A Large Fine-tuned Model For Fictional Storytelling
Lit-6B is a GPT-J 6B model fine-tuned on 2GB of a diverse range of light novels, erotica, and annotated literature for the purpose of generating novel-like fictional text.
## Model Description
The model used for fine-tuning is [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax), which is a 6 billion parameter auto-regressive language model trained on [The Pile](https://pile.eleuther.ai/).
## Training Data & Annotative Prompting
The data used in fine-tuning has been gathered from various sources such as the [Gutenberg Project](https://www.gutenberg.org/). The annotated fiction dataset has prepended tags to assist in generating towards a particular style. Here is an example prompt that shows how to use the annotations.
```
[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror; Tags: 3rdperson, scary; Style: Dark ]
***
When a traveler in north central Massachusetts takes the wrong fork...
```
The annotations can be mixed and matched to help generate towards a specific style.
## Downstream Uses
This model can be used for entertainment purposes and as a creative writing assistant for fiction writers.
## Example Code
```
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained('hakurei/lit-6B')
tokenizer = AutoTokenizer.from_pretrained('hakurei/lit-6B')
prompt = '''[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ]
***
When a traveler'''
input_ids = tokenizer.encode(prompt, return_tensors='pt')
output = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, repetition_penalty=1.2, max_length=len(input_ids[0])+100, pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0])
print(generated_text)
```
An example output from this code produces a result that will look similar to:
```
[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ]
***
When a traveler comes to an unknown region, his thoughts turn inevitably towards the old gods and legends which cluster around its appearance. It is not that he believes in them or suspects their reality—but merely because they are present somewhere else in creation just as truly as himself, and so belong of necessity in any landscape whose features cannot be altogether strange to him. Moreover, man has been prone from ancient times to brood over those things most connected with the places where he dwells. Thus the Olympian deities who ruled Hyper
```
## Team members and Acknowledgements
This project would not have been possible without the computational resources graciously provided by the [TPU Research Cloud](https://sites.research.google/trc/)
- [Anthony Mercurio](https://github.com/harubaru)
- Imperishable_NEET |
hectorcotelo/autonlp-spanish_songs-202661 | c10b626c922ee2610ea41ec314440ddd45af4273 | 2021-05-19T11:38:11.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"dataset:hectorcotelo/autonlp-data-spanish_songs",
"transformers",
"autonlp"
] | text-classification | false | hectorcotelo | null | hectorcotelo/autonlp-spanish_songs-202661 | 19 | null | transformers | 8,554 | ---
tags: autonlp
language: es
widget:
- text: "Y si me tomo una cerveza
Vuelves a mi cabeza
Y empiezo a recordarte
Es que me gusta cómo besas
Con tu delicadeza
Puede ser que
Tú y yo, somos el uno para el otro
Que no dejo de pensarte
Quise olvidarte y tomé un poco
Y resultó extrañarte, yeah"
datasets:
- hectorcotelo/autonlp-data-spanish_songs
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 202661
## Validation Metrics
- Loss: 1.5369086265563965
- Accuracy: 0.30762817840766987
- Macro F1: 0.28034259092597485
- Micro F1: 0.30762817840766987
- Weighted F1: 0.28072818168048186
- Macro Precision: 0.3113843896292072
- Micro Precision: 0.30762817840766987
- Weighted Precision: 0.3128459166476807
- Macro Recall: 0.3071652685939504
- Micro Recall: 0.30762817840766987
- Weighted Recall: 0.30762817840766987
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/hectorcotelo/autonlp-spanish_songs-202661
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("hectorcotelo/autonlp-spanish_songs-202661", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("hectorcotelo/autonlp-spanish_songs-202661", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
howey/electra-base-mnli | 814f68846e1d987803f51bfe76eb1bfb4e27416e | 2022-03-08T18:08:21.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | howey | null | howey/electra-base-mnli | 19 | null | transformers | 8,555 | Entry not found |
huggingartists/drake | 940b328a923569df57a4c843de83674c9b88bc9c | 2022-07-07T14:26:57.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/drake",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
] | text-generation | false | huggingartists | null | huggingartists/drake | 19 | null | transformers | 8,556 | ---
language: en
datasets:
- huggingartists/drake
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/631b206379b60df5e1da90e84d35fdbe.1000x1000x1.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Drake</div>
<a href="https://genius.com/artists/drake">
<div style="text-align: center; font-size: 14px;">@drake</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Drake.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/drake).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/drake")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2e42ok17/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Drake's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2xe72oq3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2xe72oq3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/drake')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/drake")
model = AutoModelWithLMHead.from_pretrained("huggingartists/drake")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/queen | 7f2c5a89d3c5e793e875bbe4b5eca67d0f64a5c1 | 2022-07-13T06:52:09.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/queen",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
] | text-generation | false | huggingartists | null | huggingartists/queen | 19 | null | transformers | 8,557 | ---
language: en
datasets:
- huggingartists/queen
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/97bcb5755cb9780d76b37726a0ce4bef.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Queen</div>
<a href="https://genius.com/artists/queen">
<div style="text-align: center; font-size: 14px;">@queen</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Queen.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/queen).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/queen")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1jdprwq2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Queen's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2lvkoamo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2lvkoamo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/queen')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/queen")
model = AutoModelWithLMHead.from_pretrained("huggingartists/queen")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingtweets/alvarouribevel | 3bec5573ba32597f5dc23e14384f0fef6af999b8 | 2021-06-11T16:26:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/alvarouribevel | 19 | null | transformers | 8,558 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/479052171837984768/mlO43FWa_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Álvaro Uribe Vélez</div>
<div style="text-align: center; font-size: 14px;">@alvarouribevel</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Álvaro Uribe Vélez.
| Data | Álvaro Uribe Vélez |
| --- | --- |
| Tweets downloaded | 3240 |
| Retweets | 1335 |
| Short tweets | 228 |
| Tweets kept | 1677 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1439yxv6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alvarouribevel's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ly70v6r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ly70v6r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/alvarouribevel')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/cavidaga-elonmusk | 090a6b9de6773578c6a74da254d20de8df0531e5 | 2021-07-31T08:35:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/cavidaga-elonmusk | 19 | null | transformers | 8,559 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1416443682157473795/dGtFbtht_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1420013003483852810/Rsl-fb7i_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Cavid Ağa</div>
<div style="text-align: center; font-size: 14px;">@cavidaga-elonmusk</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Cavid Ağa.
| Data | Elon Musk | Cavid Ağa |
| --- | --- | --- |
| Tweets downloaded | 830 | 3221 |
| Retweets | 48 | 483 |
| Short tweets | 237 | 263 |
| Tweets kept | 545 | 2475 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ydwi0ay/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cavidaga-elonmusk's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mxx9rsu8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mxx9rsu8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cavidaga-elonmusk')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/deontologistics | 444df11ff2855d856bbf162cbee351b154506454 | 2021-05-22T01:22:08.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/deontologistics | 19 | null | transformers | 8,560 | ---
language: en
thumbnail: https://www.huggingtweets.com/deontologistics/1616689045190/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1357656503566622720/PGCAnBgE_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">pete wolfendale 🤖 AI Bot </div>
<div style="font-size: 15px">@deontologistics bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@deontologistics's tweets](https://twitter.com/deontologistics).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3230 |
| Retweets | 590 |
| Short tweets | 187 |
| Tweets kept | 2453 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ahwv4uv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deontologistics's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2dpgq6x6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2dpgq6x6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deontologistics')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/fesshole | 6fb16c8dda3c55c000bd2516189201d4fd8c51ec | 2022-07-07T10:39:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/fesshole | 19 | null | transformers | 8,561 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1172580448662372353/SwJNqDQl_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Fesshole 🧻</div>
<div style="text-align: center; font-size: 14px;">@fesshole</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Fesshole 🧻.
| Data | Fesshole 🧻 |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 14 |
| Short tweets | 1 |
| Tweets kept | 3235 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3473th10/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fesshole's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1wz2ncbz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1wz2ncbz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fesshole')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/leehsienloong | 8c52c7843baeb3d6629f34fe311b087454e83b1a | 2021-05-22T11:47:48.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/leehsienloong | 19 | null | transformers | 8,562 | ---
language: en
thumbnail: https://www.huggingtweets.com/leehsienloong/1602584946584/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1292656123422498817/KsNLC4Uc_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">leehsienloong 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@leehsienloong bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@leehsienloong's tweets](https://twitter.com/leehsienloong).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3195</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>36</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>39</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>3120</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/bodl1o36/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @leehsienloong's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/7ajjl7j0) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/7ajjl7j0/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/leehsienloong'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
huggingtweets/seocamp | abd95279c1d95ba9f19805124691fa1c794bc4dc | 2021-05-22T22:29:06.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/seocamp | 19 | null | transformers | 8,563 | ---
language: en
thumbnail: https://www.huggingtweets.com/seocamp/1600856567422/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/557135313558970369/0rA33HGL_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">SEO Camp 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@seocamp bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@seocamp's tweets](https://twitter.com/seocamp).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3238</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>849</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>53</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2336</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/2g3bq1ht/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @seocamp's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/2725jswm) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/2725jswm/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/seocamp'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
huggingtweets/tweeting691 | 10eb60257e4dd91759097012fdf22dc8ada2ac24 | 2021-05-23T03:02:02.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/tweeting691 | 19 | null | transformers | 8,564 | ---
language: en
thumbnail: https://www.huggingtweets.com/tweeting691/1609406697752/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1344038435204562951/gw-6-9w9_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">dr. jesus 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@tweeting691 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@tweeting691's tweets](https://twitter.com/tweeting691).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>185</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>23</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>161</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3a553tjb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tweeting691's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/15gnpyl6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/15gnpyl6/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/tweeting691'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/twomad | 1e8b9194867ac12fd3bada03554a338cad617e40 | 2021-05-23T03:07:13.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/twomad | 19 | null | transformers | 8,565 | ---
language: en
thumbnail: https://www.huggingtweets.com/twomad/1618363135274/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1375541353564700672/Ocxb3A5u_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">twomad⁉️ 🤖 AI Bot </div>
<div style="font-size: 15px">@twomad bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@twomad's tweets](https://twitter.com/twomad).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 39 |
| Short tweets | 1769 |
| Tweets kept | 1441 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/mxyoi4m2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @twomad's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rwdxqqe) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rwdxqqe/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/twomad')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jcblaise/electra-tagalog-small-uncased-discriminator-newsphnli | f992b7265be1297a03f9c6f81c9e00d8bb6c85bb | 2020-12-08T10:24:28.000Z | [
"pytorch",
"tf",
"electra",
"text-classification",
"transformers"
] | text-classification | false | jcblaise | null | jcblaise/electra-tagalog-small-uncased-discriminator-newsphnli | 19 | null | transformers | 8,566 | Entry not found |
joelito/bert-base-uncased-sem_eval_2010_task_8 | 1bc05a5cc0032845237919a2b85332917eb2260c | 2021-05-19T20:50:51.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | joelito | null | joelito/bert-base-uncased-sem_eval_2010_task_8 | 19 | null | transformers | 8,567 | # bert-base-uncased-sem_eval_2010_task_8
Task: sem_eval_2010_task_8
Base Model: bert-base-uncased
Trained for 3 epochs
Batch-size: 6
Seed: 42
Test F1-Score: 0.8 |
kwang2049/TSDAE-askubuntu2nli_stsb | cd3499a85bb9242fb995726ed8d00988b51d1c81 | 2021-10-25T16:13:34.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"transformers"
] | feature-extraction | false | kwang2049 | null | kwang2049/TSDAE-askubuntu2nli_stsb | 19 | null | transformers | 8,568 | # kwang2049/TSDAE-askubuntu2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain AskUbuntu. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on AskUbuntu with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
A convenient way to use this model is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
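For orientation, a minimal TSDAE training sketch with SentenceTransformers is shown below; the corpus file name and hyperparameters are placeholders, and the linked page remains the authoritative reference.
```python
from sentence_transformers import SentenceTransformer, models, datasets, losses
from torch.utils.data import DataLoader
# Build a BERT encoder with CLS-pooling, as used by this model
word_embedding_model = models.Transformer("bert-base-uncased")
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode="cls")
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
# Unlabeled in-domain sentences (placeholder file, e.g. AskUbuntu text)
train_sentences = [line.strip() for line in open("askubuntu_sentences.txt") if line.strip()]
# TSDAE: the dataset adds noise (token deletion) and the loss reconstructs the original sentence
train_dataset = datasets.DenoisingAutoEncoderDataset(train_sentences)
train_dataloader = DataLoader(train_dataset, batch_size=8, shuffle=True)
train_loss = losses.DenoisingAutoEncoderLoss(model, tie_encoder_decoder=True)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    weight_decay=0,
    scheduler="constantlr",
    optimizer_params={"lr": 3e-5},
    show_progress_bar=True,
)
```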
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
    title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
laboro-ai/distilbert-base-japanese-finetuned-livedoor | 1918b65a2cd7e007ebe156fb5374d80d381d085a | 2020-12-18T03:09:54.000Z | [
"pytorch",
"distilbert",
"text-classification",
"ja",
"transformers",
"license:cc-by-nc-4.0"
] | text-classification | false | laboro-ai | null | laboro-ai/distilbert-base-japanese-finetuned-livedoor | 19 | null | transformers | 8,569 | ---
language: ja
tags:
- distilbert
license: cc-by-nc-4.0
---
|
liaad/srl-pt_mbert-base | 8134ee989df4975f4993ca7e627410ddfdb8e791 | 2021-09-22T08:56:31.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"multilingual",
"pt",
"dataset:PropBank.Br",
"arxiv:2101.01213",
"transformers",
"bert-base-multilingual-cased",
"semantic role labeling",
"finetuned",
"license:apache-2.0"
] | feature-extraction | false | liaad | null | liaad/srl-pt_mbert-base | 19 | null | transformers | 8,570 | ---
language:
- multilingual
- pt
tags:
- bert-base-multilingual-cased
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
metrics:
- F1 Measure
---
# mBERT fine-tuned on Portuguese semantic role labeling
## Model description
This model is the [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) fine-tuned on Portuguese semantic role labeling data. This is part of a project from which resulted the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-pt_mbert-base")
model = AutoModel.from_pretrained("liaad/srl-pt_mbert-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Training procedure
The model was trained on the PropBank.Br datasets, using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
lucio/xls-r-kyrgiz-cv8 | 9b2212ed46efc01ecf8524f579ec910758e82a1d | 2022-03-23T18:34:57.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ky",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lucio | null | lucio/xls-r-kyrgiz-cv8 | 19 | null | transformers | 8,571 | ---
language:
- ky
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M Kyrgiz CV8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ky
metrics:
- name: Test WER (with LM)
type: wer
value: 19.01
- name: Test CER (with LM)
type: cer
value: 5.38
- name: Test WER (no LM)
type: wer
value: 31.28
- name: Test CER (no LM)
type: cer
value: 7.66
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M Kyrgiz CV8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - KY dataset.
It achieves the following results on the validation set:
- Loss: 0.5497
- Wer: 0.2945
- Cer: 0.0791
## Model description
For a description of the model architecture, see [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
The model vocabulary consists of the Cyrillic alphabet with punctuation removed.
The KenLM language model is built from the text of the `train` and `invalidated` corpus splits.
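For quick experimentation, the checkpoint can be run through the `transformers` ASR pipeline. This is only a sketch: the audio path is a placeholder and 16 kHz mono input is assumed; if `pyctcdecode`/`kenlm` are installed and the repository ships decoder files, the pipeline should apply the language model automatically, otherwise plain CTC decoding is used.
```python
from transformers import pipeline
# Minimal sketch: load this checkpoint through the ASR pipeline
asr = pipeline("automatic-speech-recognition", model="lucio/xls-r-kyrgiz-cv8")
# Placeholder path to a 16 kHz mono recording in Kyrgyz
transcription = asr("sample_kyrgyz_16khz.wav", chunk_length_s=30)
print(transcription["text"])
```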
## Intended uses & limitations
This model is expected to be of some utility for low-fidelity use cases such as:
- Draft video captions
- Indexing of recorded broadcasts
The model is not reliable enough to use as a substitute for live captions for accessibility purposes, and it should not be used in a manner that would infringe the privacy of any of the contributors to the Common Voice dataset nor any other speakers.
## Training and evaluation data
The combination of the `train`, `dev` and `other` official Common Voice splits was used as training data. Half of the official `test` split was used as validation data, and the full `test` set was used for final evaluation.
## Training procedure
The featurization layers of the XLS-R model are frozen while tuning a final CTC/LM layer on the Kyrgiz CV8 example sentences. A ramped learning rate is used with an initial warmup phase of 500 steps, a max of 0.0001, and cooling back towards 0 for the remainder of the 8100 steps (300 epochs).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 300.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 3.1079 | 18.51 | 500 | 2.6795 | 0.9996 | 0.9825 |
| 0.8506 | 37.04 | 1000 | 0.4323 | 0.3718 | 0.0961 |
| 0.6821 | 55.55 | 1500 | 0.4105 | 0.3311 | 0.0878 |
| 0.6091 | 74.07 | 2000 | 0.4281 | 0.3168 | 0.0851 |
| 0.5429 | 92.58 | 2500 | 0.4525 | 0.3147 | 0.0842 |
| 0.5063 | 111.11 | 3000 | 0.4619 | 0.3144 | 0.0839 |
| 0.4661 | 129.62 | 3500 | 0.4660 | 0.3039 | 0.0818 |
| 0.4353 | 148.15 | 4000 | 0.4695 | 0.3083 | 0.0820 |
| 0.4048 | 166.65 | 4500 | 0.4909 | 0.3085 | 0.0824 |
| 0.3852 | 185.18 | 5000 | 0.5074 | 0.3048 | 0.0812 |
| 0.3567 | 203.69 | 5500 | 0.5111 | 0.3012 | 0.0810 |
| 0.3451 | 222.22 | 6000 | 0.5225 | 0.2982 | 0.0804 |
| 0.325 | 240.73 | 6500 | 0.5270 | 0.2955 | 0.0796 |
| 0.3089 | 259.25 | 7000 | 0.5381 | 0.2929 | 0.0793 |
| 0.2941 | 277.76 | 7500 | 0.5565 | 0.2923 | 0.0794 |
| 0.2945 | 296.29 | 8000 | 0.5495 | 0.2951 | 0.0789 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
luiz826/roberta-to-music-genre | 2e10604ea9c0bfaee4b50467c11f46ebfa7c720e | 2021-12-12T16:36:12.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | luiz826 | null | luiz826/roberta-to-music-genre | 19 | null | transformers | 8,572 | This model was made for a project in the NLP group of the Technology and Artificial Intelligence League (TAIL).
It aims to predict a music genre from song lyrics. |
m3hrdadfi/albert-fa-base-v2-sentiment-binary | f257e9f5fce378e4b287173361ef45470ffcbcb8 | 2020-12-26T08:46:58.000Z | [
"pytorch",
"tf",
"albert",
"text-classification",
"fa",
"transformers",
"license:apache-2.0"
] | text-classification | false | m3hrdadfi | null | m3hrdadfi/albert-fa-base-v2-sentiment-binary | 19 | 1 | transformers | 8,573 | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> میتونی بهش بگی برت_کوچولو
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news), with more than 3.9M documents, 73M sentences, and 1.3B words, in the same way as we did for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types.
## Results
The model obtained an F1 score of 87.56% on a combination of all three datasets mapped to the binary labels `Negative` and `Positive`.
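A minimal inference sketch with the `transformers` pipeline is shown below. The label names returned depend on the model configuration, so treat them as placeholders to be mapped onto `Negative`/`Positive`.
```python
from transformers import pipeline
# Minimal sketch: binary sentiment classification for Persian text
classifier = pipeline("text-classification", model="m3hrdadfi/albert-fa-base-v2-sentiment-binary")
# Example input: "This product was great and I am very satisfied."
result = classifier("این محصول عالی بود و خیلی راضی هستم")
print(result)  # e.g. [{'label': ..., 'score': ...}] - label names come from the model config
```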
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
malay-huggingface/bert-tiny-bahasa-cased | 6b30b65ba47d921d7f5716f733ac4211185d4bf1 | 2021-09-11T16:15:36.000Z | [
"pytorch",
"bert",
"fill-mask",
"ms",
"transformers",
"autotrain_compatible"
] | fill-mask | false | malay-huggingface | null | malay-huggingface/bert-tiny-bahasa-cased | 19 | null | transformers | 8,574 | ---
language: ms
---
# bert-tiny-bahasa-cased
Pretrained BERT tiny language model for Malay.
## Pretraining Corpus
The `bert-tiny-bahasa-cased` model was pretrained on ~1.4 billion words. Below is the list of data it was trained on:
1. [cleaned local texts](https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean).
2. [translated The Pile](https://github.com/huseinzol05/malay-dataset/tree/master/corpus/pile).
## Pretraining details
- All steps can be reproduced from here: [Malaya/pretrained-model/bert](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/bert).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then initializing it directly like this:
```python
from transformers import BertTokenizer, BertModel
model = BertModel.from_pretrained('malay-huggingface/bert-tiny-bahasa-cased')
tokenizer = BertTokenizer.from_pretrained(
'malay-huggingface/bert-tiny-bahasa-cased',
do_lower_case = False,
)
```
## Example using AutoModelWithLMHead
```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline
model = BertForMaskedLM.from_pretrained('malay-huggingface/bert-tiny-bahasa-cased')
tokenizer = BertTokenizer.from_pretrained(
'malay-huggingface/bert-tiny-bahasa-cased',
do_lower_case = False,
)
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_mask('Permohonan Najib, anak untuk dengar isu perlembagaan [MASK] .')
```
Output is,
```text
[{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan Malaysia.',
'score': 0.09178723394870758,
'token': 1957,
'token_str': 'M a l a y s i a'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan negara.',
'score': 0.053524162620306015,
'token': 2134,
'token_str': 'n e g a r a'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan dikemukakan.',
'score': 0.031137527897953987,
'token': 9383,
'token_str': 'd i k e m u k a k a n'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan 1MDB.',
'score': 0.02826082520186901,
'token': 13838,
'token_str': '1 M D B'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan ditolak.',
'score': 0.026568090543150902,
'token': 11465,
'token_str': 'd i t o l a k'}]
```
|
manueldeprada/t5-cord19-paraphrase-paws-msrp-opinosis | 0a5da286d393e31526c58b537c941fc4d6a8fa1e | 2021-06-23T12:34:22.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | manueldeprada | null | manueldeprada/t5-cord19-paraphrase-paws-msrp-opinosis | 19 | null | transformers | 8,575 | # T5-Paraphrase pretrained using the CORD-19 dataset.
The base model is manueldeprada/t5-cord19, which has been pretrained with the text and abstracts from the CORD-19 dataset.
It has been fine-tuned for text paraphrasing, like ceshine/t5-paraphrase-paws-msrp-opinosis, using the scripts from the [ceshine/finetuning-t5 GitHub repo](https://github.com/ceshine/finetuning-t5/tree/master/paraphrase).
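A rough usage sketch is given below; it assumes the same `paraphrase:` input prefix convention used by the ceshine paraphrase scripts, which may need to be adjusted to match the actual training setup.
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = "manueldeprada/t5-cord19-paraphrase-paws-msrp-opinosis"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
# Assumed prefix convention; verify against the training scripts linked above
text = "paraphrase: The vaccine triggers a strong immune response against the virus. </s>"
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64, num_beams=5, num_return_sequences=3, early_stopping=True)
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```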
It performs the same paraphrasing task, but the CORD-19 pretraining allows the model to perform well on COVID-19-related text. |
mattchurgin/xls-r-eng | 148cffc40d176e85145d5f90a8a65c405f030f01 | 2022-01-23T17:31:10.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mattchurgin | null | mattchurgin/xls-r-eng | 19 | null | transformers | 8,576 | ---
language:
- ab
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [patrickvonplaten/wav2vec2_tiny_random_robust](https://huggingface.co/patrickvonplaten/wav2vec2_tiny_random_robust) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
midas/gupshup_e2e_t5 | 8a9f367c92827964c12573889b5177e0b00105e5 | 2021-11-14T02:08:01.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:1910.04073",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | midas | null | midas/gupshup_e2e_t5 | 19 | null | transformers | 8,577 | # Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please request the Gupshup data using [this Google form](https://docs.google.com/forms/d/1zvUk7WcldVF3RCoHdWzQPzPprtSJClrnHoIOYbzaJEI/edit?ts=61381ec0).
The dataset is available for `Hinglish Dialogues to English Summarization` (h2e) and `English Dialogues to English Summarization` (e2e). For each task, dialogues/conversations use the `.source` file extension (e.g. train.source), whereas summaries use the `.target` file extension (e.g. train.target). The ".source" file needs to be provided to the `input_path` argument and the ".target" file to the `reference_path` argument in the scripts.
## Models
All model weights are available on the Hugging Face model hub. Users can either download these weights locally and provide that path to the `model_name` argument in the scripts, or pass the provided alias to `model_name` directly; in the latter case the scripts download the weights automatically.
Model names follow the pattern "gupshup_TASK_MODEL", where "TASK" can be h2e or e2e and MODEL can be mbart, pegasus, etc., as listed below.
**1. Hinglish Dialogues to English Summary (h2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_h2e_mbart](https://huggingface.co/midas/gupshup_h2e_mbart) |
| PEGASUS | [midas/gupshup_h2e_pegasus](https://huggingface.co/midas/gupshup_h2e_pegasus) |
| T5 MTL | [midas/gupshup_h2e_t5_mtl](https://huggingface.co/midas/gupshup_h2e_t5_mtl) |
| T5 | [midas/gupshup_h2e_t5](https://huggingface.co/midas/gupshup_h2e_t5) |
| BART | [midas/gupshup_h2e_bart](https://huggingface.co/midas/gupshup_h2e_bart) |
| GPT-2 | [midas/gupshup_h2e_gpt](https://huggingface.co/midas/gupshup_h2e_gpt) |
**2. English Dialogues to English Summary (e2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_e2e_mbart](https://huggingface.co/midas/gupshup_e2e_mbart) |
| PEGASUS | [midas/gupshup_e2e_pegasus](https://huggingface.co/midas/gupshup_e2e_pegasus) |
| T5 MTL | [midas/gupshup_e2e_t5_mtl](https://huggingface.co/midas/gupshup_e2e_t5_mtl) |
| T5 | [midas/gupshup_e2e_t5](https://huggingface.co/midas/gupshup_e2e_t5) |
| BART | [midas/gupshup_e2e_bart](https://huggingface.co/midas/gupshup_e2e_bart) |
| GPT-2 | [midas/gupshup_e2e_gpt](https://huggingface.co/midas/gupshup_e2e_gpt) |
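As an alternative to the evaluation scripts below, any of these checkpoints can also be loaded directly with `transformers`. A minimal sketch for this card's model is shown here; the dialogue is an invented example, and whether a task prefix is required depends on the training scripts, so none is used.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = "midas/gupshup_e2e_t5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# Invented English dialogue (e2e task: English dialogue -> English summary)
dialogue = (
    "Amanda: I baked cookies. Do you want some? "
    "Jerry: Sure! Amanda: I'll bring you some tomorrow."
)
input_ids = tokenizer(dialogue, return_tensors="pt", truncation=True).input_ids
summary_ids = model.generate(input_ids, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```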
## Inference
### Using command line
1. Clone this repo and create a python virtual environment (https://docs.python.org/3/library/venv.html). Install the required packages using
```
git clone https://github.com/midas-research/gupshup.git
pip install -r requirements.txt
```
2. The run_eval script has the following arguments:
* **model_name** : Path or alias to one of our models available on Huggingface as listed above.
* **input_path** : Source file or path to file containing conversations, which will be summarized.
* **save_path** : File path where to save summaries generated by the model.
* **reference_path** : Target file or path to file containing summaries, used to calculate matrices.
* **score_path** : File path where to save scores.
* **bs** : Batch size
* **device**: Cuda devices to use.
Please make sure you have downloaded the Gupshup dataset using the above Google form and provide the correct paths to these files in the `input_path` and `reference_path` arguments. Alternatively, you can simply put `test.source` and `test.target` in the `data/h2e/` (Hinglish to English) or `data/e2e/` (English to English) folder. For example, to generate English summaries from Hinglish dialogues using the mBART model, run the following command
```
python run_eval.py \
--model_name midas/gupshup_h2e_mbart \
--input_path data/h2e/test.source \
--save_path generated_summary.txt \
--reference_path data/h2e/test.target \
--score_path scores.txt \
--bs 8
```
Another example, to generate English summaries from English dialogues using the Pegasus model
```
python run_eval.py \
--model_name midas/gupshup_e2e_pegasus \
--input_path data/e2e/test.source \
--save_path generated_summary.txt \
--reference_path data/e2e/test.target \
--score_path scores.txt \
--bs 8
```
Please create an issue if you are facing any difficulties in replicating the results.
### References
Please cite [[1]](https://arxiv.org/abs/1910.04073) if you found the resources in this repository useful.
[1] Mehnaz, Laiba, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Shah. [*GupShup: Summarizing Open-Domain Code-Switched Conversations*](https://aclanthology.org/2021.emnlp-main.499.pdf)
```
@inproceedings{mehnaz2021gupshup,
title={GupShup: Summarizing Open-Domain Code-Switched Conversations},
author={Mehnaz, Laiba and Mahata, Debanjan and Gosangi, Rakesh and Gunturi, Uma Sushmitha and Jain, Riya and Gupta, Gauri and Kumar, Amardeep and Lee, Isabelle G and Acharya, Anish and Shah, Rajiv},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={6177--6192},
year={2021}
}
```
|
mrm8488/t5-base-finetuned-AESLC-summarization | ac192f3ee2f086d7d87693bae073c9603ff0dd69 | 2021-06-23T12:40:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-AESLC-summarization | 19 | null | transformers | 8,578 | Entry not found |
nbouali/flaubert-base-uncased-finetuned-cooking | c21d936fe58805bd72b49ad5333f4ad79b3890bb | 2021-04-28T16:02:59.000Z | [
"pytorch",
"flaubert",
"text-classification",
"fr",
"transformers",
"french",
"flaubert-base-uncased"
] | text-classification | false | nbouali | null | nbouali/flaubert-base-uncased-finetuned-cooking | 19 | null | transformers | 8,579 | ---
language: fr
tags:
- text-classification
- flaubert
- french
- flaubert-base-uncased
widget:
- text: "Lasagnes à la bolognaise"
---
# FlauBERT finetuned on French cooking recipes
This model is finetuned on a sequence classification task that associates each sequence with the appropriate recipe category.
### How to use it?
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TextClassificationPipeline
loaded_tokenizer = AutoTokenizer.from_pretrained("nbouali/flaubert-base-uncased-finetuned-cooking")
loaded_model = AutoModelForSequenceClassification.from_pretrained("nbouali/flaubert-base-uncased-finetuned-cooking")
nlp = TextClassificationPipeline(model=loaded_model,tokenizer=loaded_tokenizer,task="Recipe classification")
print(nlp("Lasagnes à la bolognaise"))
```
```
[{'label': 'LABEL_6', 'score': 0.9921900033950806}]
```
### Label encoding:
| label | Recipe Category |
|:------:|:--------------:|
| 0 |'Accompagnement' |
| 1 | 'Amuse-gueule' |
| 2 | 'Boisson' |
| 3 | 'Confiserie' |
| 4 | 'Dessert'|
| 5 | 'Entrée' |
| 6 |'Plat principal' |
| 7 | 'Sauce' |
<br/>
<br/>
> If you would like to know more about this model you can refer to [our blog post](https://medium.com/unify-data-office/a-cooking-language-model-fine-tuned-on-dozens-of-thousands-of-french-recipes-bcdb8e560571) |
nielsr/codet5-small-code-summarization-ruby | 522d18fcc9e7ecaf9283c3c83637ac423423d591 | 2021-11-07T17:37:12.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:code_x_glue_ct_code_to_text",
"transformers",
"codet5",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | nielsr | null | nielsr/codet5-small-code-summarization-ruby | 19 | 2 | transformers | 8,580 | ---
license: apache-2.0
tags:
- codet5
datasets:
- code_x_glue_ct_code_to_text
widget:
- text: 'def pad(tensor, paddings, mode: "CONSTANT", name: nil) _op(:pad, tensor, paddings, mode: mode, name: name) end </s>'
---
# Description
CodeT5-small model, fine-tuned on the code summarization subtask of CodeXGLUE (Ruby programming language). This model can generate a docstring of a given function written in Ruby.
# Notebook
The notebook that I used to fine-tune CodeT5 can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/T5/Fine_tune_CodeT5_for_generating_docstrings_from_Ruby_code.ipynb).
# Usage
Here's how to use this model:
```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration
model_name = "nielsr/codet5-small-code-summarization-ruby"
tokenizer = RobertaTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
code = """
def update_with_file_contents(digest, filename)
File.open(filename) do |io|
while (chunk = io.read(1024 * 8))
digest.update(chunk)
end
end
end
"""
input_ids = tokenizer(code, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Update the digest with the contents of the given file
``` |
osanseviero/full-sentence-distillroberta2 | 4551c4b36ec3f2057243b92abea3218feec23f4c | 2021-08-06T08:37:57.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | feature-extraction | false | osanseviero | null | osanseviero/full-sentence-distillroberta2 | 19 | null | sentence-transformers | 8,581 | ---
tags:
- sentence-transformers
- sentence-similarity
---
## Testing Sentence Transformer |
patrickvonplaten/wav2vec2-large-xlsr-turkish-demo | 0cc299a461e1a1972944829fc97788f88b25d18c | 2021-10-19T14:00:49.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-large-xlsr-turkish-demo | 19 | 0 | transformers | 8,582 | ## XLSR-Wav2Vec2 Fine-Tuned on Turkish Common Voice dataset
The model was fine-tuned in a Google Colab notebook for demonstration purposes.
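For a quick test, the model can be run through the `transformers` ASR pipeline; this is only a sketch, the audio path is a placeholder, and 16 kHz mono input is assumed.
```python
from transformers import pipeline
# Minimal sketch: transcribe Turkish speech with this demo checkpoint
asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/wav2vec2-large-xlsr-turkish-demo")
# Placeholder path to a 16 kHz mono Turkish recording
print(asr("turkish_sample_16khz.wav")["text"])
```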
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for more information about the model. |
persiannlp/mt5-small-parsinlu-qqp-query-paraphrasing | b21c620f16d3b1306e349fb8543ad09493f5d3d1 | 2021-09-23T16:20:38.000Z | [
"pytorch",
"t5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:qqp",
"transformers",
"query-paraphrasing",
"mt5",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-small-parsinlu-qqp-query-paraphrasing | 19 | null | transformers | 8,583 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- query-paraphrasing
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- qqp
metrics:
- accuracy
---
# Detection of Paraphrased Queries (تشخصیص سوالات هممعنی)
This is a model for detection of paraphrased queries.
Here is an example of how you can run this model:
```python
from transformers import MT5Config, MT5ForConditionalGeneration, MT5Tokenizer
model_name = "persiannlp/mt5-small-parsinlu-qqp-query-paraphrasing"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(q1, q2, **generator_args):
input_ids = tokenizer.encode(f"{q1}<sep>{q2}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("چه چیزی باعث پوکی استخوان می شود؟", "چه چیزی باعث مقاومت استخوان در برابر ضربه می شود؟")
run_model("من دارم به این فکر میکنم چرا ساعت هفت نمیشه؟", "چرا من ساده فکر میکردم به عشقت پابندی؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("شناسنامه در چه سالی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
run_model("سیب زمینی چه زمانی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
pierreguillou/byt5-small-qa-squad-v1.1-portuguese | 04363d2c3adfae5ec68828147b5826b17c13e3f1 | 2021-12-05T15:42:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"pt",
"dataset:squad",
"arxiv:1907.06292",
"arxiv:2105.13626",
"transformers",
"byt5",
"qa",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | pierreguillou | null | pierreguillou/byt5-small-qa-squad-v1.1-portuguese | 19 | 2 | transformers | 8,584 | ---
language: pt
license: apache-2.0
tags:
- text2text-generation
- byt5
- pytorch
- qa
datasets: squad
metrics: squad
widget:
- text: 'question: "Quando começou a pandemia de Covid-19 no mundo?" context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."'
- text: 'question: "Onde foi descoberta a Covid-19?" context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."'
---
# ByT5 small finetuned for Question Answering (QA) on SQUaD v1.1 Portuguese

Check our other QA models in Portuguese finetuned on SQUAD v1.1:
- [Portuguese BERT base cased QA](https://huggingface.co/pierreguillou/bert-base-cased-squad-v1.1-portuguese)
- [Portuguese BERT large cased QA](https://huggingface.co/pierreguillou/bert-large-cased-squad-v1.1-portuguese)
- [Portuguese T5 base QA](https://huggingface.co/pierreguillou/t5-base-qa-squad-v1.1-portuguese)
## Introduction
The model was trained on the dataset SQUAD v1.1 in portuguese from the [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/) on Google Colab from the language model [ByT5 small](https://huggingface.co/google/byt5-small) of Google.
## About ByT5
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small). ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.
ByT5 works especially well on noisy text data,*e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
## Informations on the method used
All the information is in the blog post: ...
## Notebooks in Google Colab & GitHub
- Google Colab: ...
- GitHub: ...
## Performance
The results obtained are the following:
```
f1 = ...
exact match = ...
```
## How to use the model... with Pipeline
```python
import transformers
from transformers import pipeline
model_name = 'pierreguillou/byt5-small-qa-squad-v1.1-portuguese'
nlp = pipeline("text2text-generation", model=model_name)
# source: https://pt.wikipedia.org/wiki/Pandemia_de_COVID-19
input_text = r"""
question: "Quando começou a pandemia de Covid-19 no mundo?"
context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."
"""
input_text = input_text.replace('\n','')
input_text
# question: "Quando começou a pandemia de Covid-19 no mundo?" context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."
result = nlp(input_text)
result
# [{'generated_text': '1 de dezembro de 2019'}]
```
## How to use the model... with the Auto classes
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = 'pierreguillou/byt5-small-qa-squad-v1.1-portuguese'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# source: https://pt.wikipedia.org/wiki/Pandemia_de_COVID-19
input_text = r"""
question: "Quando começou a pandemia de Covid-19 no mundo?"
context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."
"""
input_text = input_text.replace('\n','')
input_text
# question: "Quando começou a pandemia de Covid-19 no mundo?" context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."
input_ids = tokenizer(input_text, return_tensors='pt').input_ids
outputs = model.generate(
input_ids,
max_length=64,
num_beams=1
)
result = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
result
# 1 de dezembro de 2019
```
## Limitations and bias
The training data used for this model comes from the Portuguese SQuAD dataset. It could contain a lot of unfiltered content, which is far from neutral, as well as biases.
## Author
Portuguese ByT5 small QA (Question Answering), finetuned on SQUAD v1.1 was trained and evaluated by [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/) thanks to the Open Source code, platforms and advices of many organizations. In particular: [Google AI](https://huggingface.co/google), [Hugging Face](https://huggingface.co/), [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/) and [Google Colab](https://colab.research.google.com/).
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{pierreguillou2021byt5smallsquadv11portuguese,
title={Portuguese ByT5 small QA (Question Answering), finetuned on SQUAD v1.1},
author={Pierre Guillou},
year={2021}
}
``` |
princeton-nlp/densephrases-multi | e842d544599752023df27be816b5f4e6e8d1263e | 2021-09-20T15:27:15.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/densephrases-multi | 19 | null | transformers | 8,585 | Entry not found |
priyank/Generate_instructions_t5 | 312e714332f0c11fb802696b79a6c68d926a4548 | 2021-05-13T14:28:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | priyank | null | priyank/Generate_instructions_t5 | 19 | null | transformers | 8,586 |
```
import torch
from transformers import T5ForConditionalGeneration,T5Tokenizer
def set_seed(seed):
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed_all(seed)
set_seed(42)
model = T5ForConditionalGeneration.from_pretrained("priyank/Generate_instructions_t5")
tokenizer = T5Tokenizer.from_pretrained("priyank/Generate_instructions_t5")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
sentence = "ask user to provide his date of birth"
text = "paraphrase: " + sentence + " </s>"
max_len = 256
encoding = tokenizer.encode_plus(text,pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)
# sample with top_k = 120, top_p = 0.98 and num_return_sequences = 10
beam_outputs = model.generate(
input_ids=input_ids, attention_mask=attention_masks,
do_sample=True,
max_length=256,
top_k=120,
top_p=0.98,
early_stopping=True,
num_return_sequences=10
)
print("\nApprentice Query ::")
print(sentence)
print("\nAuto Generated Instruction ::")
final_outputs =[]
for beam_output in beam_outputs:
sent = tokenizer.decode(beam_output, skip_special_tokens=True,clean_up_tokenization_spaces=True)
if sent.lower() != sentence.lower() and sent not in final_outputs:
final_outputs.append(sent)
for i, final_output in enumerate(final_outputs):
print("{}: {}".format(i, final_output))
# Example output (for a different input query):
# Apprentice Query ::
# if balance is greater than $100, then tell the user he needs more balance
# Auto Generated Instruction ::
# 0: IF (assert(user.balance > $100)) THEN (say you need more balance)
```
Reference: https://github.com/ramsrigouthamg/Paraphrase-any-question-with-T5-Text-To-Text-Transfer-Transformer- |
pszemraj/GPT-Converse-1pt3B-Neo-WoW-DD-17 | ee4db57ced86ad96683fe8078171a9396e502e41 | 2022-01-19T01:22:11.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"dataset:natural questions",
"transformers",
"gpt2",
"gpt",
"license:mit"
] | text-generation | false | pszemraj | null | pszemraj/GPT-Converse-1pt3B-Neo-WoW-DD-17 | 19 | null | transformers | 8,587 | ---
language:
- en
tags:
- text-generation
- gpt2
- gpt
license: mit
datasets:
- natural questions
widget:
- text: "hi, how are you doing bruh?\nperson beta:\n\n"
example_title: "greeting"
- text: "Can you actually take me for dinner somewhere nice this time?\nperson beta:\n\n"
example_title: "dinner"
- text: "Honey, I have clogged the toilet for the third time this month.. sorry..\nperson beta:\n\n"
example_title: "overflow"
- text: "A man pushes his car to a hotel and tells the owner he’s bankrupt. Why?\nperson beta:\n\n"
example_title: "brain teaser"
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.7
no_repeat_ngram_size: 3
do_sample: True
top_p: 0.85
top_k: 10
repetition_penalty: 2.1
---
# GPT-Neo 1.3 B Conversational - 17 total epochs
- trained on the Wizard of Wikipedia parl.ai dataset + Daily Dialogues dataset
- 13 epochs on WoW and 4 on Daily Dialogues
- the aim is to use the model as a customizable chatbot with the person-ID labels as pseudo start/end-of-turn tokens: ending the prompt with `person beta:` makes it extremely likely that _person beta:_ responds, rather than the entered prompt simply being continued.
- a link to the project repo that details how to effectively use such a trained model is [here](https://github.com/pszemraj/ai-msgbot) |
r2d2/stsb-bertweet-base-v0 | 138d1944346ccb7ab2e2eae5d4d2827bce568a95 | 2022-02-18T14:53:45.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | r2d2 | null | r2d2/stsb-bertweet-base-v0 | 19 | null | sentence-transformers | 8,588 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# r2d2/stsb-bertweet-base-v0
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('r2d2/stsb-bertweet-base-v0')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('r2d2/stsb-bertweet-base-v0')
model = AutoModel.from_pretrained('r2d2/stsb-bertweet-base-v0')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=r2d2/stsb-bertweet-base-v0)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 144,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
rohanrajpal/bert-base-en-es-codemix-cased | 58341c89159c26603beab1ae726bf9528e6cc52c | 2021-05-19T00:26:38.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"es",
"en",
"dataset:SAIL 2017",
"transformers",
"codemix",
"license:apache-2.0"
] | text-classification | false | rohanrajpal | null | rohanrajpal/bert-base-en-es-codemix-cased | 19 | null | transformers | 8,589 | ---
language:
- es
- en
tags:
- es
- en
- codemix
license: "apache-2.0"
datasets:
- SAIL 2017
metrics:
- fscore
- accuracy
- precision
- recall
---
# BERT codemixed base model for Spanglish (cased)
This model was built using [lingualytics](https://github.com/lingualytics/py-lingualytics), an open-source library that supports code-mixed analytics.
## Model description
Input for the model: Any codemixed spanglish text
Output for the model: Sentiment. (0 - Negative, 1 - Neutral, 2 - Positive)
I took the bert-base-multilingual-cased model from Hugging Face and fine-tuned it on the [CS-EN-ES-CORPUS](http://www.grupolys.org/software/CS-CORPORA/cs-en-es-corpus-wassa2015.txt) dataset.
Performance of this model on the dataset
| metric | score |
|------------|----------|
| acc | 0.718615 |
| f1 | 0.71759 |
| acc_and_f1 | 0.718103 |
| precision | 0.719302 |
| recall | 0.718615 |
## Intended uses & limitations
Make sure to preprocess your data using [these methods](https://github.com/microsoft/GLUECoS/blob/master/Data/Preprocess_Scripts/preprocess_sent_en_es.py) before using this model.
#### How to use
Here is how to use this model to classify the sentiment of a given text in *PyTorch*:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('rohanrajpal/bert-base-en-es-codemix-cased')
model = AutoModelForSequenceClassification.from_pretrained('rohanrajpal/bert-base-en-es-codemix-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in *TensorFlow*:
```python
from transformers import BertTokenizer, TFBertForSequenceClassification
tokenizer = BertTokenizer.from_pretrained('rohanrajpal/bert-base-en-es-codemix-cased')
model = TFBertForSequenceClassification.from_pretrained('rohanrajpal/bert-base-en-es-codemix-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
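The model returns raw logits over the three sentiment classes. Assuming the 0/1/2 label order stated above (taken from the description, not read from the model config), here is a small follow-up for the PyTorch output:
```python
import torch
# Assumed mapping from the description above: 0 - Negative, 1 - Neutral, 2 - Positive
labels = ["negative", "neutral", "positive"]
predicted_class = torch.argmax(output.logits, dim=-1).item()
print(labels[predicted_class])
```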
#### Limitations and bias
Since I don't know Spanish, I can't verify the quality of the annotations or the dataset itself. This is a very simple transfer learning approach, and I'm open to discussions to improve upon this.
## Training data
I fine-tuned the [bert-base-multilingual-cased model](https://huggingface.co/bert-base-multilingual-cased) on the CS-EN-ES-CORPUS dataset described above.
## Training procedure
I followed the preprocessing steps described [here](https://github.com/microsoft/GLUECoS/blob/master/Data/Preprocess_Scripts/preprocess_sent_en_es.py).
## Eval results
### BibTeX entry and citation info
```bibtex
@inproceedings{khanuja-etal-2020-gluecos,
title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}",
author = "Khanuja, Simran and
Dandapat, Sandipan and
Srinivasan, Anirudh and
Sitaram, Sunayana and
Choudhury, Monojit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.329",
pages = "3575--3585"
}
```
|
samirt8/wav2vec2-xls-r-1b-fr | 742321f4cd1e2d07acaa56332f67088d61ab967c | 2022-03-23T14:16:05.000Z | [
"pytorch"
] | null | false | samirt8 | null | samirt8/wav2vec2-xls-r-1b-fr | 19 | 1 | null | 8,590 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- fr
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-1B - French
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: fr
metrics:
- name: Test WER (without LM)
type: wer
value: 15.405483405483405
- name: Test CER (without LM)
type: cer
value: 4.877303022528913
- name: Test WER (with LM)
type: wer
value: 12.5
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Test WER
type: wer
value: 24.45
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: fr
metrics:
- name: Test WER
type: wer
value: 25.96
---
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.
It achieves the following results on the evaluation set:
**Without LM**:
- Wer: 0.154
**With LM**:
- Wer: 0.125
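As a quick, hedged usage sketch (assuming the checkpoint exposes a standard Wav2Vec2 CTC head; the audio file name is a placeholder and should point to 16 kHz mono audio):
```python
from transformers import pipeline
# Hypothetical French audio clip; replace with a real 16 kHz mono file
asr = pipeline("automatic-speech-recognition", model="samirt8/wav2vec2-xls-r-1b-fr")
print(asr("example_fr.wav"))
```
|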
savasy/mt5-mlsum-turkish-summarization | a32cbb40a2e6d4d923e7c0a54ab4050141fd872b | 2022-01-07T08:53:23.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | savasy | null | savasy/mt5-mlsum-turkish-summarization | 19 | 1 | transformers | 8,591 | This checkpoint was trained on the Turkish portion of the [MLSUM dataset](https://huggingface.co/datasets/mlsum), using google/mt5 as the pre-trained starting checkpoint. The [SimpleT5](https://github.com/Shivanandroy/simpleT5) library was used for training.
Here is the code snippet used for training:
```python
from simplet5 import SimpleT5
model = SimpleT5()
model.from_pretrained("mt5", "google/mt5-small")
model.train(
    train_df=train2,        # pandas DataFrame with 2 columns: source_text & target_text
    eval_df=validation2,    # pandas DataFrame with 2 columns: source_text & target_text
    source_max_token_len=512,
    target_max_token_len=128,
    batch_size=8,
    max_epochs=5,
    use_gpu=True,
    outputdir="mt5_mlsum_turkish",
    early_stopping_patience_epochs=0,
    precision=32,
)
```
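For inference, a minimal, hedged sketch using the plain `transformers` library (the article text is a placeholder and the generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_id = "savasy/mt5-mlsum-turkish-summarization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
article = "..."  # a Turkish news article (placeholder)
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```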
|
sdadas/polish-bart-base | 0710ce4e41f96e6f7897ecb2e51a9d947f86ef98 | 2022-02-19T10:34:05.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"license:lgpl-3.0",
"autotrain_compatible"
] | text2text-generation | false | sdadas | null | sdadas/polish-bart-base | 19 | null | transformers | 8,592 | ---
license: lgpl-3.0
---
|
seduerr/mt5-paraphrases-espanol | e6abf1971cdc792488a42bde65f186681f0331de | 2021-06-23T16:37:25.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/mt5-paraphrases-espanol | 19 | null | transformers | 8,593 | Entry not found |
shtoshni/spanbert_coreference_base | 99402ad31ea95a6a33641c2db0e8b164c53e890b | 2020-11-08T02:11:42.000Z | [
"pytorch",
"transformers"
] | null | false | shtoshni | null | shtoshni/spanbert_coreference_base | 19 | null | transformers | 8,594 | Entry not found |
sismetanin/xlm_roberta_base-ru-sentiment-rureviews | 4da6a56b98bd2af912d7e23e857c27d20040eac7 | 2021-02-25T23:51:22.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"ru",
"transformers",
"sentiment analysis",
"Russian"
] | text-classification | false | sismetanin | null | sismetanin/xlm_roberta_base-ru-sentiment-rureviews | 19 | null | transformers | 8,595 | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## XLM-RoBERTa-Base-ru-sentiment-RuReviews
XLM-RoBERTa-Base-ru-sentiment-RuReviews is an [XLM-RoBERTa-Base](https://huggingface.co/xlm-roberta-base) model fine-tuned on the [RuReviews dataset](https://github.com/sismetanin/rureviews) of Russian-language reviews from the "Women's Clothes and Accessories" product category on the primary e-commerce site in Russia.
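A minimal, hedged usage sketch (the example review is made up, and the class order is not documented on this card, so the predicted id should be interpreted via `model.config.id2label`):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_id = "sismetanin/xlm_roberta_base-ru-sentiment-rureviews"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
review = "Отличное качество, платье село идеально!"  # "Great quality, the dress fits perfectly!"
inputs = tokenizer(review, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(torch.argmax(logits, dim=-1))
print(predicted_id, model.config.id2label.get(predicted_id, "see model config"))
```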
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>wighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models' results was applied in the GLUE benchmark.
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@INPROCEEDINGS{Smetanin2019Sentiment,
author={Sergey Smetanin and Michail Komarov},
booktitle={2019 IEEE 21st Conference on Business Informatics (CBI)},
title={Sentiment Analysis of Product Reviews in Russian using Convolutional Neural Networks},
year={2019},
volume={01},
pages={482-486},
doi={10.1109/CBI.2019.00062},
ISSN={2378-1963},
month={July}
}
``` |
speech-seq2seq/wav2vec2-2-gpt2-medium-no-adapter-frozen-enc | d2b273a6f537540b5bfa13ab1b9c1b3b39b3bb68 | 2022-02-17T03:04:18.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | speech-seq2seq | null | speech-seq2seq/wav2vec2-2-gpt2-medium-no-adapter-frozen-enc | 19 | null | transformers | 8,596 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5541
- Wer: 1.9877
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
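For illustration only, a hedged sketch of how these values might map onto `Seq2SeqTrainingArguments` (the actual training script and output directory are not documented here):
```python
from transformers import Seq2SeqTrainingArguments
# Hypothetical reconstruction of the hyperparameters listed above
training_args = Seq2SeqTrainingArguments(
    output_dir="./wav2vec2-2-gpt2-medium",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,          # effective train batch size of 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3.0,
    seed=42,
    fp16=True,                              # native AMP mixed precision
)
```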
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.9364 | 0.28 | 500 | 6.3613 | 1.9833 |
| 1.941 | 0.56 | 1000 | 5.6974 | 1.9746 |
| 2.3312 | 0.84 | 1500 | 5.6979 | 1.7345 |
| 2.8004 | 1.12 | 2000 | 6.0436 | 1.6787 |
| 3.0003 | 1.4 | 2500 | 6.0955 | 1.7625 |
| 2.9677 | 1.68 | 3000 | 6.2841 | 1.6731 |
| 2.2759 | 1.96 | 3500 | 6.3094 | 1.7494 |
| 2.2989 | 2.24 | 4000 | 6.9891 | 1.9115 |
| 1.8814 | 2.52 | 4500 | 6.9818 | 1.9832 |
| 2.658 | 2.8 | 5000 | 6.5541 | 1.9877 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
superb/hubert-large-superb-ic | 8da7cdb18a459d147eee99b98f8840c4af619846 | 2021-09-04T20:48:25.000Z | [
"pytorch",
"hubert",
"audio-classification",
"en",
"dataset:superb",
"arxiv:2105.01051",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | audio-classification | false | superb | null | superb/hubert-large-superb-ic | 19 | null | transformers | 8,597 | ---
language: en
datasets:
- superb
tags:
- speech
- audio
- hubert
license: apache-2.0
---
# Hubert-Large for Intent Classification
## Model description
This is a ported version of [S3PRL's Hubert for the SUPERB Intent Classification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/fluent_commands).
The base model is [hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k), which is pretrained on 16kHz
sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Intent Classification (IC) classifies utterances into predefined classes to determine the intent of
speakers. SUPERB uses the
[Fluent Speech Commands](https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/)
dataset, where each utterance is tagged with three intent labels: **action**, **object**, and **location**.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands).
## Usage examples
You can use the model directly like so:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor
def map_to_array(example):
speech, _ = librosa.load(example["file"], sr=16000, mono=True)
example["speech"] = speech
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "ic", split="test")
dataset = dataset.map(map_to_array)
model = HubertForSequenceClassification.from_pretrained("superb/hubert-large-superb-ic")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-large-superb-ic")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
action_ids = torch.argmax(logits[:, :6], dim=-1).tolist()
action_labels = [model.config.id2label[_id] for _id in action_ids]
object_ids = torch.argmax(logits[:, 6:20], dim=-1).tolist()
object_labels = [model.config.id2label[_id + 6] for _id in object_ids]
location_ids = torch.argmax(logits[:, 20:24], dim=-1).tolist()
location_labels = [model.config.id2label[_id + 20] for _id in location_ids]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.9876` | `N/A` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` |
superman/testingmodel | 907d032474939d8d6cee939ed5524cbc89df2495 | 2021-09-28T20:21:40.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | superman | null | superman/testingmodel | 19 | null | transformers | 8,598 | just to test |
symanto/mpnet-base-snli-mnli | f35aedb2691bc05b3b48a170a0f2bad910f638dd | 2021-09-30T12:29:12.000Z | [
"pytorch",
"mpnet",
"text-classification",
"en",
"dataset:SNLI",
"dataset:MNLI",
"transformers",
"zero-shot-classification"
] | text-classification | false | symanto | null | symanto/mpnet-base-snli-mnli | 19 | 2 | transformers | 8,599 | ---
language:
- en
datasets:
- SNLI
- MNLI
tags:
- zero-shot-classification
---
A cross-attention NLI model trained for zero-shot and few-shot text classification.
The base model is [mpnet-base](https://huggingface.co/microsoft/mpnet-base), trained with the code from [here](https://github.com/facebookresearch/anli) on [SNLI](https://nlp.stanford.edu/projects/snli/) and [MNLI](https://cims.nyu.edu/~sbowman/multinli/).
Usage:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
import numpy as np
model = AutoModelForSequenceClassification.from_pretrained("symanto/mpnet-base-snli-mnli")
tokenizer = AutoTokenizer.from_pretrained("symanto/mpnet-base-snli-mnli")
input_pairs = [("I like this pizza.", "The sentence is positive."), ("I like this pizza.", "The sentence is negative.")]
inputs = tokenizer(["</s></s>".join(input_pair) for input_pair in input_pairs], return_tensors="pt")
logits = model(**inputs).logits
probs = torch.softmax(logits, dim=1).tolist()
print("probs", probs)
np.testing.assert_almost_equal(probs, [[0.86, 0.14, 0.00], [0.16, 0.15, 0.69]], decimal=2)
```
|