modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Harveenchadha/vakyansh-wav2vec2-rajasthani-raj-45 | fc46dea3821e439ca109f613e771344c5820b3a7 | 2021-12-17T17:58:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Harveenchadha | null | Harveenchadha/vakyansh-wav2vec2-rajasthani-raj-45 | 2 | null | transformers | 23,100 | Entry not found |
Harveenchadha/vakyansh-wav2vec2-telugu-tem-100 | f0b4778462800eaa70163bfee6bf97710cc28f27 | 2021-08-02T19:00:30.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Harveenchadha | null | Harveenchadha/vakyansh-wav2vec2-telugu-tem-100 | 2 | null | transformers | 23,101 | Entry not found |
Heldhy/testingAgain | 6f277ff120f2c32823a7a82e5c56e1cc628e4e79 | 2022-01-10T13:05:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Heldhy | null | Heldhy/testingAgain | 2 | null | transformers | 23,102 | ---
tags:
- conversational
---
# My Awesome Model |
Heldhy/wav2vec2-base-timit-demo-colab | e87cccf584fe123d621686472f288fb2b914642a | 2022-01-10T14:36:58.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Heldhy | null | Heldhy/wav2vec2-base-timit-demo-colab | 2 | null | transformers | 23,103 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4568
- Wer: 0.3422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3896 | 4.0 | 500 | 1.1573 | 0.8886 |
| 0.5667 | 8.0 | 1000 | 0.4841 | 0.4470 |
| 0.2126 | 12.0 | 1500 | 0.4201 | 0.3852 |
| 0.1235 | 16.0 | 2000 | 0.4381 | 0.3623 |
| 0.0909 | 20.0 | 2500 | 0.4784 | 0.3748 |
| 0.0611 | 24.0 | 3000 | 0.4390 | 0.3577 |
| 0.0454 | 28.0 | 3500 | 0.4568 | 0.3422 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
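The card stops at the framework versions, so here is a minimal inference sketch (not from the original card). The repo id is this entry's modelId; the 16 kHz audio file path is an assumption for illustration.
```python
# Minimal ASR sketch for this fine-tuned wav2vec2 checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="Heldhy/wav2vec2-base-timit-demo-colab")
print(asr("speech.wav")["text"])  # speech.wav: a local 16 kHz mono recording (assumed)
```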
|
Helsinki-NLP/opus-mt-bcl-fr | 3802edebeca6853fe87f6f0f6aa77437cd5c3846 | 2021-09-09T21:26:56.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bcl",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bcl-fr | 2 | null | transformers | 23,104 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bcl-fr
* source languages: bcl
* target languages: fr
* OPUS readme: [bcl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.fr | 35.0 | 0.527 |
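The card gives no usage snippet; a minimal translation sketch follows (not from the original card; the sample sentence is only an illustrative Bikol Central input). The same pattern applies to the other bilingual opus-mt pairs below.
```python
# Minimal MarianMT sketch for this checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-bcl-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Magayon an aldaw ngonyan."], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```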
|
Helsinki-NLP/opus-mt-bg-uk | 4615b57ec32e8f73c9a69d19c512f7d260ff7b91 | 2021-01-18T07:51:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bg",
"uk",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bg-uk | 2 | null | transformers | 23,105 | ---
language:
- bg
- uk
tags:
- translation
license: apache-2.0
---
### bul-ukr
* source group: Bulgarian
* target group: Ukrainian
* OPUS readme: [bul-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ukr/README.md)
* model: transformer-align
* source language(s): bul
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.ukr | 49.2 | 0.683 |
### System Info:
- hf_name: bul-ukr
- source_languages: bul
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'uk']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.test.txt
- src_alpha3: bul
- tgt_alpha3: ukr
- short_pair: bg-uk
- chrF2_score: 0.6829999999999999
- bleu: 49.2
- brevity_penalty: 0.983
- ref_len: 4932.0
- src_name: Bulgarian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: bg
- tgt_alpha2: uk
- prefer_old: False
- long_pair: bul-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-cs-eo | 1e23bed4e0074c567e4508059f4b8034e0319105 | 2021-01-18T07:55:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"cs",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-cs-eo | 2 | null | transformers | 23,106 | ---
language:
- cs
- eo
tags:
- translation
license: apache-2.0
---
### ces-epo
* source group: Czech
* target group: Esperanto
* OPUS readme: [ces-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ces-epo/README.md)
* model: transformer-align
* source language(s): ces
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ces.epo | 26.0 | 0.459 |
### System Info:
- hf_name: ces-epo
- source_languages: ces
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ces-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['cs', 'eo']
- src_constituents: {'ces'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.test.txt
- src_alpha3: ces
- tgt_alpha3: epo
- short_pair: cs-eo
- chrF2_score: 0.45899999999999996
- bleu: 26.0
- brevity_penalty: 0.94
- ref_len: 24901.0
- src_name: Czech
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: cs
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ces-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-de-lt | e0105109d696baf37e2a4cca511a46f59fa97707 | 2021-09-09T21:32:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"lt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-lt | 2 | null | transformers | 23,107 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-lt
* source languages: de
* target languages: lt
* OPUS readme: [de-lt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-lt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-lt/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-lt/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-lt/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.lt | 37.9 | 0.633 |
|
Helsinki-NLP/opus-mt-de-ny | 595549133dfde470a3ea04e93674ff1c90c5ac5a | 2021-09-09T21:32:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"ny",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-ny | 2 | null | transformers | 23,108 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ny
* source languages: de
* target languages: ny
* OPUS readme: [de-ny](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ny/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ny/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ny/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ny/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.ny | 21.4 | 0.481 |
|
Helsinki-NLP/opus-mt-en-pqw | e63c061ce57192d261cc19a46c0fe0c2678eb790 | 2021-01-18T08:14:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"pqw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-pqw | 2 | null | transformers | 23,109 | ---
language:
- en
- pqw
tags:
- translation
license: apache-2.0
---
### eng-pqw
* source group: English
* target group: Western Malayo-Polynesian languages
* OPUS readme: [eng-pqw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-pqw/README.md)
* model: transformer
* source language(s): eng
* target language(s): akl_Latn ceb cha dtp hil iba ilo ind jav jav_Java mad max_Latn min mlg pag pau sun tmw_Latn war zlm_Latn zsm_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID); see the sketch after this list
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.eval.txt)
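A minimal sketch of the language-token mechanism described above (not from the original card); `ind` and `ceb` are taken from the target language list.
```python
# The target language is chosen by prefixing the source text with >>id<<.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-pqw"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = [">>ind<< How are you today?",  # Indonesian
       ">>ceb<< How are you today?"]  # Cebuano
batch = tokenizer(src, return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```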
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-akl.eng.akl | 3.0 | 0.143 |
| Tatoeba-test.eng-ceb.eng.ceb | 11.4 | 0.432 |
| Tatoeba-test.eng-cha.eng.cha | 1.4 | 0.189 |
| Tatoeba-test.eng-dtp.eng.dtp | 0.6 | 0.139 |
| Tatoeba-test.eng-hil.eng.hil | 17.7 | 0.525 |
| Tatoeba-test.eng-iba.eng.iba | 14.6 | 0.365 |
| Tatoeba-test.eng-ilo.eng.ilo | 34.0 | 0.590 |
| Tatoeba-test.eng-jav.eng.jav | 6.2 | 0.299 |
| Tatoeba-test.eng-mad.eng.mad | 2.6 | 0.154 |
| Tatoeba-test.eng-mlg.eng.mlg | 34.3 | 0.518 |
| Tatoeba-test.eng-msa.eng.msa | 31.1 | 0.561 |
| Tatoeba-test.eng.multi | 17.5 | 0.422 |
| Tatoeba-test.eng-pag.eng.pag | 19.8 | 0.507 |
| Tatoeba-test.eng-pau.eng.pau | 1.2 | 0.129 |
| Tatoeba-test.eng-sun.eng.sun | 30.3 | 0.418 |
| Tatoeba-test.eng-war.eng.war | 12.6 | 0.439 |
### System Info:
- hf_name: eng-pqw
- source_languages: eng
- target_languages: pqw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-pqw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'pqw']
- src_constituents: {'eng'}
- tgt_constituents: set()
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: pqw
- short_pair: en-pqw
- chrF2_score: 0.42200000000000004
- bleu: 17.5
- brevity_penalty: 1.0
- ref_len: 66758.0
- src_name: English
- tgt_name: Western Malayo-Polynesian languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: pqw
- prefer_old: False
- long_pair: eng-pqw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-fr-mfe | 219ad5d7811a8ebdebc130810a0cffbeb307c172 | 2021-09-09T21:55:28.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"mfe",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-mfe | 2 | null | transformers | 23,110 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-mfe
* source languages: fr
* target languages: mfe
* OPUS readme: [fr-mfe](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mfe/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.mfe | 26.1 | 0.451 |
|
Helsinki-NLP/opus-mt-guw-sv | c4c633c6753fa182a42f1751259e3be57fc320f4 | 2021-09-09T21:59:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"guw",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-guw-sv | 2 | null | transformers | 23,111 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-guw-sv
* source languages: guw
* target languages: sv
* OPUS readme: [guw-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.sv | 31.2 | 0.498 |
|
Helsinki-NLP/opus-mt-it-lt | 26a8c917ebd56b458913eab87144f7e1099b44c5 | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"it",
"lt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-it-lt | 2 | null | transformers | 23,112 | ---
language:
- it
- lt
tags:
- translation
license: apache-2.0
---
### ita-lit
* source group: Italian
* target group: Lithuanian
* OPUS readme: [ita-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-lit/README.md)
* model: transformer-align
* source language(s): ita
* target language(s): lit
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-lit/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-lit/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-lit/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.lit | 38.1 | 0.652 |
### System Info:
- hf_name: ita-lit
- source_languages: ita
- target_languages: lit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-lit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'lt']
- src_constituents: {'ita'}
- tgt_constituents: {'lit'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-lit/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-lit/opus-2020-06-17.test.txt
- src_alpha3: ita
- tgt_alpha3: lit
- short_pair: it-lt
- chrF2_score: 0.652
- bleu: 38.1
- brevity_penalty: 0.9590000000000001
- ref_len: 1321.0
- src_name: Italian
- tgt_name: Lithuanian
- train_date: 2020-06-17
- src_alpha2: it
- tgt_alpha2: lt
- prefer_old: False
- long_pair: ita-lit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ja-he | 2aa51fc3e068d90e5a719ae93aed18da46122e54 | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ja",
"he",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ja-he | 2 | null | transformers | 23,113 | ---
language:
- ja
- he
tags:
- translation
license: apache-2.0
---
### jpn-heb
* source group: Japanese
* target group: Hebrew
* OPUS readme: [jpn-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-heb/README.md)
* model: transformer-align
* source language(s): jpn_Hani jpn_Hira jpn_Kana
* target language(s): heb
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-heb/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-heb/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-heb/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.heb | 20.2 | 0.397 |
### System Info:
- hf_name: jpn-heb
- source_languages: jpn
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'he']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-heb/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-heb/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: heb
- short_pair: ja-he
- chrF2_score: 0.397
- bleu: 20.2
- brevity_penalty: 1.0
- ref_len: 1598.0
- src_name: Japanese
- tgt_name: Hebrew
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: he
- prefer_old: False
- long_pair: jpn-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-nl-ca | 33ee8c40483bc1b75318a7a1eab0d4f88ddc0f4b | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"nl",
"ca",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-nl-ca | 2 | null | transformers | 23,114 | ---
language:
- nl
- ca
tags:
- translation
license: apache-2.0
---
### nld-cat
* source group: Dutch
* target group: Catalan
* OPUS readme: [nld-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-cat/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.cat | 42.1 | 0.624 |
### System Info:
- hf_name: nld-cat
- source_languages: nld
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'ca']
- src_constituents: {'nld'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.test.txt
- src_alpha3: nld
- tgt_alpha3: cat
- short_pair: nl-ca
- chrF2_score: 0.624
- bleu: 42.1
- brevity_penalty: 0.988
- ref_len: 3942.0
- src_name: Dutch
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: nl
- tgt_alpha2: ca
- prefer_old: False
- long_pair: nld-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-rn-fr | b64d323d3036d4191e74b90b0683ce2c67e96dde | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"rn",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-rn-fr | 2 | null | transformers | 23,115 | ---
language:
- rn
- fr
tags:
- translation
license: apache-2.0
---
### run-fra
* source group: Rundi
* target group: French
* OPUS readme: [run-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-fra/README.md)
* model: transformer-align
* source language(s): run
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.fra | 18.2 | 0.397 |
### System Info:
- hf_name: run-fra
- source_languages: run
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'fr']
- src_constituents: {'run'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: fra
- short_pair: rn-fr
- chrF2_score: 0.397
- bleu: 18.2
- brevity_penalty: 1.0
- ref_len: 7496.0
- src_name: Rundi
- tgt_name: French
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: fr
- prefer_old: False
- long_pair: run-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-run-es | 1915cb3c1f53b0fae04befffc6ea8b5b6c544622 | 2021-09-10T14:02:39.000Z | [
"pytorch",
"marian",
"text2text-generation",
"run",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-run-es | 2 | null | transformers | 23,116 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-run-es
* source languages: run
* target languages: es
* OPUS readme: [run-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/run-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/run-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.run.es | 26.9 | 0.452 |
|
Helsinki-NLP/opus-mt-sl-uk | de36b5886fd286ae9a56d4536c446d7bb73000e0 | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sl",
"uk",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sl-uk | 2 | null | transformers | 23,117 | ---
language:
- sl
- uk
tags:
- translation
license: apache-2.0
---
### slv-ukr
* source group: Slovenian
* target group: Ukrainian
* OPUS readme: [slv-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/slv-ukr/README.md)
* model: transformer-align
* source language(s): slv
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.slv.ukr | 10.6 | 0.236 |
### System Info:
- hf_name: slv-ukr
- source_languages: slv
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/slv-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sl', 'uk']
- src_constituents: {'slv'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.test.txt
- src_alpha3: slv
- tgt_alpha3: ukr
- short_pair: sl-uk
- chrF2_score: 0.23600000000000002
- bleu: 10.6
- brevity_penalty: 1.0
- ref_len: 3906.0
- src_name: Slovenian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: sl
- tgt_alpha2: uk
- prefer_old: False
- long_pair: slv-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-sv-eo | b6d5b9fdcaee1dd54570120e8f724faafa22aca6 | 2020-08-21T14:42:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-eo | 2 | null | transformers | 23,118 | ---
language:
- sv
- eo
tags:
- translation
license: apache-2.0
---
### swe-epo
* source group: Swedish
* target group: Esperanto
* OPUS readme: [swe-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/swe-epo/README.md)
* model: transformer-align
* source language(s): swe
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/swe-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/swe-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/swe-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.swe.epo | 29.7 | 0.498 |
### System Info:
- hf_name: swe-epo
- source_languages: swe
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/swe-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sv', 'eo']
- src_constituents: {'swe'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/swe-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/swe-epo/opus-2020-06-16.test.txt
- src_alpha3: swe
- tgt_alpha3: epo
- short_pair: sv-eo
- chrF2_score: 0.498
- bleu: 29.7
- brevity_penalty: 0.958
- ref_len: 10987.0
- src_name: Swedish
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: sv
- tgt_alpha2: eo
- prefer_old: False
- long_pair: swe-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-sv-he | 804fe4f67cbb373619e4a9a053041e690dda272a | 2021-09-10T14:06:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"he",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-he | 2 | null | transformers | 23,119 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-he
* source languages: sv
* target languages: he
* OPUS readme: [sv-he](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-he/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-he/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-he/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-he/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.he | 23.1 | 0.440 |
|
Helsinki-NLP/opus-mt-sv-lu | 6ba0b38cc9116e3d5329b0210438bef031d6762b | 2021-09-10T14:07:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"lu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-lu | 2 | null | transformers | 23,120 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-lu
* source languages: sv
* target languages: lu
* OPUS readme: [sv-lu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-lu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-lu/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-lu/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-lu/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.lu | 24.8 | 0.484 |
|
Helsinki-NLP/opus-mt-sv-mfe | 16986d34a7dbb789e3906d2b65c9891354c39d36 | 2021-09-10T14:08:10.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"mfe",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-mfe | 2 | null | transformers | 23,121 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-mfe
* source languages: sv
* target languages: mfe
* OPUS readme: [sv-mfe](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-mfe/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-mfe/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-mfe/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-mfe/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.mfe | 24.3 | 0.445 |
|
Helsinki-NLP/opus-mt-sv-run | a5706fa6ebb50fe7f6129e47c30031193841d861 | 2021-09-10T14:09:05.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"run",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-run | 2 | null | transformers | 23,122 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-run
* source languages: sv
* target languages: run
* OPUS readme: [sv-run](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-run/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-run/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-run/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-run/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.run | 24.4 | 0.502 |
|
Helsinki-NLP/opus-mt-sv-tn | 9f2fc3a817597f6e20dc48dca76a4d07e22e3e7f | 2021-09-10T14:10:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"tn",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-tn | 2 | null | transformers | 23,123 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-tn
* source languages: sv
* target languages: tn
* OPUS readme: [sv-tn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-tn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-tn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.tn | 36.3 | 0.561 |
|
Helsinki-NLP/opus-mt-tr-eo | daf22c4ed4a156351412d919b9b9e163c286013d | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tr",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tr-eo | 2 | null | transformers | 23,124 | ---
language:
- tr
- eo
tags:
- translation
license: apache-2.0
---
### tur-epo
* source group: Turkish
* target group: Esperanto
* OPUS readme: [tur-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-epo/README.md)
* model: transformer-align
* source language(s): tur
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tur.epo | 17.0 | 0.373 |
### System Info:
- hf_name: tur-epo
- source_languages: tur
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tr', 'eo']
- src_constituents: {'tur'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.test.txt
- src_alpha3: tur
- tgt_alpha3: epo
- short_pair: tr-eo
- chrF2_score: 0.373
- bleu: 17.0
- brevity_penalty: 0.8809999999999999
- ref_len: 33762.0
- src_name: Turkish
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: tr
- tgt_alpha2: eo
- prefer_old: False
- long_pair: tur-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-zlw-fiu | 420cb4a9da2a9c806ede890080697855915b94a7 | 2021-06-29T12:40:19.000Z | [
"pytorch",
"marian",
"text2text-generation",
"dsb",
"cs",
"csb_Latn",
"hsb",
"pl",
"zlw",
"hu",
"vro",
"fi",
"liv_Latn",
"mdf",
"krl",
"fkv_Latn",
"mhr",
"et",
"sma",
"udm",
"vep",
"myv",
"kpv",
"se",
"izh",
"fiu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zlw-fiu | 2 | null | transformers | 23,125 | ---
language:
- dsb
- cs
- csb_Latn
- hsb
- pl
- zlw
- hu
- vro
- fi
- liv_Latn
- mdf
- krl
- fkv_Latn
- mhr
- et
- sma
- udm
- vep
- myv
- kpv
- se
- izh
- fiu
tags:
- translation
license: apache-2.0
---
### zlw-fiu
* source language name: West Slavic languages
* target language name: Finno-Ugrian languages
* OPUS readme: [README.md](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/README.md)
* model: transformer
* source language codes: dsb, cs, csb_Latn, hsb, pl, zlw
* target language codes: hu, vro, fi, liv_Latn, mdf, krl, fkv_Latn, mhr, et, sma, udm, vep, myv, kpv, se, izh, fiu
* dataset: opus
* release date: 2021-02-18
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2021-02-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.zip)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid, usually three-letter, target language ID); see the sketch after the score tables below
* Training data:
* ces-fin: Tatoeba-train (1000000)
* ces-hun: Tatoeba-train (1000000)
* pol-est: Tatoeba-train (1000000)
* pol-fin: Tatoeba-train (1000000)
* pol-hun: Tatoeba-train (1000000)
* Validation data:
* ces-fin: Tatoeba-dev, 1000
* ces-hun: Tatoeba-dev, 1000
* est-pol: Tatoeba-dev, 1000
* fin-pol: Tatoeba-dev, 1000
* hun-pol: Tatoeba-dev, 1000
* mhr-pol: Tatoeba-dev, 461
* total-size-shuffled: 5426
* devset-selected: top 5000 lines of Tatoeba-dev.src.shuffled!
* Test data:
* newssyscomb2009.ces-hun: 502/9733
* newstest2009.ces-hun: 2525/54965
* Tatoeba-test.ces-fin: 88/408
* Tatoeba-test.ces-hun: 1911/10336
* Tatoeba-test.multi-multi: 4562/25497
* Tatoeba-test.pol-chm: 5/36
* Tatoeba-test.pol-est: 15/98
* Tatoeba-test.pol-fin: 609/3293
* Tatoeba-test.pol-hun: 1934/11285
* test set translations file: [test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.test.txt)
* test set scores file: [eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.eval.txt)
* BLEU-scores
|Test set|score|
|---|---|
|Tatoeba-test.ces-fin|57.2|
|Tatoeba-test.ces-hun|42.6|
|Tatoeba-test.multi-multi|39.4|
|Tatoeba-test.pol-hun|36.6|
|Tatoeba-test.pol-fin|36.1|
|Tatoeba-test.pol-est|20.9|
|newssyscomb2009.ces-hun|13.9|
|newstest2009.ces-hun|13.9|
|Tatoeba-test.pol-chm|2.0|
* chr-F-scores
|Test set|score|
|---|---|
|Tatoeba-test.ces-fin|0.71|
|Tatoeba-test.ces-hun|0.637|
|Tatoeba-test.multi-multi|0.616|
|Tatoeba-test.pol-hun|0.605|
|Tatoeba-test.pol-fin|0.592|
|newssyscomb2009.ces-hun|0.449|
|newstest2009.ces-hun|0.443|
|Tatoeba-test.pol-est|0.372|
|Tatoeba-test.pol-chm|0.007|
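A minimal sketch of the target-token usage for this multilingual pair (not from the original card); `fin` and `hun` match the Finnish and Hungarian constituents listed under System Info.
```python
# Czech source; Finnish / Hungarian targets selected via the >>id<< prefix.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zlw-fiu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = [">>fin<< Dobrý den!",  # Czech -> Finnish
       ">>hun<< Dobrý den!"]  # Czech -> Hungarian
batch = tokenizer(src, return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```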
### System Info:
* hf_name: zlw-fiu
* source_languages: dsb,cs,csb_Latn,hsb,pl,zlw
* target_languages: hu,vro,fi,liv_Latn,mdf,krl,fkv_Latn,mhr,et,sma,udm,vep,myv,kpv,se,izh,fiu
* opus_readme_url: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.zip/README.md
* original_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['dsb', 'cs', 'csb_Latn', 'hsb', 'pl', 'zlw', 'hu', 'vro', 'fi', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'et', 'sma', 'udm', 'vep', 'myv', 'kpv', 'se', 'izh', 'fiu']
* src_constituents: ['dsb', 'ces', 'csb_Latn', 'hsb', 'pol']
* tgt_constituents: ['hun', 'vro', 'fin', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'est', 'sma', 'udm', 'vep', 'myv', 'kpv', 'sme', 'izh']
* src_multilingual: True
* tgt_multilingual: True
* helsinki_git_sha: a0966db6db0ae616a28471ff0faf461b36fec07d
* transformers_git_sha: 3857f2b4e34912c942694489c2b667d9476e55f5
* port_machine: bungle
* port_time: 2021-06-29-15:24 |
Helsinki-NLP/opus-mt-zne-fi | 3d5c68815b2ff67b9681355bdf8f5c318cb863b2 | 2021-09-11T10:53:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"zne",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zne-fi | 2 | null | transformers | 23,126 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-zne-fi
* source languages: zne
* target languages: fi
* OPUS readme: [zne-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zne-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zne-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zne.fi | 22.8 | 0.432 |
|
Helsinki-NLP/opus-tatoeba-af-ru | d6b635deae3dd0350db0b6c40d1921a2886a2de4 | 2021-02-12T13:01:01.000Z | [
"pytorch",
"marian",
"text2text-generation",
"af",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-tatoeba-af-ru | 2 | null | transformers | 23,127 | ---
language:
- af
- ru
tags:
- translation
license: apache-2.0
---
### af-ru
* source group: Afrikaans
* target group: Russian
* OPUS readme: [afr-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-09-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.zip)
* test set translations: [opus-2020-09-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.test.txt)
* test set scores: [opus-2020-09-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.rus | 38.2 | 0.580 |
### System Info:
- hf_name: af-ru
- source_languages: afr
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'ru']
- src_constituents: ('Afrikaans', {'afr'})
- tgt_constituents: ('Russian', {'rus'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: afr-rus
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.test.txt
- src_alpha3: afr
- tgt_alpha3: rus
- chrF2_score: 0.58
- bleu: 38.2
- brevity_penalty: 0.992
- ref_len: 1213
- src_name: Afrikaans
- tgt_name: Russian
- train_date: 2020-01-01 00:00:00
- src_alpha2: af
- tgt_alpha2: ru
- prefer_old: False
- short_pair: af-ru
- helsinki_git_sha: e8c308a96c1bd0b4ca6a8ce174783f93c3e30f25
- transformers_git_sha: 31245775e5772fbded1ac07ed89fbba3b5af0cb9
- port_machine: LM0-400-22516.local
- port_time: 2021-02-12-14:52 |
HeyLucasLeao/byt5-base-pt-product-reviews | 55dee99ce5fba70acafe892f53e6bf8a9df335a4 | 2021-08-25T17:02:21.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2105.13626",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | HeyLucasLeao | null | HeyLucasLeao/byt5-base-pt-product-reviews | 2 | 1 | transformers | 23,128 |
## ByT5 Base Portuguese Product Reviews
#### Model Description
This is a finetuned version of ByT5 Base by Google for sentiment analysis of product reviews in Portuguese.
##### Paper: https://arxiv.org/abs/2105.13626
#### Training data
It was trained on product reviews from Americanas.com. You can find the data here: https://github.com/HeyLucasLeao/finetuning-byt5-model.
#### Training Procedure
It was finetuned using the Trainer class available in the Hugging Face library. Accuracy, precision, recall, and F1 score were used for evaluation.
##### Learning Rate: **1e-4**
##### Epochs: **1**
##### Colab for Finetuning: https://drive.google.com/file/d/17TcaN52moq7i7TE2EbcVbwQEQuAIQU63/view?usp=sharing
##### Colab for Metrics: https://colab.research.google.com/drive/1wbTDfOsE45UL8Q3ZD1_FTUmdVOKCcJFf#scrollTo=S4nuLkAFrlZ6
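A minimal sketch of the Trainer setup described above, using the card's learning rate and epoch count; the function signature, output directory, and dataset arguments are assumptions, not the author's exact code.
```python
# Hypothetical finetuning skeleton matching the card's hyperparameters.
from transformers import AutoModelForSeq2SeqLM, Trainer, TrainingArguments

def finetune(train_dataset, eval_dataset):
    model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base")
    args = TrainingArguments(
        output_dir="byt5-base-pt-product-reviews",  # assumed name
        learning_rate=1e-4,   # from the card
        num_train_epochs=1,   # from the card
    )
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_dataset,
                      eval_dataset=eval_dataset)
    trainer.train()
    return trainer
```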
#### Score:
```python
Training Set:
'accuracy': 0.9019706922688226,
'f1': 0.9305820610687022,
'precision': 0.9596555965559656,
'recall': 0.9032183375781431
Test Set:
'accuracy': 0.9019409684035312,
'f1': 0.9303758732034697,
'precision': 0.9006660401258529,
'recall': 0.9621126145787866
Validation Set:
'accuracy': 0.9044948078526491,
'f1': 0.9321924443009364,
'precision': 0.9024426549173129,
'recall': 0.9639705531617191
```
#### Goals
My true intention was purely educational: making this version of the model available as an example for future purposes.
#### How to use
``` python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import numpy as np
import torch

# Use a GPU when one is available.
if torch.cuda.is_available():
    device = torch.device('cuda')
else:
    device = torch.device('cpu')
print(device)

tokenizer = AutoTokenizer.from_pretrained("HeyLucasLeao/byt5-base-pt-product-reviews")
model = AutoModelForSeq2SeqLM.from_pretrained("HeyLucasLeao/byt5-base-pt-product-reviews")
model.to(device)

def classificar_review(review):
    # Tokenize the review and move the tensors to the model's device.
    inputs = tokenizer([review], padding='max_length', truncation=True, max_length=512, return_tensors='pt')
    input_ids = inputs.input_ids.to(device)
    attention_mask = inputs.attention_mask.to(device)
    output = model.generate(input_ids, attention_mask=attention_mask)
    pred = np.argmax(output.cpu(), axis=1)
    dici = {0: 'Review Negativo', 1: 'Review Positivo'}
    return dici[pred.item()]

# Illustrative input; any Portuguese product review works here.
print(classificar_review("O produto chegou rápido e funciona muito bem."))
``` |
Holako/NER_model_holako | b358aa4f389cd368c3c312ccb06c181d8b90df7c | 2022-02-23T09:07:06.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Holako | null | Holako/NER_model_holako | 2 | null | transformers | 23,129 |
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Holako/NER_model_holako")
model = AutoModelForTokenClassification.from_pretrained("Holako/NER_model_holako")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "اسمي احمد"  # "My name is Ahmad"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
Language|Dataset
-|-
Arabic | [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/)
|
HungChau/bert_concept_extraction | df0c5b53cd8673d99a95a2cdf6b2b19fc0dfdcb1 | 2021-09-03T19:23:40.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/bert_concept_extraction | 2 | null | transformers | 23,130 | Entry not found |
HungChau/bert_concept_extraction_iir_from_kp20k_v1.1 | baa092caeb8f51807aa45d681bf933331b08fe0a | 2021-10-06T14:38:09.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/bert_concept_extraction_iir_from_kp20k_v1.1 | 2 | null | transformers | 23,131 | Entry not found |
HungChau/bert_concept_extraction_kp20k_from_iir_v1.1 | 1ad0900d465ec5dcd808f816da5824868b8b4d22 | 2021-10-06T15:51:21.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/bert_concept_extraction_kp20k_from_iir_v1.1 | 2 | null | transformers | 23,132 | Entry not found |
HungChau/distilbert-base-cased-concept-extraction-iir-v1.3 | d92fb1748d98591764f6fb393b26ce2c1c74df94 | 2021-11-17T01:30:24.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-iir-v1.3 | 2 | null | transformers | 23,133 | Entry not found |
HungChau/distilbert-base-cased-concept-extraction-kp20k-v1.2-concept-extraction-wikipedia-v1.2 | da0c9ebdfbef63d9cbb2dc9ece2380b0f5dbfbf9 | 2021-11-18T19:35:39.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-kp20k-v1.2-concept-extraction-wikipedia-v1.2 | 2 | null | transformers | 23,134 | Entry not found |
HungChau/distilbert-base-cased-concept-extraction-kp20k-v1.2 | 644bfaad5ba5509e4988c10462a10d359ca6f926 | 2021-11-16T09:53:17.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-kp20k-v1.2 | 2 | null | transformers | 23,135 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-iir-v1.0-concept-extraction-kp20k-v1.0 | 87483822b51ac82aa8c147bea6c92176c19e945e | 2021-09-25T01:32:32.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-iir-v1.0-concept-extraction-kp20k-v1.0 | 2 | null | transformers | 23,136 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-iir-v1.2-concept-extraction-kp20k-v1.2 | 232a1b6d684a28986fb9124eb6eb12522c742ab8 | 2021-11-18T12:33:39.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-iir-v1.2-concept-extraction-kp20k-v1.2 | 2 | null | transformers | 23,137 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-iir-v1.2 | ad4169f713d8c55f8b1b82bee7986e9fb6ccddd8 | 2021-11-16T00:43:30.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-iir-v1.2 | 2 | null | transformers | 23,138 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.0-concept-extraction-iir-v1.0 | 3bf7ff93daf202942fdec598d76c4b1ba36dedc3 | 2021-09-24T15:32:30.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.0-concept-extraction-iir-v1.0 | 2 | null | transformers | 23,139 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.0-concept-extraction-wikipedia-v1.0 | b70df2e5ea843bfd3539b6b8b02fff3eb7a274d8 | 2021-11-01T21:05:49.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.0-concept-extraction-wikipedia-v1.0 | 2 | null | transformers | 23,140 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.0 | 07244c172ce4c7c968c5b627199e51f314a7a4b3 | 2021-09-24T02:40:06.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-kp20k-v1.0 | 2 | null | transformers | 23,141 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.0-concept-extraction-iir-v1.0 | 6c2ce6230ddc9ae97f2d9cea5e29b4bd3415407a | 2021-11-02T23:34:28.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.0-concept-extraction-iir-v1.0 | 2 | null | transformers | 23,142 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.0-concept-extraction-iir-v1.3 | 36766502f8a059c9646b7b1eff57b9ad4d0225bd | 2021-11-18T03:56:11.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.0-concept-extraction-iir-v1.3 | 2 | null | transformers | 23,143 | Entry not found |
HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.0-concept-extraction-kp20k-v1.0 | 59c48bdb0c1739b12d3c71d0f85f7ea81888d8eb | 2021-11-03T04:13:16.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HungChau | null | HungChau/distilbert-base-uncased-concept-extraction-wikipedia-v1.0-concept-extraction-kp20k-v1.0 | 2 | null | transformers | 23,144 | Entry not found |
HypNyx/DialoGPT-small-Thanos | 14d0b4172cdf015bdf96d3aeda61b8e15dc9ff04 | 2021-09-02T15:18:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | HypNyx | null | HypNyx/DialoGPT-small-Thanos | 2 | null | transformers | 23,145 | ---
tags:
- conversational
---
# Thanos DialoGPT Model
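A minimal chat sketch following the usual DialoGPT usage pattern; the prompt and generation settings are illustrative assumptions, not values from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HypNyx/DialoGPT-small-Thanos")
model = AutoModelForCausalLM.from_pretrained("HypNyx/DialoGPT-small-Thanos")

# Encode the user's message, appending the end-of-sequence token
input_ids = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token, return_tensors="pt")

# Generate a reply (settings here are illustrative, not tuned)
reply_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```
|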
IMJONEZZ/SlovenBERTcina | 141147e6796f8455ea9546b0df84fb7ed516c5fd | 2021-07-29T05:26:25.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | IMJONEZZ | null | IMJONEZZ/SlovenBERTcina | 2 | 1 | transformers | 23,146 | # Slovak RoBERTa Masked Language Model
### 83M parameters in the small model
Medium and large models coming soon!
The pretrained RoBERTa tokenizer vocab and merges are included.
---
## Training params:
- **Dataset**:
An 8 GB Slovak monolingual dataset, including ParaCrawl (monolingual), OSCAR, and several gigabytes of my own scraped and cleaned data.
- **Preprocessing**:
Tokenized with a ByteLevelBPETokenizer pretrained on the same dataset. Uncased, with the `<s>`, `<pad>`, `</s>`, `<unk>`, and `<mask>` special tokens.
- **Evaluation results** (a runnable usage sketch follows at the end of this card):
- Mnoho ľudí tu<mask>
* žije.
* žijú.
* je.
* trpí.
- Ako sa<mask>
* máte
* máš
* má
* hovorí
- Plážová sezóna pod Zoborom patrí medzi<mask> obdobia.
* ročné
* najkrajšie
* najobľúbenejšie
* najnáročnejšie
- **Limitations**:
The current model is fairly small, although it works very well. This model is meant to be finetuned on downstream tasks e.g. Part-of-Speech tagging, Question Answering, anything in GLUE or SUPERGLUE.
- **Credit**:
If you use this or any of my models in research or professional work, please credit me - Christopher Brousseau in said work.
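### Quick usage check
A minimal fill-mask sketch mirroring the first evaluation example above; the pipeline call is standard Transformers usage, not code taken from this card.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="IMJONEZZ/SlovenBERTcina")
# Same prompt as the first evaluation example above
for prediction in fill_mask("Mnoho ľudí tu<mask>"):
    print(prediction["token_str"], round(prediction["score"], 4))
```
|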
Ife/ES-CA | 7a49d9fa6cc5cb22fb4e0b709da20d856c90557b | 2021-09-16T02:54:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ife | null | Ife/ES-CA | 2 | null | transformers | 23,147 | Entry not found |
Ifenna/dbert-3epoch | 5ea0711ed83eca3d06b6606e14e576fe5951fece | 2021-07-24T23:48:06.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Ifenna | null | Ifenna/dbert-3epoch | 2 | null | transformers | 23,148 | A distilbert model fine-tuned for question answering. |
Ilyabarigou/Genesis-harrybotter | b6805e16df88ec4e417c21ffc0819ed73afd782a | 2021-09-02T16:37:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Ilyabarigou | null | Ilyabarigou/Genesis-harrybotter | 2 | null | transformers | 23,149 | ---
tags:
- conversational
---
# Harry Botter Model |
InfoCoV/Cro-CoV-BERTic | 9e9fe8f5e4158beb723b3ddfc243129bd4e55aba | 2022-02-11T14:20:05.000Z | [
"pytorch",
"tensorboard",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | InfoCoV | null | InfoCoV/Cro-CoV-BERTic | 2 | null | transformers | 23,150 | Entry not found |
Iskaj/300m_cv8.0_nl_base | 0ecba7407414dd88fd41c7ef947cbb2ff9c09579 | 2022-02-04T11:38:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Iskaj | null | Iskaj/300m_cv8.0_nl_base | 2 | null | transformers | 23,151 | Entry not found |
Iskaj/newnew | a4e2a5607f3a15fa9e0549e9d3a6a137e8d4bd25 | 2022-02-02T20:02:08.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Iskaj | null | Iskaj/newnew | 2 | null | transformers | 23,152 | ---
language:
- nl
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: newnew
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# newnew
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset.
It achieves the following results on the evaluation set:
- Loss: 11.4375
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
|
Iskaj/w2v-xlsr-dutch-lm-added | ad70d5cf7b6cfecf7d6c3b126ebb934d3e01b9c3 | 2022-01-27T15:58:50.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Iskaj | null | Iskaj/w2v-xlsr-dutch-lm-added | 2 | null | transformers | 23,153 | Copy of "facebook/wav2vec2-large-xlsr-53-dutch"
|
Iskaj/xlsr_300m_CV_8.0_50_EP_new_params_nl | 6b324dc6c213528ff5f14a71976adf2c1529fa01 | 2022-03-23T18:34:30.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Iskaj | null | Iskaj/xlsr_300m_CV_8.0_50_EP_new_params_nl | 2 | null | transformers | 23,154 |
---
language:
- nl
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- nl
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Dutch
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8 NL
type: mozilla-foundation/common_voice_8_0
args: nl
metrics:
- name: Test WER
type: wer
value: 35.44
- name: Test CER
type: cer
value: 19.57
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Test WER
type: wer
value: 37.17
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 38.73
---
# xlsr_300m_CV_8.0_50_EP_new_params_nl
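A minimal transcription sketch; "sample.wav" is a placeholder path to a 16 kHz Dutch audio clip, and the standard ASR pipeline is assumed rather than documented by this card.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Iskaj/xlsr_300m_CV_8.0_50_EP_new_params_nl")
print(asr("sample.wav")["text"])
```
|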
Istiaque190515/harry_potter | 4dd47fe77b010c7fb9f218baf3ed612d45fd2f91 | 2021-09-18T15:56:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Istiaque190515 | null | Istiaque190515/harry_potter | 2 | null | transformers | 23,155 | ---
tags:
- conversational
---
# harry_potter |
Itcast/cnc_output | ae06d81c9eec3b21305b2e74743515ee5c0fd14f | 2020-01-01T15:20:04.000Z | [
"pytorch",
"transformers"
] | null | false | Itcast | null | Itcast/cnc_output | 2 | null | transformers | 23,156 | Entry not found |
Jacobo/axiothea | 7049fecffde2af7481fcb78cc22149be9f0be59d | 2021-11-15T20:07:05.000Z | [
"pytorch",
"roberta",
"fill-mask",
"grc",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Jacobo | null | Jacobo/axiothea | 2 | null | transformers | 23,157 | ---
tags:
- generated_from_trainer
language:
- grc
model-index:
- name: dioBERTo
results: []
widget:
- text: "Πλάτων ὁ Περικτιόνης <mask> γένος ἀνέφερεν εἰς Σόλωνα."
- text: "ὁ Κριτίας ἀπέβλεψε <mask> τὴν θύραν."
- text: "Ὦ φίλε Κλεινία, καλῶς μὲν <mask>."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# axiothea
This is an experimental RoBERTa model trained on an Ancient Greek corpus of about 900 MB scraped from the web and post-processed. Duplicate texts and editorial punctuation were removed. The training dataset will soon be available in the Hugging Face datasets hub. Training a model for Ancient Greek is challenging, given that it is a low-resource language in which roughly 50% of the register survives only in fragmentary texts. The model is provided by the Diogenet project at the University of California, San Diego.
It achieves the following results on the evaluation set:
- Loss: 3.3351
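A minimal inference sketch using one of the widget examples above (standard fill-mask pipeline usage is assumed):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Jacobo/axiothea")
for prediction in fill_mask("Πλάτων ὁ Περικτιόνης <mask> γένος ἀνέφερεν εἰς Σόλωνα."):
    print(prediction["token_str"], round(prediction["score"], 4))
```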
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7013 | 1.0 | 341422 | 4.8813 |
| 4.2866 | 2.0 | 682844 | 4.4422 |
| 4.0496 | 3.0 | 1024266 | 4.2132 |
| 3.8503 | 4.0 | 1365688 | 4.0246 |
| 3.6917 | 5.0 | 1707110 | 3.8756 |
| 3.4917 | 6.0 | 2048532 | 3.7381 |
| 3.3907 | 7.0 | 2389954 | 3.6107 |
| 3.2876 | 8.0 | 2731376 | 3.5044 |
| 3.1994 | 9.0 | 3072798 | 3.3980 |
| 3.0806 | 10.0 | 3414220 | 3.3095 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
|
Jainil30/wav2vec2-base-csa-10-rev3 | 5e12be3f25d6a2fcff2683e04cbe32727195e492 | 2022-01-12T14:55:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Jainil30 | null | Jainil30/wav2vec2-base-csa-10-rev3 | 2 | null | transformers | 23,158 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-csa-10-rev3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-csa-10-rev3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5869
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 18.7934 | 25.0 | 200 | 3.5869 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Jeevesh8/DA-LF | 9cc47271d709ac03588ab2eb66a8743cf4b1be64 | 2021-11-12T10:02:01.000Z | [
"pytorch",
"longformer",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jeevesh8 | null | Jeevesh8/DA-LF | 2 | null | transformers | 23,159 | Entry not found |
Jeevesh8/sMLM-256-LF | 7d56118efe8fe3d676f5ccfffe6d4e3ec33c05af | 2021-11-12T09:57:06.000Z | [
"pytorch",
"longformer",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jeevesh8 | null | Jeevesh8/sMLM-256-LF | 2 | null | transformers | 23,160 | Entry not found |
Jeevesh8/sMLM-LF | 59439ddcbaeb99504e04685a08dce4d4d19f25fc | 2021-11-12T09:02:58.000Z | [
"pytorch",
"longformer",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jeevesh8 | null | Jeevesh8/sMLM-LF | 2 | null | transformers | 23,161 | Entry not found |
Jeska/BertjeWDialDataALLQonly08 | c72a016efecb2f8a66d7b7eca20a72eca610bfbe | 2021-12-11T22:48:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialDataALLQonly08 | 2 | null | transformers | 23,162 | Entry not found |
Jeska/BertjeWDialDataALLQonly128 | 7bd0ab5415020eb7408103010eeae50b017aa8ae | 2021-12-07T18:57:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialDataALLQonly128 | 2 | null | transformers | 23,163 | ---
tags:
- generated_from_trainer
model-index:
- name: BertjeWDialDataALLQonly128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALLQonly128
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2326 | 1.0 | 871 | 2.1509 |
| 2.1375 | 2.0 | 1742 | 2.1089 |
| 2.0442 | 3.0 | 2613 | 2.0655 |
| 2.0116 | 4.0 | 3484 | 2.0433 |
| 1.9346 | 5.0 | 4355 | 2.0134 |
| 1.9056 | 6.0 | 5226 | 1.9956 |
| 1.8295 | 7.0 | 6097 | 2.0287 |
| 1.8204 | 8.0 | 6968 | 2.0173 |
| 1.7928 | 9.0 | 7839 | 2.0251 |
| 1.7357 | 10.0 | 8710 | 2.0148 |
| 1.7318 | 11.0 | 9581 | 1.9274 |
| 1.7311 | 12.0 | 10452 | 1.9314 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Jeska/BertjeWDialDataQA20k | 10c69c7ab9c4eff8366675cbd2d7f4fe45803478 | 2021-11-29T15:35:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Jeska | null | Jeska/BertjeWDialDataQA20k | 2 | null | transformers | 23,164 | ---
tags:
- generated_from_trainer
model-index:
- name: BertjeWDialDataQA20k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataQA20k
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1713 | 1.0 | 1542 | 2.0098 |
| 2.0736 | 2.0 | 3084 | 1.9853 |
| 2.0543 | 3.0 | 4626 | 2.0134 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
LysandreJik/dummy-model | 288dc58c209692e41b0c177a0bed30cfd9c25f2c | 2021-06-30T17:38:18.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | LysandreJik | null | LysandreJik/dummy-model | 2 | null | transformers | 23,165 | Entry not found |
LysandreJik/local_dir_1 | d137267fc732600d7dd89603145bad8dc4b7a277 | 2021-09-06T19:43:31.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | LysandreJik | null | LysandreJik/local_dir_1 | 2 | null | transformers | 23,166 | Entry not found |
Jipski/gpt2-Flo-BasBoettcher-Chefkoch | 9758144b2ab061bd553e13f010688d1f9c34423b | 2021-12-06T21:45:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Jipski | null | Jipski/gpt2-Flo-BasBoettcher-Chefkoch | 2 | null | transformers | 23,167 | Entry not found |
Jipski/gpt2-FloSolo | 14f0da89815f4d29bee32fdb6f464c58abd15b2e | 2021-12-06T21:39:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Jipski | null | Jipski/gpt2-FloSolo | 2 | null | transformers | 23,168 | Entry not found |
JonatanGk/roberta-base-bne-finetuned-sqac | 4e9333ce737c6e93db0f1db7e061fee91cd7c7ca | 2021-10-21T21:06:47.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:sqac",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | JonatanGk | null | JonatanGk/roberta-base-bne-finetuned-sqac | 2 | 1 | transformers | 23,169 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sqac
model-index:
- name: roberta-base-bne-finetuned-sqac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-sqac
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the sqac dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9924 | 1.0 | 1196 | 0.8670 |
| 0.474 | 2.0 | 2392 | 0.8923 |
| 0.1637 | 3.0 | 3588 | 1.2066 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
JonathanSum/dummy-model | 2952560bb043ffc13418c1d8a823a207deeaecf1 | 2021-07-31T17:14:35.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | JonathanSum | null | JonathanSum/dummy-model | 2 | null | transformers | 23,170 | Entry not found |
Jung/t5-large-finetuned | b54775482e553ef1593c3d6b2d79f1b9a4e3bbe9 | 2021-06-23T02:35:40.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Jung | null | Jung/t5-large-finetuned | 2 | null | transformers | 23,171 | Entry not found |
Junmai/klue-roberta-large-copa-finetuned-v1 | 08836da8cd57b73066034775b65630261de1992a | 2021-12-08T06:02:06.000Z | [
"pytorch",
"roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | Junmai | null | Junmai/klue-roberta-large-copa-finetuned-v1 | 2 | null | transformers | 23,172 | Entry not found |
Junmai/pretrained-klue-roberta-v1 | 23a7a88a644b5ba8247a5c7efdfbe260cf148405 | 2021-12-08T04:49:00.000Z | [
"pytorch",
"roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | Junmai | null | Junmai/pretrained-klue-roberta-v1 | 2 | null | transformers | 23,173 | Entry not found |
Kaledmgo/DialoGPT-small-donajulia | 05cf431a81c5e359a804b942d7ef3c97154579db | 2021-09-01T02:05:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Kaledmgo | null | Kaledmgo/DialoGPT-small-donajulia | 2 | null | transformers | 23,174 | ---
tags:
- conversational
---
# Dona Julia DialoGPT Model |
Kalindu/SinBerto | c892311d6c8a1ef7d9c81e871a62e9e064fe1224 | 2021-06-17T16:37:19.000Z | [
"pytorch",
"roberta",
"fill-mask",
"si",
"arxiv:1907.11692",
"transformers",
"SinBERTo",
"Sinhala",
"autotrain_compatible"
] | fill-mask | false | Kalindu | null | Kalindu/SinBerto | 2 | null | transformers | 23,175 | ---
language: si
tags:
- SinBERTo
- Sinhala
- roberta
---
### Overview
SinBerto is a small language model trained on a small Sinhala news corpus. Sinhala is a low-resource language compared to most other languages.
### Model Specifications
- model: [RoBERTa](https://arxiv.org/abs/1907.11692)
- vocab_size: 52_000
- max_position_embeddings: 514
- num_attention_heads: 12
- num_hidden_layers: 6
- type_vocab_size: 1
### How to use from the Transformers Library
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Kalindu/SinBerto")
model = AutoModelForMaskedLM.from_pretrained("Kalindu/SinBerto")
```
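A hedged fill-mask sketch; the Sinhala sentence below is an arbitrary placeholder, not an example from the model card.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Kalindu/SinBerto")
# Placeholder sentence, roughly "Sri Lanka is a <mask> country."
print(fill_mask("ශ්‍රී ලංකාව <mask> රටකි."))
```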
### OR Clone the model repo
```bash
git lfs install
git clone https://huggingface.co/Kalindu/SinBerto
```
|
KekLord/DialoGPT-small-rick3 | 8222541f0bc65bf5c08cc69737d19a819dc2373e | 2021-11-02T06:00:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | KekLord | null | KekLord/DialoGPT-small-rick3 | 2 | null | transformers | 23,176 | ---
tags:
- conversational
---
# Rick3 DialoGPT Model |
KheireddineDaouadi/SIMCSEARA | add7207d6e857e40a94e40aa40ad0b4fd19d0f43 | 2022-02-14T22:38:53.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | KheireddineDaouadi | null | KheireddineDaouadi/SIMCSEARA | 2 | null | transformers | 23,177 | Entry not found |
Kshaunish/DialoGPT-small-rick | 0fe7c5853833b89bbd67501efdcf0890b9f3c9f1 | 2021-08-31T10:40:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Kshaunish | null | Kshaunish/DialoGPT-small-rick | 2 | null | transformers | 23,178 | ---
tags:
- conversational
---
# Rick Sanchez DialoGPT Model |
Kush/DialoGPT-small-harrypotter | 074ad2c75ed96f7d53af5155f415eee44b2cccb7 | 2021-10-17T12:56:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Kush | null | Kush/DialoGPT-small-harrypotter | 2 | null | transformers | 23,179 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Kyuyoung11/haremotions-v4 | f6655579c93e075fa666c31efd6dbe75a56691e5 | 2021-08-15T06:05:41.000Z | [
"pytorch",
"electra",
"transformers"
] | null | false | Kyuyoung11 | null | Kyuyoung11/haremotions-v4 | 2 | null | transformers | 23,180 | Entry not found |
Lara/opus-mt-en-de-finetuned-en-to-de | cc0193314bd7d174956bb177820de0a418a7f7d6 | 2021-10-31T21:33:03.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Lara | null | Lara/opus-mt-en-de-finetuned-en-to-de | 2 | null | transformers | 23,181 | Entry not found |
LegolasTheElf/Wav2Vec2_xls_r_300m_hi_cv7_part2 | 1bd6f900ef4a9ff43258302b78b1c5af480898a2 | 2022-02-08T07:22:10.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | LegolasTheElf | null | LegolasTheElf/Wav2Vec2_xls_r_300m_hi_cv7_part2 | 2 | null | transformers | 23,182 | Entry not found |
LegolasTheElf/Wav2Vec2_xls_r_openslr_Hi_V2 | 3f3402b2d657a5db1864d69451679faa63413fdb | 2022-02-04T07:53:30.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"transformers",
"Harveenchadha/indic-voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | LegolasTheElf | null | LegolasTheElf/Wav2Vec2_xls_r_openslr_Hi_V2 | 2 | null | transformers | 23,183 | ---
license: apache-2.0
language:
- hi
tags:
- automatic-speech-recognition
- Harveenchadha/indic-voice
- generated_from_trainer
model-index:
- name: Wav2Vec2_xls_r_openslr_Hi_V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_openslr_Hi_V2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [Harveenchadha/indic-voice](https://huggingface.co/datasets/Harveenchadha/indic-voice) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3184
- Wer: 0.3104
- Cer: 0.0958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 7.1097 | 0.48 | 300 | 3.3989 | 1.0 | 0.9965 |
| 3.0235 | 0.96 | 600 | 1.3183 | 0.7977 | 0.3163 |
| 1.1419 | 1.44 | 900 | 0.6416 | 0.5543 | 0.1913 |
| 0.8242 | 1.92 | 1200 | 0.5063 | 0.4804 | 0.1608 |
| 0.6876 | 2.56 | 1600 | 0.4401 | 0.4280 | 0.1387 |
| 0.5868 | 3.21 | 2000 | 0.3940 | 0.3907 | 0.1249 |
| 0.5285 | 3.85 | 2400 | 0.3661 | 0.3763 | 0.1200 |
| 0.5 | 4.49 | 2800 | 0.3528 | 0.3610 | 0.1136 |
| 0.4538 | 5.13 | 3200 | 0.3403 | 0.3485 | 0.1086 |
| 0.4165 | 5.77 | 3600 | 0.3335 | 0.3439 | 0.1062 |
| 0.3989 | 6.41 | 4000 | 0.3264 | 0.3340 | 0.1036 |
| 0.3679 | 7.05 | 4400 | 0.3256 | 0.3287 | 0.1013 |
| 0.3517 | 7.69 | 4800 | 0.3212 | 0.3223 | 0.1002 |
| 0.3357 | 8.33 | 5200 | 0.3173 | 0.3196 | 0.0986 |
| 0.3225 | 8.97 | 5600 | 0.3142 | 0.3177 | 0.0985 |
| 0.3057 | 9.62 | 6000 | 0.3199 | 0.3156 | 0.0975 |
| 0.2972 | 10.26 | 6400 | 0.3139 | 0.3128 | 0.0967 |
| 0.2881 | 10.9 | 6800 | 0.3184 | 0.3107 | 0.0957 |
| 0.2791 | 11.54 | 7200 | 0.3184 | 0.3104 | 0.0958 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Leisa/marian-finetuned-kde4-en-to-fr-accelerate | 21fddb4fcac181c27db0876a21091fa65e7ab307 | 2021-11-21T07:07:31.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Leisa | null | Leisa/marian-finetuned-kde4-en-to-fr-accelerate | 2 | null | transformers | 23,184 | Entry not found |
Leisa/marian-finetuned-kde4-en-to-fr | 6dcac6c22fdb5f58747f4d3a3b74d8b8358126bb | 2021-11-21T05:25:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | Leisa | null | Leisa/marian-finetuned-kde4-en-to-fr | 2 | null | transformers | 23,185 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.94538305859332
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8558
- Bleu: 52.9454
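A minimal inference sketch; the input sentence is an arbitrary example and standard translation-pipeline usage is assumed.

```python
from transformers import pipeline

translator = pipeline("translation", model="Leisa/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```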
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
|
LeoCordoba/beto2beto | b88c1ce17ce488d2780d8a3366039156ec7d97ea | 2021-09-08T16:31:21.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"es",
"dataset:LeoCordoba/CC-NEWS-ES",
"transformers",
"text-generation",
"spanish",
"beto",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text-generation | false | LeoCordoba | null | LeoCordoba/beto2beto | 2 | null | transformers | 23,186 | ---
language: es
tags:
- text-generation
- spanish
- encoder-decoder
- beto
license: apache-2.0
datasets:
- LeoCordoba/CC-NEWS-ES
model-index:
- name: beto2beto
---
## beto2beto
Usage example here: https://colab.research.google.com/drive/18a2ZfF1e_Kyyydlv8INQIkJbv294xcAm?usp=sharing
Trained for 3 epochs on CC-NEWS-ES (2019), approximately 68,000 steps. Encoder max length: 40; decoder max length: 128.
## Hyperparameters
## Usage
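A minimal generation sketch, assuming the standard `EncoderDecoderModel` API; the prompt and decoding settings below are illustrative assumptions, and the Colab linked above shows the full workflow.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("LeoCordoba/beto2beto")
model = EncoderDecoderModel.from_pretrained("LeoCordoba/beto2beto")

text = "La inteligencia artificial"  # arbitrary Spanish prompt
inputs = tokenizer(text, return_tensors="pt")
# decoder_start_token_id is an assumption ([CLS] for BERT-style decoders), in case the saved config omits it
outputs = model.generate(**inputs, max_length=128, decoder_start_token_id=tokenizer.cls_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```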
## Results
| key | value |
| --- | ----- |
| test_loss | 2.65148806571960452 |
|
Leostronkest/DialoGPT-small-michael | f81beb6097039ac3a925da96c436ccff064e42c5 | 2022-02-14T23:39:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Leostronkest | null | Leostronkest/DialoGPT-small-michael | 2 | null | transformers | 23,187 | ---
tags:
- conversational
---
# Michael DialoGPT Model |
Li/roberta-base-squad2 | 58f0e7bb52d2163ed10244f131d5d0bf486e42a4 | 2021-09-26T04:58:13.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Li | null | Li/roberta-base-squad2 | 2 | null | transformers | 23,188 | [roberta-base](https://huggingface.co/roberta-base) fine-tuned on the [SQuAD2](https://rajpurkar.github.io/SQuAD-explorer) dataset for 2 epochs.
The fine-tuning process was performed on a single NVIDIA Tesla T4 GPU (15GB). The hyperparameters are:
```
max_seq_length=512
per_device_train_batch_size=8
gradient_accumulation_steps=4
total train batch size (w. parallel, distributed & accumulation) = 32
learning_rate=3e-5
```
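A minimal inference sketch; the question/context pair reuses text from the "More information" section below, and standard question-answering pipeline usage is assumed.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Li/roberta-base-squad2")
result = qa(
    question="How many unanswerable questions does SQuAD2.0 add?",
    context="SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 "
            "unanswerable questions written adversarially by crowdworkers.",
)
print(result["answer"], result["score"])
```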
## Evaluation results
```
"eval_exact": 80.33352985766024,
"eval_f1": 83.38322909593009,
"eval_HasAns_exact": 77.81713900134953,
"eval_HasAns_f1": 83.925283241562,
"eval_HasAns_total": 5928,
"eval_NoAns_exact": 82.84272497897393,
"eval_NoAns_f1": 82.84272497897393,
"eval_NoAns_total": 5945,
"eval_best_exact": 80.33352985766024,
"eval_best_exact_thresh": 0.0,
"eval_best_f1": 83.38322909593005,
"eval_best_f1_thresh": 0.0,
"eval_samples": 11955,
"eval_total": 11873,
```
## More information
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. (https://rajpurkar.github.io/SQuAD-explorer/) |
LiqiangXiao/ConvSearch_QU | a1eb64901d799990a62a27019b839a2408e3a0dd | 2022-01-20T06:32:35.000Z | [
"pytorch",
"bart",
"text2text-generation",
"arxiv:2109.05460",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | LiqiangXiao | null | LiqiangXiao/ConvSearch_QU | 2 | 4 | transformers | 23,189 | ## End-to-end Conversational search model
An end-to-end conversational search system for online shopping. It was introduced in [this paper](https://arxiv.org/abs/2109.05460), published at EMNLP.
## Model description
ConvSearch is an end-to-end conversational search system that deeply combines the dialog and search systems to improve search performance. In particular, the Product Search module leverages both structured product attributes and unstructured product text (e.g. profiles), where the product text may contain phrases matching user utterances when the schema is incomplete or a product attribute value is missing. Put together, our system has the advantage of both reduced error accumulation along individual modules and enhanced robustness against product schema/knowledge gaps.
## Intended uses & limitations
You can use the raw model to understand the dialog between the customer and the service agent. The concatenated dialogs can be parsed into intents (e.g. inform, request, buy) and product attributes.
You can also fine-tune this model on similar downstream tasks, such as a shopping dialog system for your own scenario or a customer service system. Since our model is seq-to-seq, any dialog system that can be reformulated as a sequence-to-sequence task can be implemented on top of this model.
## How to use
You can use this model directly with:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/ConvSearch_QU")
model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/ConvSearch_QU")
```
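Continuing from the load snippet above, a hedged generation sketch; the dialog string and separator format are hypothetical, since the exact concatenation scheme the model expects is defined in the paper.

```python
# Hypothetical dialog string; the real format comes from the paper's preprocessing
dialog = "user: I want to buy running shoes. agent: What size do you need? user: Size 9."
inputs = tokenizer(dialog, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)  # length limit is an assumption
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```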
## Training data
ConvSearch was pretrained on a dialog corpus with 49,999 dialogs/942,766 turns.
|
LucasS/bertLargeABSA | 794c6fcb52378513ab5825acf62522cf2a257fb8 | 2021-09-02T19:53:31.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | LucasS | null | LucasS/bertLargeABSA | 2 | null | transformers | 23,190 | Entry not found |
Lurka/DialoGPT-medium-kon | 31a5c99e2627b77862879eefd09f952a07777d45 | 2021-10-07T14:27:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Lurka | null | Lurka/DialoGPT-medium-kon | 2 | null | transformers | 23,191 | ---
tags:
- conversational
---
# Yui DialoGPT Model |
Luxiere/DialoGPT-medium-tyrion | c517b785a7107e52e18b2e375de7ded4a554e42d | 2021-10-20T17:05:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Luxiere | null | Luxiere/DialoGPT-medium-tyrion | 2 | null | transformers | 23,192 | ---
tags:
- conversational
---
# Tyrion DialoGPT Model |
MM98/ft-bz | 732d368f43eb78160231edda7e5ca3a99f3a9478 | 2022-01-05T17:34:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | MM98 | null | MM98/ft-bz | 2 | null | transformers | 23,193 | Entry not found |
MMG/bert-base-spanish-wwm-cased-finetuned-squad2-es | 810a1e6f6616930d8e36d48635958a67f28bc6df | 2021-12-22T13:11:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"es",
"dataset:squad_es",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | MMG | null | MMG/bert-base-spanish-wwm-cased-finetuned-squad2-es | 2 | null | transformers | 23,194 | ---
tags:
- generated_from_trainer
datasets:
- squad_es
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-squad2-es
results: []
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-squad2-es
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the squad_es dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2841
- exact: 62.53162421993591
- f1: 69.33421368741254
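A minimal inference sketch; the Spanish question/context pair is an arbitrary example, and standard question-answering pipeline usage is assumed.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="MMG/bert-base-spanish-wwm-cased-finetuned-squad2-es")
print(qa(question="¿Dónde vive Ana?", context="Me llamo Ana y vivo en Madrid."))
```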
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
MYX4567/distilbert-base-uncased-finetuned-squad | 4595ea48a20995baf6439e07546c1281e02b6878 | 2021-07-28T08:07:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | MYX4567 | null | MYX4567/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 23,195 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model_index:
- name: distilbert-base-uncased-finetuned-squad
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: squad
type: squad
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1520
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2177 | 1.0 | 5533 | 1.1565 |
| 0.9472 | 2.0 | 11066 | 1.1174 |
| 0.7634 | 3.0 | 16599 | 1.1520 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
MaggieXM/deberta-base-finetuned-squad | 6b69c4e16675e02dafc757785ca06411f5c72655 | 2022-02-04T09:41:38.000Z | [
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | MaggieXM | null | MaggieXM/deberta-base-finetuned-squad | 2 | null | transformers | 23,196 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: deberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-squad
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.0001
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.0 | 2 | 5.3843 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
MagmaCubes1133/DialoGPT-large-rick | b0c6dfd9609cb452d7dd8b919f2904b1935dafd0 | 2021-10-04T16:56:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MagmaCubes1133 | null | MagmaCubes1133/DialoGPT-large-rick | 2 | null | transformers | 23,197 | ---
tags:
- conversational
---
# Rick Sanchez DialoGPT Model |
Mahalakshmi/wav2vec2-large-xlsr-53-demo-colab | 630cbab60bf1e4bd68c19f84c64c94fd36d12b28 | 2022-03-24T11:53:08.000Z | [
"pytorch",
"ne",
"dataset:openslr",
"automatic-speech-recognition",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Mahalakshmi | null | Mahalakshmi/wav2vec2-large-xlsr-53-demo-colab | 2 | null | null | 23,198 | ---
language:
- ne
license: apache-2.0
tags:
- automatic-speech-recognition
- robust-speech-event
- hf-asr-leaderboard
datasets:
- openslr
model-index:
- name: wav2vec2-large-xlsr-53-tamil
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: openslr
type: openslr
args: ne
metrics:
- name: Test WER
type: wer
value: 25.02
---
# xlsr-large-53-tamil |
Mahalakshmi/wav2vec2-xls-r-300m-demo-colab | ebd6415b568196ef320db1174b67feac0229753a | 2022-02-06T13:51:42.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Mahalakshmi | null | Mahalakshmi/wav2vec2-xls-r-300m-demo-colab | 2 | null | transformers | 23,199 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9475
- eval_wer: 1.0377
- eval_runtime: 70.5646
- eval_samples_per_second: 25.239
- eval_steps_per_second: 3.16
- epoch: 21.05
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 300
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|