modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-5_female-5_s621 | 5560a12b8018150fcf1747af564597c32e335e1c | 2022-07-25T16:34:52.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-5_female-5_s621 | 1 | null | transformers | 33,400 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_gender_male-5_female-5_s621
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
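The 16 kHz requirement above matters in practice: audio captured at another rate (44.1 kHz is common) should be resampled before transcription. A minimal stdlib-only sketch is below; the tone input and the naive linear-interpolation resampler are illustrative assumptions, and real code would use a proper anti-aliasing resampler such as `scipy.signal.resample_poly` or `librosa.load(path, sr=16_000)`.

```python
import math

def resample_linear(samples, sr_in, sr_out):
    """Naive linear-interpolation resampler (sketch only; a real
    pipeline should use a polyphase filter to avoid aliasing)."""
    n_out = round(len(samples) * sr_out / sr_in)
    out = []
    for i in range(n_out):
        pos = i * sr_in / sr_out
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Illustrative input: one second of a 440 Hz tone at 44.1 kHz.
sr_in, sr_out = 44_100, 16_000
tone = [math.sin(2 * math.pi * 440.0 * i / sr_in) for i in range(sr_in)]
audio_16k = resample_linear(tone, sr_in, sr_out)
print(len(audio_16k))  # one second at 16 kHz -> 16000 samples

# The 16 kHz array can then be fed to an ASR pipeline, e.g. (assumed usage):
#   from transformers import pipeline
#   asr = pipeline("automatic-speech-recognition",
#                  model="jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-5_female-5_s621")
#   text = asr(audio_16k)["text"]
```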
|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-5_female-5_s73 | e90dcea93b5e8461bad59b54c78c3c1225c7029d | 2022-07-25T16:39:30.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-5_female-5_s73 | 1 | null | transformers | 33,401 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_gender_male-5_female-5_s73
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-0_female-10_s727 | defa3687a30074b0a29c17cd38e7deb6bb95c6d5 | 2022-07-25T16:44:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-0_female-10_s727 | 1 | null | transformers | 33,402 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_gender_male-0_female-10_s727
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-0_female-10_s834 | 61612858f87cddd5478953b55e69c13bd0379257 | 2022-07-25T16:48:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-0_female-10_s834 | 1 | null | transformers | 33,403 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_gender_male-0_female-10_s834
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-0_female-10_s895 | bdacad97de611b27cbab011bf873c1a375914270 | 2022-07-25T16:54:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-0_female-10_s895 | 1 | null | transformers | 33,404 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_gender_male-0_female-10_s895
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-10_female-0_s287 | bf6882784927ccfa455f4fc0f7d6ab127061f4cb | 2022-07-25T16:59:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-10_female-0_s287 | 1 | null | transformers | 33,405 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_gender_male-10_female-0_s287
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-10_female-0_s682 | 1091641ad16638cc8f55510dab0e8d95976b3887 | 2022-07-25T17:03:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-10_female-0_s682 | 1 | null | transformers | 33,406 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_gender_male-10_female-0_s682
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-10_female-0_s770 | 173c509f306f118949d766188a029af1a950ac65 | 2022-07-25T17:08:26.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-10_female-0_s770 | 1 | null | transformers | 33,407 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_gender_male-10_female-0_s770
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-2_female-8_s201 | db90760c563c8ee9f5b9e896c9f28b7acf89f6bf | 2022-07-25T17:13:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-2_female-8_s201 | 1 | null | transformers | 33,408 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_gender_male-2_female-8_s201
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-2_female-8_s303 | bfeb624426f79d4ff92364ec454b51258c782b18 | 2022-07-25T17:17:54.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-2_female-8_s303 | 1 | null | transformers | 33,409 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_gender_male-2_female-8_s303
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-2_female-8_s911 | 57fbab6bc1411622af6e16aa1a0a61d07127e3a9 | 2022-07-25T17:22:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-2_female-8_s911 | 1 | null | transformers | 33,410 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_gender_male-2_female-8_s911
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-8_female-2_s26 | 57195b800d8ddbee91b9c07df9b9530ae796065c | 2022-07-25T17:27:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-8_female-2_s26 | 1 | null | transformers | 33,411 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_gender_male-8_female-2_s26
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-8_female-2_s322 | 04b5c8bbba0b57e8bf056b5571b2eadf53249568 | 2022-07-25T17:32:14.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-8_female-2_s322 | 1 | null | transformers | 33,412 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_gender_male-8_female-2_s322
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-8_female-2_s570 | 5cabdd2b7124839f2b93c8840a3eb30e340343c1 | 2022-07-25T17:37:00.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-8_female-2_s570 | 1 | null | transformers | 33,413 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_xls-r_gender_male-8_female-2_s570
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-5_nortepeninsular-5_s240 | fa52a3c1984ac8ed2536a8856d29be371d9ecc51 | 2022-07-25T17:42:12.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-5_nortepeninsular-5_s240 | 1 | null | transformers | 33,414 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-5_nortepeninsular-5_s240
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
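Every card in this dump starts with a YAML front-matter block (`--- ... ---`) carrying the language, license, tags, and datasets fields shown above. A stdlib-only sketch of splitting that metadata from the Markdown body follows; the trimmed sample card is an assumption, and real code would use a YAML parser such as PyYAML instead of this minimal line scanner.

```python
def split_front_matter(card: str):
    """Split a Hugging Face-style model card into (metadata, body).
    Handles only the flat key/list shapes used in these cards."""
    if not card.startswith("---"):
        return {}, card
    _, fm, body = card.split("---", 2)
    meta, key = {}, None
    for line in fm.splitlines():
        if not line.strip():
            continue
        if line.startswith("- ") and key:
            meta[key].append(line[2:].strip())       # list item under last key
        elif ":" in line:
            key, _, value = line.partition(":")
            key = key.strip()
            meta[key] = [value.strip()] if value.strip() else []
    return meta, body

# Trimmed example card (illustrative, mirroring the front matter above).
card = """---
language:
- es
license: apache-2.0
---
# example-model
"""
meta, body = split_front_matter(card)
print(meta["license"])   # ['apache-2.0']
print(meta["language"])  # ['es']
```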
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-5_nortepeninsular-5_s362 | e9303cc2416051c1ea3484077649dc5952d2bcb3 | 2022-07-25T17:47:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-5_nortepeninsular-5_s362 | 1 | null | transformers | 33,415 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-5_nortepeninsular-5_s362
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-5_nortepeninsular-5_s463 | 3bcce7028c49b8a56228090b6ae785292cf9bb10 | 2022-07-25T17:52:19.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-5_nortepeninsular-5_s463 | 1 | null | transformers | 33,416 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-5_nortepeninsular-5_s463
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-0_nortepeninsular-10_s157 | b11e90494c2141925aa1a61fab0d242affc7ae88 | 2022-07-25T17:57:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-0_nortepeninsular-10_s157 | 1 | null | transformers | 33,417 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-0_nortepeninsular-10_s157
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-0_nortepeninsular-10_s265 | 100573155c5e2750b1db793149fbd3d79a26db24 | 2022-07-25T18:02:01.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-0_nortepeninsular-10_s265 | 1 | null | transformers | 33,418 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-0_nortepeninsular-10_s265
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-0_nortepeninsular-10_s888 | 2079d073fa27971d665d88ab9488f494da5e2cb9 | 2022-07-25T18:07:03.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-0_nortepeninsular-10_s888 | 1 | null | transformers | 33,419 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-0_nortepeninsular-10_s888
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s61 | e52c511d85cd6aa167ecfb2718966a24c558b443 | 2022-07-25T18:12:28.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s61 | 1 | null | transformers | 33,420 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s61
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s632 | 1ea5945ebfa7c68ee4200dec275ec3683fc3ea30 | 2022-07-25T18:17:53.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s632 | 1 | null | transformers | 33,421 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s632
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s885 | 5aa5f342a9d12ef10a2bfd8263ebea31bd3a77ab | 2022-07-25T18:22:30.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s885 | 1 | null | transformers | 33,422 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s885
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-2_nortepeninsular-8_s443 | 3d70df3634442c6f2bec88fb09fa96e718bbba7d | 2022-07-25T18:27:20.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-2_nortepeninsular-8_s443 | 1 | null | transformers | 33,423 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-2_nortepeninsular-8_s443
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-2_nortepeninsular-8_s598 | 64ad70b8064cf1846474c01c144e5903f7d0e58e | 2022-07-25T18:32:09.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-2_nortepeninsular-8_s598 | 1 | null | transformers | 33,424 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-2_nortepeninsular-8_s598
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
naem1023/kcelectra-phrase-clause-classification-aug-personal | 773f5d4f0852c94f9679911a69c4416715a203bf | 2022-07-25T23:55:45.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | naem1023 | null | naem1023/kcelectra-phrase-clause-classification-aug-personal | 1 | null | transformers | 33,425 | ---
license: apache-2.0
---
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-2_nortepeninsular-8_s82 | 78e4d42aabcefeda26db034f80ceab99e65fdbcd | 2022-07-25T18:38:39.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-2_nortepeninsular-8_s82 | 1 | null | transformers | 33,426 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-2_nortepeninsular-8_s82
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-8_nortepeninsular-2_s187 | e77834f85bbff179328b1966814a0fdd05d256e9 | 2022-07-25T18:43:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-8_nortepeninsular-2_s187 | 1 | null | transformers | 33,427 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-8_nortepeninsular-2_s187
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-8_nortepeninsular-2_s507 | 26067d05227b68830ba44a8b2495319f5cbeb2b3 | 2022-07-25T18:48:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-8_nortepeninsular-2_s507 | 1 | null | transformers | 33,428 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-8_nortepeninsular-2_s507
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-8_nortepeninsular-2_s571 | 379cd208ae1a270f8cc01e62ff5919b294ddcdbc | 2022-07-25T18:53:20.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-8_nortepeninsular-2_s571 | 1 | null | transformers | 33,429 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-8_nortepeninsular-2_s571
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
ai4bharat/indicwav2vec_v1_gujarati | ce9630d2f0aa34940983515b28787d918470d130 | 2022-07-25T19:06:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | ai4bharat | null | ai4bharat/indicwav2vec_v1_gujarati | 1 | null | transformers | 33,430 | Entry not found |
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-5_female-5_s263 | ed5e874a2254b321f206042127e039e145c8daa2 | 2022-07-25T18:57:52.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-5_female-5_s263 | 1 | null | transformers | 33,431 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-5_female-5_s263
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
ai4bharat/indicwav2vec_v1_bengali | d5fd90a4038c2d808b52ea20d68c281148158734 | 2022-07-25T19:02:42.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"license:mit"
] | automatic-speech-recognition | false | ai4bharat | null | ai4bharat/indicwav2vec_v1_bengali | 1 | null | transformers | 33,432 | ---
license: mit
---
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-5_female-5_s294 | 87da302adf21d01babc65215b1a0f5ba9d4952d4 | 2022-07-25T19:02:57.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-5_female-5_s294 | 1 | null | transformers | 33,433 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-5_female-5_s294
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-5_female-5_s932 | 72d6ecbd9389f8ba919cc50049fa466ff7c06d1d | 2022-07-25T19:07:38.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-5_female-5_s932 | 1 | null | transformers | 33,434 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-5_female-5_s932
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-0_female-10_s695 | fd98aadb05ee535a9933f5691957db14114832c1 | 2022-07-25T19:13:13.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-0_female-10_s695 | 1 | null | transformers | 33,435 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-0_female-10_s695
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-0_female-10_s951 | 469b0bc727076e139027a0795cfedb1459548fbd | 2022-07-25T19:17:48.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-0_female-10_s951 | 1 | null | transformers | 33,436 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-0_female-10_s951
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-0_female-10_s961 | 34bd2f5a6456e8fd8c617a1877a2e9abef4a5b89 | 2022-07-25T19:22:34.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-0_female-10_s961 | 1 | null | transformers | 33,437 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-0_female-10_s961
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-10_female-0_s109 | 7b0020455ef7c99303c0b8b5b90ca99b8a38466c | 2022-07-25T19:27:28.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-10_female-0_s109 | 1 | null | transformers | 33,438 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-10_female-0_s109
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-10_female-0_s530 | 0f52580e185482cdc966ca702f2e9e9a0c0224b8 | 2022-07-25T19:32:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-10_female-0_s530 | 1 | null | transformers | 33,439 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-10_female-0_s530
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-10_female-0_s840 | ee7e28ef949dea09bf8eb403b3624313a9481437 | 2022-07-25T19:36:56.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-10_female-0_s840 | 1 | null | transformers | 33,440 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-10_female-0_s840
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-2_female-8_s182 | 9b5cc8563659a4a6e5126bae658bec405aaa3727 | 2022-07-25T19:41:41.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-2_female-8_s182 | 1 | null | transformers | 33,441 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-2_female-8_s182
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-2_female-8_s772 | e6b1a06567081c21a379637220e0abcf4cc7f6ac | 2022-07-25T19:46:15.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-2_female-8_s772 | 1 | null | transformers | 33,442 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-2_female-8_s772
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-2_female-8_s786 | e1fc2149bcd0e08ec6ba947cccbc7360c43084a5 | 2022-07-25T19:51:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-2_female-8_s786 | 1 | null | transformers | 33,443 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-2_female-8_s786
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-8_female-2_s235 | 79bc6171609d496ffe0c60dc61cfec8b0486d0c3 | 2022-07-25T19:56:44.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-8_female-2_s235 | 1 | null | transformers | 33,444 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-8_female-2_s235
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-8_female-2_s287 | 42615aba37ba6ed9caed4654dcda738f43f1a9b2 | 2022-07-25T20:01:41.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-8_female-2_s287 | 1 | null | transformers | 33,445 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-8_female-2_s287
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-8_female-2_s471 | 3eda8f7e8661cf6acd92ac346dcf62aa8111cd70 | 2022-07-25T20:13:40.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-8_female-2_s471 | 1 | null | transformers | 33,446 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_gender_male-8_female-2_s471
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-5_belgium-5_s42 | c39704d7a7db04683f545cbf117feb7649f97191 | 2022-07-25T20:19:00.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-5_belgium-5_s42 | 1 | null | transformers | 33,447 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-5_belgium-5_s42
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-5_belgium-5_s452 | 6e56f02408aab18446447548a97e4435bfb93353 | 2022-07-25T20:25:57.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-5_belgium-5_s452 | 1 | null | transformers | 33,448 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-5_belgium-5_s452
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
ultra-coder54732/xlnet-prop-16-train-set | bc8fc317b4c13201b87889119ccdf07b0cf35e0f | 2022-07-25T22:48:19.000Z | [
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | ultra-coder54732 | null | ultra-coder54732/xlnet-prop-16-train-set | 1 | null | transformers | 33,449 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlnet-prop-16-train-set
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-prop-16-train-set
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cpu
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-5_belgium-5_s941 | e04eceab7d83efba60c186a946cb20432441cf21 | 2022-07-25T20:30:30.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-5_belgium-5_s941 | 1 | null | transformers | 33,450 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-5_belgium-5_s941
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s198 | 2378b0f443c7206a23eab8f7db4b2125d2026a2b | 2022-07-25T20:35:37.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s198 | 1 | null | transformers | 33,451 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s198
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s376 | 1e1398c2faef881a2c15c4ba2d817a7423728fbb | 2022-07-25T20:40:31.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s376 | 1 | null | transformers | 33,452 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s376
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s513 | ea38ae070f7947f826fed13d454263cb8ec32d82 | 2022-07-25T20:45:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s513 | 1 | null | transformers | 33,453 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s513
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-10_belgium-0_s350 | f3b536b8f44b243c4ea69d067e3b796e36430a74 | 2022-07-25T20:50:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-10_belgium-0_s350 | 1 | null | transformers | 33,454 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-10_belgium-0_s350
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-10_belgium-0_s381 | 698ebb69bd2a2762e0c6422fddef38f114159db1 | 2022-07-25T20:55:08.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-10_belgium-0_s381 | 1 | null | transformers | 33,455 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-10_belgium-0_s381
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-10_belgium-0_s673 | 002c0f61c71f027430c90cb58510766c7d7e0e25 | 2022-07-25T21:00:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-10_belgium-0_s673 | 1 | null | transformers | 33,456 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-10_belgium-0_s673
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s55 | d0d1aacaf083a3b81e96da72aff1501fa4884c10 | 2022-07-25T21:04:38.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s55 | 1 | null | transformers | 33,457 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s55
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s587 | 5c4dc8b7a6fd927949cf3fec626ecc6ba4a1f9ce | 2022-07-25T21:09:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s587 | 1 | null | transformers | 33,458 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s587
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s729 | 4b21d3fd6bef3bbdcbd16c6a3cf992520992a497 | 2022-07-25T21:14:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s729 | 1 | null | transformers | 33,459 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s729
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s368 | b2d3d4bd84712870243e3e9d0c7fc3133f928414 | 2022-07-25T21:19:12.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s368 | 1 | null | transformers | 33,460 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s368
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s458 | 9a7868b2031b3d5a2ae15e5718221f4dd23441c9 | 2022-07-25T21:23:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s458 | 1 | null | transformers | 33,461 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s458
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s543 | 614fc35cdb3f81eda935aa128cf1017697de7ca2 | 2022-07-25T21:28:45.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s543 | 1 | null | transformers | 33,462 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s543
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-5_female-5_s286 | 3161a24319caa24b02e37f2cfc9f8868e760d1d2 | 2022-07-25T21:33:38.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-5_female-5_s286 | 1 | null | transformers | 33,463 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-5_female-5_s286
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-5_female-5_s779 | b2846d0b4fecf0b61e29ed28ae42b1bee2033a4c | 2022-07-25T21:38:29.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-5_female-5_s779 | 1 | null | transformers | 33,464 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-5_female-5_s779
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-5_female-5_s916 | 4bb68c6bf15df9c915d4ab6a783e0a4ecef322c1 | 2022-07-25T21:42:58.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-5_female-5_s916 | 1 | null | transformers | 33,465 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-5_female-5_s916
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-0_female-10_s412 | 0dbbb47feb87457190c8271ee1b73e9a0df8d543 | 2022-07-25T21:47:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-0_female-10_s412 | 1 | null | transformers | 33,466 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-0_female-10_s412
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-0_female-10_s534 | 5511eeb288565736a9d52cfd02acb837675c1c81 | 2022-07-25T21:52:41.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-0_female-10_s534 | 1 | null | transformers | 33,467 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-0_female-10_s534
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-0_female-10_s895 | 94896de9273c5fbb2b1558bd574fdebade3c7d5e | 2022-07-25T21:57:37.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-0_female-10_s895 | 1 | null | transformers | 33,468 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-0_female-10_s895
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-10_female-0_s559 | 22285ef79188e30f63c05faf4e3ecad5ee7f3e93 | 2022-07-25T22:02:28.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-10_female-0_s559 | 1 | null | transformers | 33,469 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-10_female-0_s559
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-10_female-0_s577 | bd7f5b1ab9ae218d29f3cb13c34994faf1069f72 | 2022-07-25T22:07:24.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-10_female-0_s577 | 1 | null | transformers | 33,470 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-10_female-0_s577
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-10_female-0_s825 | 23d3ae5175efc492158d1ade831256eb742975d7 | 2022-07-25T22:12:15.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-10_female-0_s825 | 1 | null | transformers | 33,471 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-10_female-0_s825
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-2_female-8_s295 | 9547e0c9b0528a1bf71b3770dbcb0445c527524c | 2022-07-25T22:17:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-2_female-8_s295 | 1 | null | transformers | 33,472 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-2_female-8_s295
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-2_female-8_s728 | 32f77e573660afd049724a6a6f5a6b93b698b672 | 2022-07-25T22:21:52.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-2_female-8_s728 | 1 | null | transformers | 33,473 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-2_female-8_s728
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-2_female-8_s886 | cda6ab322349b88e642d44d95c22a9c43f5a5951 | 2022-07-25T22:26:53.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-2_female-8_s886 | 1 | null | transformers | 33,474 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-2_female-8_s886
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-8_female-2_s277 | ce27ed71caea8e18e8cc203f2ff905370a0a92eb | 2022-07-25T22:31:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-8_female-2_s277 | 1 | null | transformers | 33,475 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-8_female-2_s277
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-8_female-2_s659 | 9c5a40a07798896d4ef436a94159b614f9792287 | 2022-07-25T22:36:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-8_female-2_s659 | 1 | null | transformers | 33,476 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-8_female-2_s659
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
nakamura196/trocr-small-ndl | 8dd0d2c4616e1212cedff5c147bfee6529b04914 | 2022-07-25T23:28:24.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
] | null | false | nakamura196 | null | nakamura196/trocr-small-ndl | 1 | null | transformers | 33,477 | Entry not found |
fujiki/t5-efficient-xl-ja_train4 | eac4f1d78afe5ed1e7e297ef3cac964bdf96368b | 2022-07-26T15:01:06.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | fujiki | null | fujiki/t5-efficient-xl-ja_train4 | 1 | null | transformers | 33,478 | Entry not found |
NimaBoscarino/July25Test | a26824db8aa951e7535116b5ed54e21e9d4ad9e7 | 2022-07-26T03:00:21.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | NimaBoscarino | null | NimaBoscarino/July25Test | 1 | null | sentence-transformers | 33,479 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# NimaBoscarino/July25Test
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('NimaBoscarino/July25Test')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('NimaBoscarino/July25Test')
model = AutoModel.from_pretrained('NimaBoscarino/July25Test')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
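Sentence-similarity models like this one are typically scored by taking the cosine similarity between the pooled embeddings computed above. In practice `sentence_transformers.util.cos_sim` (or `torch.nn.functional.cosine_similarity`) does this; the operation itself is just a normalized dot product:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

For instance, `cosine_similarity(sentence_embeddings[0].tolist(), sentence_embeddings[1].tolist())` would score the two example sentences from the snippet above.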
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=NimaBoscarino/July25Test)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
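The `WarmupLinear` scheduler listed above ramps the learning-rate multiplier from 0 to 1 over `warmup_steps` (100 here) and then decays it linearly back to 0 over the remaining steps. A schematic sketch (the total step count is an illustrative assumption, not a value from this card):

```python
def warmup_linear(step, warmup_steps=100, total_steps=1000):
    """LR multiplier: ramps 0 -> 1 during warmup, then decays linearly to 0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

The base learning rate (`2e-05` above) is multiplied by this factor at every optimizer step.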
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ultra-coder54732/distilbert-prop-16-train-set | cfa8ea3e27e6644bbf9d0d731b4e429fed6ee79a | 2022-07-27T00:33:07.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ultra-coder54732 | null | ultra-coder54732/distilbert-prop-16-train-set | 1 | null | transformers | 33,480 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-prop-16-train-set
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-prop-16-train-set
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
rajat99/Fine_Tuning_XLSR_300M_testing_6_model | 6a09b609df44569f006752f4a8c12a6d9f8cfa9c | 2022-07-26T07:16:25.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | rajat99 | null | rajat99/Fine_Tuning_XLSR_300M_testing_6_model | 1 | null | transformers | 33,481 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Fine_Tuning_XLSR_300M_testing_6_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine_Tuning_XLSR_300M_testing_6_model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2263
- Wer: 1.0
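The `Wer` value above is word error rate: the word-level Levenshtein distance between hypothesis and reference, divided by the number of reference words (so 1.0 means, on average, one edit per reference word — typically a run that collapsed to empty or constant output). Evaluation normally uses the `jiwer` or `evaluate` libraries; a minimal sketch:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(1, len(ref))
```

An empty hypothesis against any non-empty reference gives exactly 1.0, matching the score reported for this run.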
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.466 | 23.53 | 400 | 3.2263 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
obokkkk/bert-base-multilingual-cased-finetuned | 70b09fad47b31fc30513699649b038cf9ac06eab | 2022-07-26T08:07:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | obokkkk | null | obokkkk/bert-base-multilingual-cased-finetuned | 1 | null | transformers | 33,482 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 36
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
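In cards like this one, `total_train_batch_size` is the per-device batch size times `gradient_accumulation_steps` (8 × 36 = 288 above): gradients from several micro-batches are accumulated before a single optimizer step. A schematic sketch of that loop (not the actual HF `Trainer` implementation — `model_step` stands in for a backward pass and returns a scalar "gradient"):

```python
def train_epoch(model_step, batches, accum_steps=36):
    """Accumulate gradients over accum_steps micro-batches per update."""
    updates = []
    grad = 0.0
    for i, batch in enumerate(batches, start=1):
        grad += model_step(batch)
        if i % accum_steps == 0:
            updates.append(grad / accum_steps)  # averaged accumulated gradient
            grad = 0.0
    return updates
```

With 72 micro-batches and `accum_steps=36`, the optimizer steps twice — the same schedule effect as a true batch of 288 on larger hardware.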
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
Frikallo/elonmusk | 06f5f90f8c42e011fcaf0e700ab92a386618154a | 2022-07-26T07:37:18.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | Frikallo | null | Frikallo/elonmusk | 1 | null | transformers | 33,483 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: elonmusk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# elonmusk
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001372
- train_batch_size: 1
- eval_batch_size: 8
- seed: 2483812281
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Kushala/wav2vec2-base-timit-demo-google-colab | 3a4e2da388d5ba74a172bee6676a5c642136a717 | 2022-07-26T10:07:09.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Kushala | null | Kushala/wav2vec2-base-timit-demo-google-colab | 1 | null | transformers | 33,484 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5195
- Wer: 0.3386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5345 | 1.0 | 500 | 2.1466 | 1.0010 |
| 0.949 | 2.01 | 1000 | 0.5687 | 0.5492 |
| 0.445 | 3.01 | 1500 | 0.4562 | 0.4717 |
| 0.2998 | 4.02 | 2000 | 0.4154 | 0.4401 |
| 0.2242 | 5.02 | 2500 | 0.3887 | 0.4034 |
| 0.1834 | 6.02 | 3000 | 0.4262 | 0.3905 |
| 0.1573 | 7.03 | 3500 | 0.4200 | 0.3927 |
| 0.1431 | 8.03 | 4000 | 0.4194 | 0.3869 |
| 0.1205 | 9.04 | 4500 | 0.4600 | 0.3912 |
| 0.1082 | 10.04 | 5000 | 0.4613 | 0.3776 |
| 0.0984 | 11.04 | 5500 | 0.4926 | 0.3860 |
| 0.0872 | 12.05 | 6000 | 0.4869 | 0.3780 |
| 0.0826 | 13.05 | 6500 | 0.5033 | 0.3690 |
| 0.0717 | 14.06 | 7000 | 0.4827 | 0.3791 |
| 0.0658 | 15.06 | 7500 | 0.4816 | 0.3650 |
| 0.0579 | 16.06 | 8000 | 0.5433 | 0.3689 |
| 0.056 | 17.07 | 8500 | 0.5513 | 0.3672 |
| 0.0579 | 18.07 | 9000 | 0.4813 | 0.3632 |
| 0.0461 | 19.08 | 9500 | 0.4846 | 0.3501 |
| 0.0431 | 20.08 | 10000 | 0.5449 | 0.3637 |
| 0.043 | 21.08 | 10500 | 0.4906 | 0.3538 |
| 0.0334 | 22.09 | 11000 | 0.5081 | 0.3477 |
| 0.0322 | 23.09 | 11500 | 0.5184 | 0.3439 |
| 0.0316 | 24.1 | 12000 | 0.5412 | 0.3450 |
| 0.0262 | 25.1 | 12500 | 0.5113 | 0.3425 |
| 0.0267 | 26.1 | 13000 | 0.4888 | 0.3414 |
| 0.0258 | 27.11 | 13500 | 0.5071 | 0.3371 |
| 0.0226 | 28.11 | 14000 | 0.5311 | 0.3380 |
| 0.0233 | 29.12 | 14500 | 0.5195 | 0.3386 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
FAICAM/wav2vec2-base-timit-demo-google-colab | c6737330587df605e8164c635f2bc1e6d6aa040f | 2022-07-26T11:07:42.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | FAICAM | null | FAICAM/wav2vec2-base-timit-demo-google-colab | 1 | null | transformers | 33,485 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5725
- Wer: 0.3413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.508 | 1.0 | 500 | 1.9315 | 0.9962 |
| 0.8832 | 2.01 | 1000 | 0.5552 | 0.5191 |
| 0.4381 | 3.01 | 1500 | 0.4451 | 0.4574 |
| 0.2983 | 4.02 | 2000 | 0.4096 | 0.4265 |
| 0.2232 | 5.02 | 2500 | 0.4280 | 0.4083 |
| 0.1811 | 6.02 | 3000 | 0.4307 | 0.3942 |
| 0.1548 | 7.03 | 3500 | 0.4453 | 0.3889 |
| 0.1367 | 8.03 | 4000 | 0.5043 | 0.4138 |
| 0.1238 | 9.04 | 4500 | 0.4530 | 0.3807 |
| 0.1072 | 10.04 | 5000 | 0.4435 | 0.3660 |
| 0.0978 | 11.04 | 5500 | 0.4739 | 0.3676 |
| 0.0887 | 12.05 | 6000 | 0.5052 | 0.3761 |
| 0.0813 | 13.05 | 6500 | 0.5098 | 0.3619 |
| 0.0741 | 14.06 | 7000 | 0.4666 | 0.3602 |
| 0.0654 | 15.06 | 7500 | 0.5642 | 0.3657 |
| 0.0589 | 16.06 | 8000 | 0.5489 | 0.3638 |
| 0.0559 | 17.07 | 8500 | 0.5260 | 0.3598 |
| 0.0562 | 18.07 | 9000 | 0.5250 | 0.3640 |
| 0.0448 | 19.08 | 9500 | 0.5215 | 0.3569 |
| 0.0436 | 20.08 | 10000 | 0.5117 | 0.3560 |
| 0.0412 | 21.08 | 10500 | 0.4910 | 0.3570 |
| 0.0336 | 22.09 | 11000 | 0.5221 | 0.3524 |
| 0.031 | 23.09 | 11500 | 0.5278 | 0.3480 |
| 0.0339 | 24.1 | 12000 | 0.5353 | 0.3486 |
| 0.0278 | 25.1 | 12500 | 0.5342 | 0.3462 |
| 0.0251 | 26.1 | 13000 | 0.5399 | 0.3439 |
| 0.0242 | 27.11 | 13500 | 0.5626 | 0.3431 |
| 0.0214 | 28.11 | 14000 | 0.5749 | 0.3408 |
| 0.0216 | 29.12 | 14500 | 0.5725 | 0.3413 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
nielsr/donut-proto | 8b4bfcdc09728efa70723978563979ba9a708e5a | 2022-07-26T09:33:41.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
] | null | false | nielsr | null | nielsr/donut-proto | 1 | null | transformers | 33,486 | Entry not found |
WENGSYX/Dagnosis_Chinese_CPT | 112f102f845c2dfccf16808f018796db636243e7 | 2022-07-26T09:54:15.000Z | [
"pytorch",
"bart",
"feature-extraction",
"transformers",
"license:mit"
] | feature-extraction | false | WENGSYX | null | WENGSYX/Dagnosis_Chinese_CPT | 1 | null | transformers | 33,487 | ---
license: mit
---
|
WENGSYX/Medical_Report_Chinese_CPT | 78223d8f1ab3876b9b8720d97f03809385bb5b60 | 2022-07-26T16:42:44.000Z | [
"pytorch",
"bart",
"feature-extraction",
"transformers",
"license:mit"
] | feature-extraction | false | WENGSYX | null | WENGSYX/Medical_Report_Chinese_CPT | 1 | null | transformers | 33,488 | ---
license: mit
---
|
NbAiLab/wav2vec2-1b-npsc-nst-bokmaal-repaired | 10d29b8aaf02cb064e23d30c001e8ba8cc59ad75 | 2022-07-30T09:26:20.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | NbAiLab | null | NbAiLab/wav2vec2-1b-npsc-nst-bokmaal-repaired | 1 | null | transformers | 33,489 | Entry not found |
SummerChiam/rust_image_classification_8 | 9379fd7ed72261c23619d651fd79079a12994fa8 | 2022-07-26T13:28:11.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | SummerChiam | null | SummerChiam/rust_image_classification_8 | 1 | null | transformers | 33,490 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rust_image_classification_3
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9594936966896057
---
# rust_image_classification_3
Autogenerated by HuggingPics 🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### nonrust

#### rust
 |
domenicrosati/deberta-v3-large-finetuned-synthetic-translated-only | 55b85ce075cb135c74621bccab5ba930dee6b9cb | 2022-07-26T22:34:44.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | domenicrosati | null | domenicrosati/deberta-v3-large-finetuned-synthetic-translated-only | 1 | null | transformers | 33,491 | ---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- f1
- precision
- recall
model-index:
- name: deberta-v3-large-finetuned-synthetic-translated-only
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-finetuned-synthetic-translated-only
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- F1: 0.9961
- Precision: 1.0
- Recall: 0.9922
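F1 is the harmonic mean of precision and recall, so the three numbers above are mutually consistent. A quick check of the formula (not the model's evaluation code):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(1.0, 0.9922), 4))  # 0.9961, matching the reported F1
```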
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:---------:|:------:|
| 0.0065 | 1.0 | 10158 | 0.0022 | 0.9887 | 0.9962 | 0.9813 |
| 0.0006 | 2.0 | 20316 | 0.0030 | 0.9887 | 0.9962 | 0.9813 |
| 0.0008 | 3.0 | 30474 | 0.0029 | 0.9906 | 0.9962 | 0.9851 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/tojibaceo-tojibawhiteroom | 68cce5380f7c55da652bcdfb65b957961e3d6261 | 2022-07-26T15:55:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/tojibaceo-tojibawhiteroom | 1 | null | transformers | 33,492 | ---
language: en
thumbnail: http://www.huggingtweets.com/tojibaceo-tojibawhiteroom/1658850915163/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1508824472924659725/267f4Lkm_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509337156787003394/WjOdf_-m_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tojiba CPU Corp (π,π) & Tojiba White Room (T__T).1</div>
<div style="text-align: center; font-size: 14px;">@tojibaceo-tojibawhiteroom</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tojiba CPU Corp (π,π) & Tojiba White Room (T__T).1.
| Data | Tojiba CPU Corp (π,π) | Tojiba White Room (T__T).1 |
| --- | --- | --- |
| Tweets downloaded | 1489 | 624 |
| Retweets | 723 | 0 |
| Short tweets | 259 | 80 |
| Tweets kept | 507 | 544 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jak2xfb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tojibaceo-tojibawhiteroom's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/t112mifn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/t112mifn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tojibaceo-tojibawhiteroom')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
schnell/bert-small-juman-bpe | 5ab23c6caed1309f224140984fc3422981c4ed4a | 2022-07-29T15:15:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | schnell | null | schnell/bert-small-juman-bpe | 1 | null | transformers | 33,493 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-small-juman-bpe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-juman-bpe
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.6317
- Loss: 1.7829
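For a masked language model, the cross-entropy loss above translates directly into a (pseudo-)perplexity via `exp(loss)`, which is often easier to interpret:

```python
import math

def perplexity(cross_entropy_loss: float) -> float:
    """Perplexity = exp(mean cross-entropy loss in nats)."""
    return math.exp(cross_entropy_loss)

print(round(perplexity(1.7829), 2))  # ≈ 5.95 for the final evaluation loss above
```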
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- total_train_batch_size: 768
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 14
- mixed_precision_training: Native AMP
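The `total_train_batch_size` above is the effective batch: per-device batch size × number of devices × gradient-accumulation steps (accumulation is taken as 1 here, since none is listed):

```python
def effective_batch_size(per_device: int, num_devices: int, grad_accum: int = 1) -> int:
    """Effective optimizer batch size under data parallelism."""
    return per_device * num_devices * grad_accum

print(effective_batch_size(256, 3))  # 768, matching total_train_batch_size above
```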
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:------:|:--------:|:---------------:|
| 2.3892 | 1.0 | 69472 | 0.5637 | 2.2498 |
| 2.2219 | 2.0 | 138944 | 0.5873 | 2.0785 |
| 2.1453 | 3.0 | 208416 | 0.5984 | 2.0019 |
| 2.1 | 4.0 | 277888 | 0.6059 | 1.9531 |
| 2.068 | 5.0 | 347360 | 0.6106 | 1.9169 |
| 2.0405 | 6.0 | 416832 | 0.6146 | 1.8921 |
| 2.0174 | 7.0 | 486304 | 0.6175 | 1.8711 |
| 2.0002 | 8.0 | 555776 | 0.6205 | 1.8527 |
| 1.9838 | 9.0 | 625248 | 0.6225 | 1.8381 |
| 1.9691 | 10.0 | 694720 | 0.6248 | 1.8239 |
| 1.9551 | 11.0 | 764192 | 0.6265 | 1.8125 |
| 1.9406 | 12.0 | 833664 | 0.6288 | 1.8002 |
| 1.9293 | 13.0 | 903136 | 0.6310 | 1.7871 |
| 1.9247 | 14.0 | 972608 | 0.6317 | 1.7829 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.12.0+cu116
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/jockforbrains | c1ab58a8bdda06783f8a8e35dbe71b0f5eaf218d | 2022-07-26T16:25:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/jockforbrains | 1 | null | transformers | 33,494 | ---
language: en
thumbnail: http://www.huggingtweets.com/jockforbrains/1658852709222/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1492447040193900546/LtTdjrY7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">JockForBrains (☣️ May contain morphs)</div>
<div style="text-align: center; font-size: 14px;">@jockforbrains</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from JockForBrains (☣️ May contain morphs).
| Data | JockForBrains (☣️ May contain morphs) |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 211 |
| Short tweets | 467 |
| Tweets kept | 2560 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jsjyesm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jockforbrains's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zi3c9sw9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zi3c9sw9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jockforbrains')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
carblacac/distilbert-base-uncased-finetuned-emotion | 33838e5f0a062c700303e104afcb24b0975e568e | 2022-07-27T18:28:54.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | carblacac | null | carblacac/distilbert-base-uncased-finetuned-emotion | 1 | null | transformers | 33,495 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9214820157277583
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2197
- Accuracy: 0.9215
- F1: 0.9215
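The fine-tuned head emits one logit per emotion class; a prediction is the argmax of the softmax over those logits. A minimal sketch with made-up logits — the label order follows the `emotion` dataset (an assumption worth verifying against the model's `id2label`), and the logit values are purely illustrative:

```python
import math

LABELS = ["sadness", "joy", "love", "anger", "fear", "surprise"]  # assumed label order

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [-1.2, 4.1, 0.3, -0.5, -2.0, -1.7]  # hypothetical model output for one input
probs = softmax(logits)
pred = LABELS[probs.index(max(probs))]
print(pred, round(max(probs), 3))  # "joy" with high confidence
```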
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8026 | 1.0 | 250 | 0.3133 | 0.905 | 0.9022 |
| 0.2468 | 2.0 | 500 | 0.2197 | 0.9215 | 0.9215 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/bearfoothunter1-jockforbrains-recentrift | e24a77838a031bd29259dd1f4f9779fe42e1e086 | 2022-07-26T16:57:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/bearfoothunter1-jockforbrains-recentrift | 1 | null | transformers | 33,496 | ---
language: en
thumbnail: http://www.huggingtweets.com/bearfoothunter1-jockforbrains-recentrift/1658853737112/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1492447040193900546/LtTdjrY7_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1550974872502796289/7i5bgWY2_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1015932356937560069/EJSUv5Uk_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">JockForBrains (☣️ May contain morphs) & Demonic Executioner & the real bearfoothunter 🇺🇦🇺🇦🇺🇦</div>
<div style="text-align: center; font-size: 14px;">@bearfoothunter1-jockforbrains-recentrift</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from JockForBrains (☣️ May contain morphs) & Demonic Executioner & the real bearfoothunter 🇺🇦🇺🇦🇺🇦.
| Data | JockForBrains (☣️ May contain morphs) | Demonic Executioner | the real bearfoothunter 🇺🇦🇺🇦🇺🇦 |
| --- | --- | --- | --- |
| Tweets downloaded | 3238 | 2261 | 3248 |
| Retweets | 211 | 177 | 64 |
| Short tweets | 467 | 104 | 746 |
| Tweets kept | 2560 | 1980 | 2438 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2susnztb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bearfoothunter1-jockforbrains-recentrift's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18fa8jhh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18fa8jhh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bearfoothunter1-jockforbrains-recentrift')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/surlaroute | ce5ea0fe4a6e21ba60d6a157bab803841d47714a | 2022-07-26T16:42:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/surlaroute | 1 | null | transformers | 33,497 | ---
language: en
thumbnail: http://www.huggingtweets.com/surlaroute/1658853747255/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1305228695444090882/aU_Vlnvg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Melody π§π»ββοΈ</div>
<div style="text-align: center; font-size: 14px;">@surlaroute</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Melody π§π»ββοΈ.
| Data | Melody π§π»ββοΈ |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 114 |
| Short tweets | 351 |
| Tweets kept | 2780 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/k1hti8dn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @surlaroute's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/cffupuun) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/cffupuun/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/surlaroute')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
phjhk/hklegal-xlm-r-large-t | 4e4df9796c30d2e572c1c2104030396d1b716c0f | 2022-07-29T14:50:13.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:1911.02116",
"transformers",
"autotrain_compatible"
] | fill-mask | false | phjhk | null | phjhk/hklegal-xlm-r-large-t | 1 | null | transformers | 33,498 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---
# Model Description
The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco GuzmΓ‘n, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) fine-tuned with the [conll2003](https://huggingface.co/datasets/conll2003) dataset in English.
- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multi-lingual language model
- **Language(s) (NLP) or Countries (images):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for full list; model is fine-tuned on a dataset in English
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large)
Hong Kong Legal Information Institute [HKLII](https://www.hklii.hk/eng/) is a free, independent, non-profit document database providing the public with legal information relating to Hong Kong. We fine-tune XLM-RoBERTa on the HKLII datasets, which contain Hong Kong legal documents.
# Uses
The model is a pretrained, fine-tuned language model. It can be used for document classification and Named Entity Recognition (NER), especially in the legal domain.
```python
>>> from transformers import pipeline,AutoTokenizer,AutoModelForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("hklegal-xlm-r-large-t")
>>> model = AutoModelForTokenClassification.from_pretrained("hklegal-xlm-r-large-t")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Alya told Jasmine that Andrew could pay with cash.")
```
# Citation
**BibTeX:**
```bibtex
@article{conneau2019unsupervised,
title={Unsupervised Cross-lingual Representation Learning at Scale},
author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1911.02116},
year={2019}
}
``` |
huggingtweets/hiddenlure | 718830623afe76235171480170cd4dc345a863e2 | 2022-07-26T17:17:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/hiddenlure | 1 | null | transformers | 33,499 | ---
language: en
thumbnail: http://www.huggingtweets.com/hiddenlure/1658855843772/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1541995552505831424/K1gtBapk_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Hidden</div>
<div style="text-align: center; font-size: 14px;">@hiddenlure</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Hidden.
| Data | Hidden |
| --- | --- |
| Tweets downloaded | 376 |
| Retweets | 96 |
| Short tweets | 24 |
| Tweets kept | 256 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/174g7le6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hiddenlure's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3oy7jn9e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3oy7jn9e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hiddenlure')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|