| Column | Type | Range / classes |
|:--|:--|:--|
| modelId | string | lengths 4–112 |
| sha | string | length 40 |
| lastModified | string | length 24 |
| tags | sequence | |
| pipeline_tag | string | 29 classes |
| private | bool | 1 class |
| author | string | lengths 2–38 |
| config | null | |
| id | string | lengths 4–112 |
| downloads | float64 | 0–36.8M |
| likes | float64 | 0–712 |
| library_name | string | 17 classes |
| __index_level_0__ | int64 | 0–38.5k |
| readme | string | lengths 0–186k |
d4niel92/distilbert-base-uncased-finetuned-emotion
8ba8777a06b417e15dbb36f7ab757b678066a333
2022-07-29T09:31:15.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
d4niel92
null
d4niel92/distilbert-base-uncased-finetuned-emotion
1
null
transformers
33,200
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.9238434600787808 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2259 - Accuracy: 0.924 - F1: 0.9238 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8417 | 1.0 | 250 | 0.3291 | 0.9005 | 0.8962 | | 0.2551 | 2.0 | 500 | 0.2259 | 0.924 | 0.9238 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
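A minimal usage sketch for this emotion classifier (not part of the original card), assuming the standard `transformers` text-classification pipeline; the example sentence is illustrative only:

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="d4niel92/distilbert-base-uncased-finetuned-emotion",
)

# Returns the predicted emotion label and its confidence score.
print(classifier("I can't wait to see the results of this experiment!"))
```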
Konstantine4096/bart-pizza-50K
28c4265d6112fbabd76d4ec6fa951310bee439d9
2022-07-22T20:03:21.000Z
[ "pytorch", "tensorboard", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Konstantine4096
null
Konstantine4096/bart-pizza-50K
1
null
transformers
33,201
Entry not found
Lvxue/distilled_test_0ddd
d380ff987824377e8cd62c08a3d03f45d716a37c
2022-07-28T07:04:50.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Lvxue
null
Lvxue/distilled_test_0ddd
1
null
transformers
33,202
lalala
rajistics/auditor-test
28533934ef99bfff13771fc67068dcf14184c0ea
2022-07-25T13:21:49.000Z
[ "pytorch", "bert", "text-classification", "transformers", "generated_from_trainer", "model-index" ]
text-classification
false
rajistics
null
rajistics/auditor-test
1
null
transformers
33,203
--- tags: - generated_from_trainer model-index: - name: auditor-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # auditor-test This model is a fine-tuned version of [demo-org/finbert-pretrain](https://huggingface.co/demo-org/finbert-pretrain) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
tmgondal/bert-finetuned-squad
c79cfe2e17552103523f24f263c60e3bd8e91332
2022-07-22T21:13:25.000Z
[ "pytorch", "tensorboard", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
tmgondal
null
tmgondal/bert-finetuned-squad
1
null
transformers
33,204
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
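A minimal usage sketch for this SQuAD-style extractive QA checkpoint (not in the original card), assuming the standard `transformers` question-answering pipeline; the question and context below are made up for illustration:

```python
from transformers import pipeline

# Extractive question answering with the fine-tuned BERT checkpoint.
qa = pipeline("question-answering", model="tmgondal/bert-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the squad dataset.",
)
print(result["answer"], result["score"])
```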
huggingtweets/deepleffen-falco-tsm_leffen
f8dc49051ea6dcb83c24341c13bc41ba24479010
2022-07-22T19:10:49.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/deepleffen-falco-tsm_leffen
1
null
transformers
33,205
--- language: en thumbnail: http://www.huggingtweets.com/deepleffen-falco-tsm_leffen/1658517045179/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1241879678455078914/e2EdZIrr_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1527824997388935168/-Ohf5n-I_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1547974425718300675/wvQuPBGR_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Deep Leffen Bot & nick & TSM FTX Leffen</div> <div style="text-align: center; font-size: 14px;">@deepleffen-falco-tsm_leffen</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Deep Leffen Bot & nick & TSM FTX Leffen. | Data | Deep Leffen Bot | nick | TSM FTX Leffen | | --- | --- | --- | --- | | Tweets downloaded | 591 | 3249 | 3221 | | Retweets | 14 | 180 | 285 | | Short tweets | 27 | 582 | 282 | | Tweets kept | 550 | 2487 | 2654 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/13ch35ln/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen-falco-tsm_leffen's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1pw6etfi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1pw6etfi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/deepleffen-falco-tsm_leffen') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
sudo-s/robot2
16460965298d72b44ac2c82d1e892c30aad8f86f
2022-07-23T00:49:29.000Z
[ "pytorch", "tensorboard", "vit", "image-classification", "transformers" ]
image-classification
false
sudo-s
null
sudo-s/robot2
1
null
transformers
33,206
Entry not found
szj/distilbert-base-uncased-finetuned-cola
47eb283a2b3de7cd216282b44ae3198e540a4ab1
2022-07-26T08:27:47.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
szj
null
szj/distilbert-base-uncased-finetuned-cola
1
null
transformers
33,207
Entry not found
sudo-s/robot22
d2f10a8ae7e97fe5c64c65865c4158fa5e40cdfe
2022-07-23T10:42:11.000Z
[ "pytorch", "tensorboard", "vit", "image-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
image-classification
false
sudo-s
null
sudo-s/robot22
1
null
transformers
33,208
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: robot22 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robot22 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem6 dataset. It achieves the following results on the evaluation set: - Loss: 2.5674 - Accuracy: 0.5077 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.9154 | 0.23 | 100 | 3.8417 | 0.2213 | | 3.1764 | 0.47 | 200 | 3.2243 | 0.3201 | | 2.8186 | 0.7 | 300 | 2.7973 | 0.4284 | | 2.632 | 0.93 | 400 | 2.5674 | 0.5077 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.3.2 - Tokenizers 0.12.1
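A minimal inference sketch for this ViT classifier (not in the original card), assuming the `transformers` image-classification pipeline; `specimen.jpg` is a hypothetical local image path:

```python
from transformers import pipeline

# Image classification with the fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="sudo-s/robot22")

# Accepts a local path, URL, or PIL image; returns the top labels with scores.
print(classifier("specimen.jpg"))
```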
sudo-s/modeversion1_m6_e4
6c185c6ed098881254f6620a9a1814a7b67a75cf
2022-07-24T05:08:50.000Z
[ "pytorch", "tensorboard", "vit", "image-classification", "transformers" ]
image-classification
false
sudo-s
null
sudo-s/modeversion1_m6_e4
1
null
transformers
33,209
Entry not found
Siyong/M_RN
a039c579c3aef47812bb2a16f6eda68d291368ef
2022-07-23T14:00:34.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Siyong
null
Siyong/M_RN
1
null
transformers
33,210
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: MilladRN results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MilladRN This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4355 - Wer: 0.4907 - Cer: 0.2802 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 4000 - num_epochs: 750 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:------:|:-----:|:---------------:|:------:|:------:| | 3.3347 | 33.9 | 2000 | 2.2561 | 0.9888 | 0.6087 | | 1.3337 | 67.8 | 4000 | 1.8137 | 0.6877 | 0.3407 | | 0.6504 | 101.69 | 6000 | 2.0718 | 0.6245 | 0.3229 | | 0.404 | 135.59 | 8000 | 2.2246 | 0.6004 | 0.3221 | | 0.2877 | 169.49 | 10000 | 2.2624 | 0.5836 | 0.3107 | | 0.2149 | 203.39 | 12000 | 2.3788 | 0.5279 | 0.2802 | | 0.1693 | 237.29 | 14000 | 1.8928 | 0.5502 | 0.2937 | | 0.1383 | 271.19 | 16000 | 2.7520 | 0.5725 | 0.3103 | | 0.1169 | 305.08 | 18000 | 2.2552 | 0.5446 | 0.2968 | | 0.1011 | 338.98 | 20000 | 2.6794 | 0.5725 | 0.3119 | | 0.0996 | 372.88 | 22000 | 2.4704 | 0.5595 | 0.3142 | | 0.0665 | 406.78 | 24000 | 2.9073 | 0.5836 | 0.3194 | | 0.0538 | 440.68 | 26000 | 3.1357 | 0.5632 | 0.3213 | | 0.0538 | 474.58 | 28000 | 2.5639 | 0.5613 | 0.3091 | | 0.0493 | 508.47 | 30000 | 3.3801 | 0.5613 | 0.3119 | | 0.0451 | 542.37 | 32000 | 3.5469 | 0.5428 | 0.3158 | | 0.0307 | 576.27 | 34000 | 4.2243 | 0.5390 | 0.3126 | | 0.0301 | 610.17 | 36000 | 3.6666 | 0.5297 | 0.2929 | | 0.0269 | 644.07 | 38000 | 3.2164 | 0.5 | 0.2838 | | 0.0182 | 677.97 | 40000 | 3.0557 | 0.4963 | 0.2779 | | 0.0191 | 711.86 | 42000 | 3.5190 | 0.5130 | 0.2921 | | 0.0133 | 745.76 | 44000 | 3.4355 | 0.4907 | 0.2802 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
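A minimal transcription sketch for this wav2vec2 CTC checkpoint (not in the original card), using the standard `transformers` wav2vec2 classes; `sample.wav` is a hypothetical 16 kHz mono recording:

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("Siyong/M_RN")
model = Wav2Vec2ForCTC.from_pretrained("Siyong/M_RN")

# Placeholder audio file; wav2vec2-base expects 16 kHz input.
speech, sample_rate = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding to text.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```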
Aktsvigun/bart-base_abssum_wikihow_all_9478495
1a4abba61f248088b6750761d912ee873c0c6e96
2022-07-23T11:46:37.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Aktsvigun
null
Aktsvigun/bart-base_abssum_wikihow_all_9478495
1
null
transformers
33,211
Entry not found
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-5_austria-5_s3
40430559aa8c5320631cc9ce0acb4729d2a37ce4
2022-07-23T14:28:41.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-5_austria-5_s3
1
null
transformers
33,212
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-5_austria-5_s3 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
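A minimal transcription sketch for this card (and the sibling `exp_w2v2r_de_*` checkpoints below), assuming the HuggingSound `SpeechRecognitionModel` interface named in the card; `clip.wav` is a hypothetical recording sampled at 16 kHz:

```python
from huggingsound import SpeechRecognitionModel

# Load the fine-tuned German checkpoint through HuggingSound.
model = SpeechRecognitionModel(
    "jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-5_austria-5_s3"
)

# Placeholder audio path; input must be sampled at 16 kHz, as the card notes.
transcriptions = model.transcribe(["clip.wav"])
print(transcriptions[0]["transcription"])
```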
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-5_austria-5_s803
b7fdeb1bc1b87cba95f4422394659a37d756defc
2022-07-23T14:32:51.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-5_austria-5_s803
1
null
transformers
33,213
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-5_austria-5_s803 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-5_austria-5_s95
1da1bb22b7b20b1d86402b57ed65afa43df9777b
2022-07-23T14:37:47.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-5_austria-5_s95
1
null
transformers
33,214
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-5_austria-5_s95 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s103
4b16a69d73f3789f8c8265c737a4f0932d2d4913
2022-07-25T02:46:08.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s103
1
null
transformers
33,215
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s103 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
techsword/ASCEND-wav2vec2-chinese-zh-cn
3f0297278ad56380d678480e921985b0f1957b91
2022-07-23T20:11:41.000Z
[ "pytorch", "wav2vec2", "feature-extraction", "transformers" ]
feature-extraction
false
techsword
null
techsword/ASCEND-wav2vec2-chinese-zh-cn
1
null
transformers
33,216
Entry not found
techsword/wav2vec-fame-dutch
71f3e89873186b624a849793282b5cfafcc2de2c
2022-07-23T21:01:48.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
techsword
null
techsword/wav2vec-fame-dutch
1
null
transformers
33,217
Entry not found
huggingtweets/vgdunkey-vgdunkeybot-videobotdunkey
d498b3fa5f92969c120f642b67c585db3d64bd9e
2022-07-23T21:11:28.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/vgdunkey-vgdunkeybot-videobotdunkey
1
null
transformers
33,218
--- language: en thumbnail: http://www.huggingtweets.com/vgdunkey-vgdunkeybot-videobotdunkey/1658610683659/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/676614171849453568/AZd1Bh-s_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/727879199931944961/vkkeC6d2_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/889145771760680960/F3g-pbn2_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">dunkey & dunkey bot & dunkey bot</div> <div style="text-align: center; font-size: 14px;">@vgdunkey-vgdunkeybot-videobotdunkey</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from dunkey & dunkey bot & dunkey bot. | Data | dunkey | dunkey bot | dunkey bot | | --- | --- | --- | --- | | Tweets downloaded | 1282 | 3200 | 911 | | Retweets | 147 | 0 | 1 | | Short tweets | 327 | 526 | 33 | | Tweets kept | 808 | 2674 | 877 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1gs4ik1d/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vgdunkey-vgdunkeybot-videobotdunkey's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/qqqwy9dp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/qqqwy9dp/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/vgdunkey-vgdunkeybot-videobotdunkey') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/bicyclingmag-bike24net-planetcyclery
4cd8ea253bc32506e28a607dadaba5b36e1513e4
2022-07-23T21:47:24.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/bicyclingmag-bike24net-planetcyclery
1
null
transformers
33,219
--- language: en thumbnail: http://www.huggingtweets.com/bicyclingmag-bike24net-planetcyclery/1658612826681/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/596705203358801920/mQ6ZGz9R_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/781477479332577280/OOud15hY_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/837440117505585152/kquV327z_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Bicycling Magazine & BIKE24 & Planet Cyclery</div> <div style="text-align: center; font-size: 14px;">@bicyclingmag-bike24net-planetcyclery</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Bicycling Magazine & BIKE24 & Planet Cyclery. | Data | Bicycling Magazine | BIKE24 | Planet Cyclery | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3200 | 1636 | | Retweets | 3 | 42 | 48 | | Short tweets | 31 | 231 | 22 | | Tweets kept | 3216 | 2927 | 1566 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/dpmz7fyw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bicyclingmag-bike24net-planetcyclery's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/15ynynm2) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/15ynynm2/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/bicyclingmag-bike24net-planetcyclery') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Aktsvigun/bart-base_abssum_wikihow_all_7629317
1ef3eae3874e713ddd32261bb0bd7cbb4a97bea8
2022-07-23T22:14:37.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Aktsvigun
null
Aktsvigun/bart-base_abssum_wikihow_all_7629317
1
null
transformers
33,220
Entry not found
circulus/kobart-trans-gyeongsang-v1
ed0cc46f35935ef934c5ecf58867fd621f310e6d
2022-07-25T06:48:10.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
circulus
null
circulus/kobart-trans-gyeongsang-v1
1
null
transformers
33,221
KoBART-based Gyeongsang dialect style conversion - Trained on the AI-HUB Gyeongsang dialect dataset. - Usage instructions will be posted soon.
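Since the card defers usage instructions, here is a hedged minimal sketch assuming this checkpoint works with the ordinary `transformers` text2text-generation pipeline; the input is an arbitrary standard-Korean sentence:

```python
from transformers import pipeline

# Style transfer from standard Korean to Gyeongsang dialect (assumed usage).
converter = pipeline(
    "text2text-generation",
    model="circulus/kobart-trans-gyeongsang-v1",
)

print(converter("오늘 날씨가 정말 좋네요."))
```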
circulus/kobart-trans-formal-v1
bc629a353f4a711c60852cee583e1244f9d16a8f
2022-07-24T01:59:24.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
circulus
null
circulus/kobart-trans-formal-v1
1
null
transformers
33,222
Entry not found
circulus/kobart-trans-jeolla-v1
1dbb72e7d2695d4c24c4261b1392907d317d35ce
2022-07-25T06:47:52.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
circulus
null
circulus/kobart-trans-jeolla-v1
1
null
transformers
33,223
KoBART-based Jeolla dialect style conversion - Trained on the AI-HUB Jeolla dialect dataset. - Usage instructions will be posted soon.
Aktsvigun/bart-base_abssum_wikihow_all_8653685
1ae7c3d13cd3d51c082c91775ca569ec24c95ca1
2022-07-24T08:27:53.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Aktsvigun
null
Aktsvigun/bart-base_abssum_wikihow_all_8653685
1
null
transformers
33,224
Entry not found
ArnavL/roberta-one_mil-imdb-0
c9e9a507a3d895ef0caecc8cffbc3083296e191b
2022-07-24T11:32:23.000Z
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
ArnavL
null
ArnavL/roberta-one_mil-imdb-0
1
null
transformers
33,225
Entry not found
SummerChiam/pond_image_classification_1
e98e59cfff05f96a9a7255080a4fc81bf864ee1b
2022-07-24T14:18:14.000Z
[ "pytorch", "tensorboard", "vit", "image-classification", "transformers", "huggingpics", "model-index" ]
image-classification
false
SummerChiam
null
SummerChiam/pond_image_classification_1
1
null
transformers
33,226
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: pond_image_classification results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9948979616165161 --- # pond_image_classification Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Algae ![Algae](images/Algae.png) #### Boiling ![Boiling](images/Boiling.png) #### BoilingNight ![BoilingNight](images/BoilingNight.png) #### Normal ![Normal](images/Normal.png) #### NormalCement ![NormalCement](images/NormalCement.png) #### NormalNight ![NormalNight](images/NormalNight.png) #### NormalRain ![NormalRain](images/NormalRain.png)
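A minimal inference sketch for this HuggingPics classifier (not in the original card), using the `transformers` image-classification pipeline; `pond.png` is a hypothetical image path:

```python
from transformers import pipeline

# Classify a pond image into the classes listed above (Algae, Boiling, Normal, ...).
classifier = pipeline(
    "image-classification",
    model="SummerChiam/pond_image_classification_1",
)

print(classifier("pond.png"))
```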
phamvanlinh143/dummy-model
48be254e772b4155c52dff6a8fb705d7f7b546ee
2022-07-24T16:11:05.000Z
[ "pytorch", "camembert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
phamvanlinh143
null
phamvanlinh143/dummy-model
1
null
transformers
33,227
Entry not found
Aktsvigun/bart-base_abssum_wikihow_all_5893459
f5070d415310b8b8dd28332520aebb2eead719f5
2022-07-24T18:14:32.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Aktsvigun
null
Aktsvigun/bart-base_abssum_wikihow_all_5893459
1
null
transformers
33,228
Entry not found
Konstantine4096/bart-large-pizza-50K
b607f4fed71262292d48dd2a4ff463c825f22f6f
2022-07-24T20:15:04.000Z
[ "pytorch", "tensorboard", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Konstantine4096
null
Konstantine4096/bart-large-pizza-50K
1
null
transformers
33,229
Entry not found
Konstantine4096/bart-large-pizza-20K
5ebe1dbb03e6832dfd669a32c24bd4086b390b2c
2022-07-25T01:06:26.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Konstantine4096
null
Konstantine4096/bart-large-pizza-20K
1
null
transformers
33,230
Entry not found
muhtasham/bertiny-finetuned-finer
5ade233d838dad2a8de8f9fcb48b2c970f4fbf01
2022-07-25T01:33:49.000Z
[ "pytorch", "bert", "token-classification", "dataset:finer-139", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
muhtasham
null
muhtasham/bertiny-finetuned-finer
1
1
transformers
33,231
--- license: apache-2.0 tags: - generated_from_trainer datasets: - finer-139 metrics: - precision - recall - f1 - accuracy model-index: - name: bertiny-finetuned-finer results: - task: name: Token Classification type: token-classification dataset: name: finer-139 type: finer-139 args: finer-139 metrics: - name: Precision type: precision value: 0.5339285714285714 - name: Recall type: recall value: 0.036011080332409975 - name: F1 type: f1 value: 0.06747151077513258 - name: Accuracy type: accuracy value: 0.9847166143263048 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bertiny-finetuned-finer This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the finer-139 dataset. It achieves the following results on the evaluation set: - Loss: 0.0882 - Precision: 0.5339 - Recall: 0.0360 - F1: 0.0675 - Accuracy: 0.9847 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0871 | 1.0 | 11255 | 0.0952 | 0.0 | 0.0 | 0.0 | 0.9843 | | 0.0864 | 2.0 | 22510 | 0.0895 | 0.7640 | 0.0082 | 0.0162 | 0.9844 | | 0.0929 | 3.0 | 33765 | 0.0882 | 0.5339 | 0.0360 | 0.0675 | 0.9847 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
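A minimal usage sketch for this token-classification checkpoint (not in the original card), assuming the standard `transformers` pipeline; the sentence is an illustrative financial-text example:

```python
from transformers import pipeline

# Token classification on financial text with the finer-139 fine-tuned checkpoint.
tagger = pipeline(
    "token-classification",
    model="muhtasham/bertiny-finetuned-finer",
    aggregation_strategy="simple",
)

print(tagger("Revenue for the quarter was $12.5 million, up 8% year over year."))
```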
muhtasham/bertiny-finetuned-finer-longer
055180146c0f172b3f2b204b20f39560b528f4b0
2022-07-27T04:36:44.000Z
[ "pytorch", "bert", "token-classification", "dataset:finer-139", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
muhtasham
null
muhtasham/bertiny-finetuned-finer-longer
1
null
transformers
33,232
--- license: apache-2.0 tags: - generated_from_trainer datasets: - finer-139 metrics: - precision - recall - f1 - accuracy model-index: - name: bertiny-finetuned-finer-full results: - task: name: Token Classification type: token-classification dataset: name: finer-139 type: finer-139 args: finer-139 metrics: - name: Precision type: precision value: 0.555368475586064 - name: Recall type: recall value: 0.5164398410213176 - name: F1 type: f1 value: 0.5351972041937094 - name: Accuracy type: accuracy value: 0.988733187308122 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bertiny-finetuned-finer-full This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the 10% of finer-139 dataset for 40 epochs according to paper. It achieves the following results on the evaluation set: - Loss: 0.0788 - Precision: 0.5554 - Recall: 0.5164 - F1: 0.5352 - Accuracy: 0.9887 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0852 | 1.0 | 11255 | 0.0929 | 1.0 | 0.0001 | 0.0002 | 0.9843 | | 0.08 | 2.0 | 22510 | 0.0840 | 0.4626 | 0.0730 | 0.1261 | 0.9851 | | 0.0759 | 3.0 | 33765 | 0.0750 | 0.5113 | 0.2035 | 0.2912 | 0.9865 | | 0.0569 | 4.0 | 45020 | 0.0673 | 0.4973 | 0.3281 | 0.3953 | 0.9872 | | 0.0488 | 5.0 | 56275 | 0.0635 | 0.5289 | 0.3749 | 0.4388 | 0.9878 | | 0.0422 | 6.0 | 67530 | 0.0606 | 0.5258 | 0.4068 | 0.4587 | 0.9880 | | 0.0364 | 7.0 | 78785 | 0.0600 | 0.5588 | 0.4186 | 0.4787 | 0.9883 | | 0.0307 | 8.0 | 90040 | 0.0589 | 0.5223 | 0.4916 | 0.5065 | 0.9883 | | 0.0284 | 9.0 | 101295 | 0.0595 | 0.5588 | 0.4813 | 0.5171 | 0.9887 | | 0.0255 | 10.0 | 112550 | 0.0597 | 0.5606 | 0.4944 | 0.5254 | 0.9888 | | 0.0223 | 11.0 | 123805 | 0.0600 | 0.5533 | 0.4998 | 0.5252 | 0.9888 | | 0.0228 | 12.0 | 135060 | 0.0608 | 0.5290 | 0.5228 | 0.5259 | 0.9885 | | 0.0225 | 13.0 | 146315 | 0.0612 | 0.5480 | 0.5111 | 0.5289 | 0.9887 | | 0.0204 | 14.0 | 157570 | 0.0634 | 0.5646 | 0.5120 | 0.5370 | 0.9890 | | 0.0176 | 15.0 | 168825 | 0.0639 | 0.5611 | 0.5135 | 0.5363 | 0.9889 | | 0.0167 | 16.0 | 180080 | 0.0647 | 0.5631 | 0.5120 | 0.5363 | 0.9888 | | 0.0161 | 17.0 | 191335 | 0.0665 | 0.5607 | 0.5081 | 0.5331 | 0.9889 | | 0.0145 | 18.0 | 202590 | 0.0673 | 0.5437 | 0.5280 | 0.5357 | 0.9887 | | 0.0166 | 19.0 | 213845 | 0.0687 | 0.5722 | 0.5008 | 0.5341 | 0.9889 | | 0.0155 | 20.0 | 225100 | 0.0685 | 0.5325 | 0.5337 | 0.5331 | 0.9885 | | 0.0142 | 21.0 | 236355 | 0.0705 | 0.5626 | 0.5166 | 0.5386 | 0.9890 | | 0.0127 | 22.0 | 247610 | 0.0694 | 0.5426 | 0.5358 | 0.5392 | 0.9887 | | 0.0112 | 23.0 | 258865 | 0.0721 | 0.5591 | 0.5129 | 0.5351 | 0.9888 | | 0.0123 | 24.0 | 270120 | 0.0733 | 0.5715 | 0.5081 | 0.5380 | 0.9889 | | 0.0116 | 25.0 | 281375 | 0.0735 | 0.5621 | 0.5123 | 0.5361 | 0.9888 | | 0.0112 | 26.0 | 292630 | 0.0739 | 
0.5634 | 0.5181 | 0.5398 | 0.9889 | | 0.0108 | 27.0 | 303885 | 0.0753 | 0.5548 | 0.5155 | 0.5344 | 0.9887 | | 0.0125 | 28.0 | 315140 | 0.0746 | 0.5507 | 0.5221 | 0.5360 | 0.9886 | | 0.0093 | 29.0 | 326395 | 0.0762 | 0.5602 | 0.5156 | 0.5370 | 0.9888 | | 0.0094 | 30.0 | 337650 | 0.0762 | 0.5625 | 0.5157 | 0.5381 | 0.9889 | | 0.0117 | 31.0 | 348905 | 0.0767 | 0.5519 | 0.5195 | 0.5352 | 0.9887 | | 0.0091 | 32.0 | 360160 | 0.0772 | 0.5501 | 0.5198 | 0.5345 | 0.9887 | | 0.0109 | 33.0 | 371415 | 0.0775 | 0.5635 | 0.5097 | 0.5353 | 0.9888 | | 0.0094 | 34.0 | 382670 | 0.0776 | 0.5467 | 0.5216 | 0.5339 | 0.9887 | | 0.009 | 35.0 | 393925 | 0.0782 | 0.5601 | 0.5139 | 0.5360 | 0.9889 | | 0.0093 | 36.0 | 405180 | 0.0780 | 0.5568 | 0.5156 | 0.5354 | 0.9888 | | 0.0087 | 37.0 | 416435 | 0.0783 | 0.5588 | 0.5143 | 0.5356 | 0.9888 | | 0.009 | 38.0 | 427690 | 0.0785 | 0.5483 | 0.5178 | 0.5326 | 0.9887 | | 0.0094 | 39.0 | 438945 | 0.0787 | 0.5541 | 0.5154 | 0.5340 | 0.9887 | | 0.0088 | 40.0 | 450200 | 0.0788 | 0.5554 | 0.5164 | 0.5352 | 0.9887 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
eclat12450/fine-tuned-NSPbert-14
3a8548cc2c607d6d31862202e2480684516ebaf0
2022-07-25T02:35:01.000Z
[ "pytorch", "bert", "next-sentence-prediction", "transformers" ]
null
false
eclat12450
null
eclat12450/fine-tuned-NSPbert-14
1
null
transformers
33,233
Entry not found
gciaffoni/wav2vec2-large-xls-r-300m-it-colab6up
3cf50f21f8461af8a0f0b6f3107cfef48ae2b394
2022-07-25T03:06:40.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
gciaffoni
null
gciaffoni/wav2vec2-large-xls-r-300m-it-colab6up
1
null
transformers
33,234
Entry not found
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s377
fcd4f586ae7dcf4bcc4043f581b169c17dce7efb
2022-07-25T02:51:01.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s377
1
null
transformers
33,235
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s377 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s756
65e0ca1ba053193d0c66e433a58676ecc05b78b6
2022-07-25T02:56:11.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s756
1
null
transformers
33,236
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s756 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-10_austria-0_s527
8bd3b236e57897baca97c54340371031d9d96dfc
2022-07-25T03:01:24.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-10_austria-0_s527
1
null
transformers
33,237
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-10_austria-0_s527 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-10_austria-0_s545
563008878c4348d9cb111fd65dd771be24edc0eb
2022-07-25T03:06:09.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-10_austria-0_s545
1
null
transformers
33,238
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-10_austria-0_s545 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-10_austria-0_s779
9be197aec34871f08a856ab90ad8828438ee9ccb
2022-07-25T03:11:07.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-10_austria-0_s779
1
null
transformers
33,239
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-10_austria-0_s779 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-2_austria-8_s468
1ff231fac5344e247ebae3333bbd58e14b376dfa
2022-07-25T03:15:54.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-2_austria-8_s468
1
null
transformers
33,240
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-2_austria-8_s468 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-2_austria-8_s732
7f9f60a64b20391c474fdbb0886fec52cbed26d4
2022-07-25T03:20:42.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-2_austria-8_s732
1
null
transformers
33,241
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-2_austria-8_s732 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-2_austria-8_s957
4be65f445785f68ed6f0a0c113952d68d257c7bf
2022-07-25T03:25:15.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-2_austria-8_s957
1
null
transformers
33,242
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-2_austria-8_s957 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-8_austria-2_s445
73446115c40c3307da8da03e98b7cc8379c1d523
2022-07-25T03:29:52.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-8_austria-2_s445
1
null
transformers
33,243
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-8_austria-2_s445 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-8_austria-2_s807
a8f9c00e5d0e220a41069eb8fa9ffd37ca082bec
2022-07-25T03:34:41.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-8_austria-2_s807
1
null
transformers
33,244
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-8_austria-2_s807 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-8_austria-2_s953
e559d6c2f4eaa6510a28f4e52a44e60ee8fc587e
2022-07-25T03:39:23.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-8_austria-2_s953
1
null
transformers
33,245
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-8_austria-2_s953 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-5_female-5_s286
62f007feb57a4905a99968873fc584c7008e32e3
2022-07-25T03:44:19.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-5_female-5_s286
1
null
transformers
33,246
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_gender_male-5_female-5_s286 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-5_female-5_s34
a2843107530c5396614e6897c7568c4fba6b3373
2022-07-25T03:49:16.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-5_female-5_s34
1
null
transformers
33,247
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_gender_male-5_female-5_s34 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-5_female-5_s841
362a0435572ab4565d3703fc4c425e2851108cd6
2022-07-25T03:53:42.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-5_female-5_s841
1
null
transformers
33,248
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_gender_male-5_female-5_s841 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-0_female-10_s601
402a0545d4a37c341c4ca29835c8208cf32813ac
2022-07-25T03:58:35.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-0_female-10_s601
1
null
transformers
33,249
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_gender_male-0_female-10_s601 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-0_female-10_s801
b3a7eecd42404ebbf84009bc5b07855fd46fd791
2022-07-25T04:04:30.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-0_female-10_s801
1
null
transformers
33,250
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_gender_male-0_female-10_s801 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-0_female-10_s889
d94735ba9ca3a271e32b6f60515306131ee3bdc2
2022-07-25T04:09:25.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-0_female-10_s889
1
null
transformers
33,251
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_gender_male-0_female-10_s889 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-10_female-0_s325
456948235bd4d3b32e791da2a91fb4e5c96f9f68
2022-07-25T04:14:27.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-10_female-0_s325
1
null
transformers
33,252
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_gender_male-10_female-0_s325 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
Maxaontrix/distilbert-base-uncased-finetuned-ner-finetuned-ner
890b93e8554782f57c23166ebac39bc57a5ff893
2022-07-25T06:39:25.000Z
[ "pytorch", "tensorboard", "distilbert", "token-classification", "dataset:skript", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
token-classification
false
Maxaontrix
null
Maxaontrix/distilbert-base-uncased-finetuned-ner-finetuned-ner
1
null
transformers
33,253
--- tags: - generated_from_trainer datasets: - skript metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: skript type: skript args: conll2003 metrics: - name: Precision type: precision value: 0.058091286307053944 - name: Recall type: recall value: 0.04498714652956298 - name: F1 type: f1 value: 0.05070626584570808 - name: Accuracy type: accuracy value: 0.7974446689319497 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner-finetuned-ner This model was trained from scratch on the skript dataset. It achieves the following results on the evaluation set: - Loss: 0.6713 - Precision: 0.0581 - Recall: 0.0450 - F1: 0.0507 - Accuracy: 0.7974 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 44 | 0.8207 | 0.0 | 0.0 | 0.0 | 0.7748 | | No log | 2.0 | 88 | 0.7113 | 0.0405 | 0.0231 | 0.0294 | 0.7889 | | No log | 3.0 | 132 | 0.6713 | 0.0581 | 0.0450 | 0.0507 | 0.7974 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
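For readers who want to reproduce a run like the one above, the listed hyperparameters map directly onto `transformers` `TrainingArguments`. The sketch below mirrors those values only; it is not the authors' script, and the output directory, model, tokenizer, and `skript` dataset loading are placeholders or omitted.

```python
# Hedged sketch of TrainingArguments matching the hyperparameters reported above.
# The surrounding Trainer setup (model, tokenizer, datasets, metrics) is omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ner-finetuned-ner",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer.
)
```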
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-10_female-0_s504
3aca13a9d569eb44617833436623ea15344d32fb
2022-07-25T04:19:07.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-10_female-0_s504
1
null
transformers
33,254
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_gender_male-10_female-0_s504 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-10_female-0_s75
622cdcf76e791430e76133fe84f39daad737dac6
2022-07-25T04:24:10.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-10_female-0_s75
1
null
transformers
33,255
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_gender_male-10_female-0_s75 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-2_female-8_s108
f1c5bf0ecd6d4e5a28dc2a378823d739112908de
2022-07-25T04:29:06.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-2_female-8_s108
1
null
transformers
33,256
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_gender_male-2_female-8_s108 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-2_female-8_s211
82d11a9492858ef88494c24fd919869262db555b
2022-07-25T04:33:58.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-2_female-8_s211
1
null
transformers
33,257
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_gender_male-2_female-8_s211 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-2_female-8_s364
2d701aa315f515f5ef07293a841a9b91b246021c
2022-07-25T04:38:52.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-2_female-8_s364
1
null
transformers
33,258
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_gender_male-2_female-8_s364 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
emilylearning/cond_ft_none_on_reddit__prcnt_na__test_run_True__bert-base-uncased
1b398433a27a0fcdb98a86f605ee089f5e796dbd
2022-07-26T05:20:57.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
emilylearning
null
emilylearning/cond_ft_none_on_reddit__prcnt_na__test_run_True__bert-base-uncased
1
null
transformers
33,259
Entry not found
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-8_female-2_s129
fd84eb02049b36d248a007de1605bf5a40c3562c
2022-07-25T04:43:26.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-8_female-2_s129
1
null
transformers
33,260
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_gender_male-8_female-2_s129 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
emilylearning/cond_ft_subreddit_on_reddit__prcnt_na__test_run_True__bert-base-uncased
ee3010046f5f5486c83af4563437f187865ad45c
2022-07-26T05:51:24.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
emilylearning
null
emilylearning/cond_ft_subreddit_on_reddit__prcnt_na__test_run_True__bert-base-uncased
1
null
transformers
33,261
Entry not found
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-8_female-2_s874
83cbe97200a5d1b51fbd4e2da327598a98db26d2
2022-07-25T04:53:00.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-8_female-2_s874
1
null
transformers
33,262
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_gender_male-8_female-2_s874 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-5_england-5_s203
b26c5f9d8a116b438296ff8cab2c51f6ab35a73d
2022-07-25T04:57:41.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-5_england-5_s203
1
null
transformers
33,263
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_accent_us-5_england-5_s203 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-5_england-5_s878
5b7cab82550c1e407012f26e3f0928c746f40de4
2022-07-25T05:02:23.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-5_england-5_s878
1
null
transformers
33,264
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_accent_us-5_england-5_s878 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-5_england-5_s924
9b1fb231bc6cdf0ca9dba85d11e4995e71254117
2022-07-25T05:07:34.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-5_england-5_s924
1
null
transformers
33,265
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_accent_us-5_england-5_s924 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-0_england-10_s227
cf0be16e9f289702999eb917b985b98d4893a87f
2022-07-25T05:13:29.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-0_england-10_s227
1
null
transformers
33,266
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_accent_us-0_england-10_s227 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-0_england-10_s809
e27543b82bb81062cae0a3b05fd8dfec5095ce3d
2022-07-25T05:19:31.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-0_england-10_s809
1
null
transformers
33,267
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_accent_us-0_england-10_s809 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-10_england-0_s44
77f7d355d0ba7b78480d107de52163f00790c09c
2022-07-25T05:28:41.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-10_england-0_s44
1
null
transformers
33,268
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_accent_us-10_england-0_s44 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-10_england-0_s863
332dfe16370779fb97cf552f30404bab6aeca771
2022-07-25T05:33:32.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-10_england-0_s863
1
null
transformers
33,269
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_accent_us-10_england-0_s863 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-10_england-0_s93
986b714d459e7a3481604323b0f789e8a7fc27b4
2022-07-25T05:38:05.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-10_england-0_s93
1
null
transformers
33,270
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_accent_us-10_england-0_s93 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-2_england-8_s251
16f688f9efadf7bf3b7a8628628e1f46e237a4a9
2022-07-25T05:43:01.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-2_england-8_s251
1
null
transformers
33,271
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_accent_us-2_england-8_s251 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-2_england-8_s456
5001490469a04f5a68ecab29428e9db7d58b26b0
2022-07-25T05:47:42.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-2_england-8_s456
1
null
transformers
33,272
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_accent_us-2_england-8_s456 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-2_england-8_s459
f3b51edc4ebd1b90a70e64354464e7bcdfb6f27a
2022-07-25T05:52:22.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-2_england-8_s459
1
null
transformers
33,273
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_accent_us-2_england-8_s459 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-8_england-2_s596
eca30b88ee67209c0c87de3562b7d3eeb4b5192e
2022-07-25T05:57:09.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-8_england-2_s596
1
null
transformers
33,274
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_accent_us-8_england-2_s596 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-8_england-2_s875
5273ec30b9a4b3924a4942fd7d91f9881369fb46
2022-07-25T06:01:57.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-8_england-2_s875
1
null
transformers
33,275
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_accent_us-8_england-2_s875 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-8_england-2_s877
a727243b0b3f9a3ccfe953626e17b4d6d2ab8c53
2022-07-25T06:06:45.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-8_england-2_s877
1
null
transformers
33,276
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_accent_us-8_england-2_s877 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-5_female-5_s186
96ea658d491e850b1a309d2709eabab8d6dc6ab7
2022-07-25T06:11:30.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-5_female-5_s186
1
null
transformers
33,277
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-5_female-5_s186 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-5_female-5_s474
9ca7ccf56f836f2952a6a69cf15052704e511245
2022-07-25T06:16:18.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-5_female-5_s474
1
null
transformers
33,278
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-5_female-5_s474 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-5_female-5_s952
b5355dbc604b6cf622529524ca62bbae1292e9e6
2022-07-25T06:20:57.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-5_female-5_s952
1
null
transformers
33,279
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-5_female-5_s952 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-0_female-10_s169
7e739c2b38c70a0cc89e704f075466d60a9d49eb
2022-07-25T06:25:38.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-0_female-10_s169
1
null
transformers
33,280
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-0_female-10_s169 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-0_female-10_s281
b9fe50b8c153d88720d9c44c93654c8c5731116c
2022-07-25T06:30:15.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-0_female-10_s281
1
null
transformers
33,281
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-0_female-10_s281 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-0_female-10_s980
a023e0e2a0e6ddf4aa842b0035bf792ade19ad7b
2022-07-25T06:35:05.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-0_female-10_s980
1
null
transformers
33,282
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-0_female-10_s980 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-10_female-0_s118
d004c4d4d5b0c91155ac95fc970d8f8c4c1ad7fd
2022-07-25T06:39:36.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-10_female-0_s118
1
null
transformers
33,283
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-10_female-0_s118 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-10_female-0_s51
aefd80f3ca2de037fe971d2d3baa17d4ea42ce24
2022-07-25T06:44:24.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-10_female-0_s51
1
null
transformers
33,284
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-10_female-0_s51 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-10_female-0_s691
39f384b4d870b634d68efe8518749b742977c5af
2022-07-25T06:50:06.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-10_female-0_s691
1
null
transformers
33,285
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-10_female-0_s691 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-2_female-8_s179
9123953ba5064436afc6fd2aea4ebd103637e15f
2022-07-25T06:54:39.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-2_female-8_s179
1
null
transformers
33,286
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-2_female-8_s179 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-2_female-8_s320
33d7a393ff5d3bf72466d419bcfbf05185f92e96
2022-07-25T06:59:14.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-2_female-8_s320
1
null
transformers
33,287
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-2_female-8_s320 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-2_female-8_s438
716b91bbedc410c085a776774cd4ec6ce7d679b8
2022-07-25T07:03:55.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-2_female-8_s438
1
null
transformers
33,288
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-2_female-8_s438 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-8_female-2_s250
cca7e048962051c28111a6dcbab81180f9d7607b
2022-07-25T07:08:25.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-8_female-2_s250
1
null
transformers
33,289
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-8_female-2_s250 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-8_female-2_s515
dc0994d881c641daa64afc48d9f37ae19d6018bb
2022-07-25T07:13:08.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-8_female-2_s515
1
null
transformers
33,290
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-8_female-2_s515 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-8_female-2_s859
57d141fb881984a934c4dfae2f4bb285f795f6fe
2022-07-25T07:18:00.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-8_female-2_s859
1
null
transformers
33,291
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_gender_male-8_female-2_s859 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-5_nortepeninsular-5_s324
f844665b7bf7982d67a9c535074e70cebfd16cd3
2022-07-25T07:22:43.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-5_nortepeninsular-5_s324
1
null
transformers
33,292
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_es_vp-100k_accent_surpeninsular-5_nortepeninsular-5_s324 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-5_nortepeninsular-5_s411
629ce01723e29fc75d939433942f0f211d1ff1f9
2022-07-25T07:29:05.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-5_nortepeninsular-5_s411
1
null
transformers
33,293
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_es_vp-100k_accent_surpeninsular-5_nortepeninsular-5_s411 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
thu-coai/EVA1.0
6fa708325eabba8d7f014557bedfb3cd87e1b185
2022-07-25T09:17:23.000Z
[ "pytorch", "zh", "arxiv:2108.01547", "arxiv:2203.09313", "transformers", "license:mit" ]
null
false
thu-coai
null
thu-coai/EVA1.0
1
null
transformers
33,294
--- language: zh tags: - pytorch license: mit --- # EVA ## Model Description EVA is the largest open-source Chinese dialogue model with up to 2.8B parameters. The 1.0 version model is pre-trained on [WudaoCorpus-Dialog](https://resource.wudaoai.cn/home), and the 2.0 version is pre-trained on a carefully cleaned version of WudaoCorpus-Dialog which yields better performance than the 1.0 version. [Paper link](https://arxiv.org/abs/2108.01547) of EVA1.0. [Paper link](https://arxiv.org/abs/2203.09313) of EVA2.0. ## Model Configuration | Model | n_params | n_enc-layers | n_dec-layers | d_model | d_ff | n_heads | d_head | attn-scale | | ------------- | -------- | ------------ | ------------ | ------- | ----- | ------- | ------ | ---------- | | EVA1.0 | 2.8B | 24 | 24 | 2,048 | 5,120 | 32 | 64 | No | | EVA2.0_Base | 300M | 12 | 12 | 768 | 3,072 | 12 | 64 | Yes | | EVA2.0_Large | 970M | 24 | 24 | 1,024 | 4,096 | 16 | 64 | Yes | | EVA2.0_xLarge | 2.8B | 24 | 24 | 2,048 | 5,120 | 32 | 64 | Yes | ## How to use Please refer to our GitHub [repository](https://github.com/thu-coai/EVA/tree/huggingface). ## Performance We use the KdConv dataset to finetune and evaluate the model. Under the default hyperparameters in the scripts, we get the following results on the test set: | | loss | BLEU-4 | Dist-4 | f1 | | ---- | ---- | ---- | ---- | ---- | | eva1.0 Before finetune | 3.49 | 0.64 | 85.94 | 13.24 | | eva1.0 After finetune | 2.37 | 3.94 | 72.47 | 19.80 | | eva2.0 Before finetune | 3.14 | 1.02 | 82.67 | 14.36 | | eva2.0 After finetune | 2.09 | 4.69 | 73.03 | 22.91 | We also use beam search to run the interactive inference of EVA2.0: ```[] Usr >>> 你好 Sys >>> 你好,请问你是? Usr >>> 我是小明 Sys >>> 好的,小明,我是新来的,请多关照 Usr >>> 你了解人工智能吗? Sys >>> 不了解,我只知道我是机器人 Usr >>> 那你很聪明 Sys >>> 谢谢夸奖,我会继续努力的 Usr >>> 你是学什么专业的呀 Sys >>> 计算机科学与技术,你呢? Usr >>> 我也是学计算机的 Sys >>> 哦,那你有什么想知道的,可以问我 Usr >>> 编程语言你学过哪些? Sys >>> java,c,c+都学过 Usr >>> 你学的真多 Sys >>> 呵呵,还好吧,我比较喜欢写代码 Usr >>> 哈哈,我也喜欢写代码,以后可以多多交流 Sys >>> 嗯,好的 Usr >>> 拜拜喽 Sys >>> 拜~ ``` **NOTE**: Since different hardware may have different implementations of random functions, even if you use the same random seed as ours, you may not be able to reproduce this case. But the overall performance will not differ much. ## Disclaimer The pre-trained models aim to facilitate research on conversation generation. The model provided in this repository is trained on a large dataset collected from various sources. Although a rigorous cleaning and filtering process has been carried out on the data and the model output, there is no guarantee that all inappropriate content has been completely filtered out. The content generated by the model does not represent the authors' opinions. The decoding script provided in this repository is only for research purposes. We are not responsible for any content generated using our model.
## Citation ``` @article{coai2021eva, title={EVA: An Open-Domain Chinese Dialogue System with Large-Scale Generative Pre-Training}, author={Zhou, Hao and Ke, Pei and Zhang, Zheng and Gu, Yuxian and Zheng, Yinhe and Zheng, Chujie and Wang, Yida and Wu, Chen Henry and Sun, Hao and Yang, Xiaocong and Wen, Bosi and Zhu, Xiaoyan and Huang, Minlie and Tang, Jie}, journal={arXiv preprint arXiv:2108.01547}, year={2021} } @article{coai2022eva2, title={{EVA2.0}: Investigating Open-Domain Chinese Dialogue Systems with Large-Scale Pre-Training}, author={Gu, Yuxian and Wen, Jiaxin and Sun, Hao and Song, Yi and Ke, Pei and Zheng, Chujie and Zhang, Zheng and Yao, Jianzhu and Zhu, Xiaoyan and Tang, Jie and Huang, Minlie}, journal={arXiv preprint arXiv:2203.09313}, year={2022} } ```
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-5_nortepeninsular-5_s965
1d350b935e40330be4acb770c52406cd0d41287d
2022-07-25T07:33:45.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-5_nortepeninsular-5_s965
1
null
transformers
33,295
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_es_vp-100k_accent_surpeninsular-5_nortepeninsular-5_s965 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-0_nortepeninsular-10_s211
f3187bf67407b08204f66a4a6a4dd777a13c61b4
2022-07-25T07:38:20.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-0_nortepeninsular-10_s211
1
null
transformers
33,296
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_es_vp-100k_accent_surpeninsular-0_nortepeninsular-10_s211 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-0_nortepeninsular-10_s609
ce7a19564b311380c78bc0826269a50439122a29
2022-07-25T07:46:02.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-0_nortepeninsular-10_s609
1
null
transformers
33,297
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_es_vp-100k_accent_surpeninsular-0_nortepeninsular-10_s609 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-0_nortepeninsular-10_s692
67fd3a0127b995e50c43ff25c6750f4d17b6f929
2022-07-25T07:50:51.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-0_nortepeninsular-10_s692
1
null
transformers
33,298
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_es_vp-100k_accent_surpeninsular-0_nortepeninsular-10_s692 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-10_nortepeninsular-0_s222
ef59112ce59d2e79af567f431d9e062e0e9dbcac
2022-07-25T07:55:31.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-10_nortepeninsular-0_s222
1
null
transformers
33,299
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - es datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_es_vp-100k_accent_surpeninsular-10_nortepeninsular-0_s222 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.