modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-14 00:44:55) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 519 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-14 00:44:41) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
jonatasgrosman/exp_w2v2t_it_vp-100k_s449 | jonatasgrosman | 2022-07-08T19:01:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T19:00:43Z | ---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_vp-100k_s449
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
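For reference, a minimal inference sketch (not part of the original card), assuming the checkpoint ships the usual processor files and using `librosa` to resample a placeholder audio file to the expected 16 kHz:
```python
# Hypothetical usage sketch: transcribe one audio file with this checkpoint.
# "sample.wav" is a placeholder path; librosa resamples it to 16 kHz.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "jonatasgrosman/exp_w2v2t_it_vp-100k_s449"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)  # model expects 16 kHz input
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```
The same pattern applies to the other HuggingSound wav2vec2-style checkpoints listed below; only the `model_id` changes.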
|
jonatasgrosman/exp_w2v2t_it_vp-100k_s149 | jonatasgrosman | 2022-07-08T18:56:07Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T18:55:24Z | ---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_vp-100k_s149
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
domenicrosati/SPECTER-with-biblio-context-finetuned-review_classifier | domenicrosati | 2022-07-08T18:53:09Z | 27 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-08T13:43:12Z | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: SPECTER-with-biblio-context-finetuned-review_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SPECTER-with-biblio-context-finetuned-review_classifier
This model is a fine-tuned version of [allenai/specter](https://huggingface.co/allenai/specter) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1284
- Accuracy: 0.962
- F1: 0.7892
- Recall: 0.7593
- Precision: 0.8216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.1956 | 1.0 | 6667 | 0.1805 | 0.9514 | 0.7257 | 0.6860 | 0.7702 |
| 0.135 | 2.0 | 13334 | 0.1284 | 0.962 | 0.7892 | 0.7593 | 0.8216 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
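A hedged inference sketch (not from the card itself) showing how such a checkpoint could be queried through the `transformers` pipeline; the input string is invented and the label names depend on the fine-tuning configuration:
```python
# Hypothetical usage sketch for the review classifier; the example input is made up.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="domenicrosati/SPECTER-with-biblio-context-finetuned-review_classifier",
)
print(classifier("A systematic review of transformer models for citation screening."))
# -> [{'label': ..., 'score': ...}]  (label names come from the fine-tuning config)
```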
|
saekomdalkom/long-t5-local-base-finetuned | saekomdalkom | 2022-07-08T18:48:45Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-07-01T05:40:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: long-t5-local-base-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# long-t5-local-base-finetuned
This model is a fine-tuned version of [google/long-t5-local-base](https://huggingface.co/google/long-t5-local-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 9.2722
- Rouge1: 3.8848
- Rouge2: 0.5914
- Rougel: 3.5038
- Rougelsum: 3.7022
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 0.16 | 100 | 342.4395 | 0.0 | 0.0 | 0.0 | 0.0 | 19.0 |
| No log | 0.31 | 200 | 323.6985 | 0.0 | 0.0 | 0.0 | 0.0 | 19.0 |
| No log | 0.47 | 300 | 303.8767 | 0.0 | 0.0 | 0.0 | 0.0 | 19.0 |
| No log | 0.62 | 400 | 284.7559 | 0.0 | 0.0 | 0.0 | 0.0 | 19.0 |
| 295.8376 | 0.78 | 500 | 263.0420 | 0.0 | 0.0 | 0.0 | 0.0 | 19.0 |
| 295.8376 | 0.93 | 600 | 243.2220 | 0.0242 | 0.0 | 0.0223 | 0.0242 | 19.0 |
| 295.8376 | 1.09 | 700 | 224.4514 | 0.0493 | 0.0 | 0.0507 | 0.0513 | 19.0 |
| 295.8376 | 1.24 | 800 | 203.9065 | 0.0656 | 0.0 | 0.0634 | 0.0658 | 19.0 |
| 295.8376 | 1.4 | 900 | 184.8686 | 0.0609 | 0.0 | 0.058 | 0.0616 | 19.0 |
| 199.938 | 1.55 | 1000 | 167.5315 | 0.0638 | 0.0 | 0.0626 | 0.063 | 19.0 |
| 199.938 | 1.71 | 1100 | 151.2369 | 0.0421 | 0.0 | 0.0411 | 0.0413 | 19.0 |
| 199.938 | 1.86 | 1200 | 137.2366 | 0.0358 | 0.0 | 0.0346 | 0.0342 | 19.0 |
| 199.938 | 2.02 | 1300 | 125.3076 | 0.0173 | 0.0 | 0.0157 | 0.0157 | 19.0 |
| 199.938 | 2.17 | 1400 | 114.5600 | 0.0173 | 0.0 | 0.0157 | 0.0157 | 19.0 |
| 136.1309 | 2.33 | 1500 | 105.9237 | 0.0361 | 0.0 | 0.0344 | 0.0363 | 19.0 |
| 136.1309 | 2.48 | 1600 | 97.4123 | 0.0526 | 0.0 | 0.051 | 0.054 | 19.0 |
| 136.1309 | 2.64 | 1700 | 89.0873 | 0.0427 | 0.0 | 0.0407 | 0.0418 | 19.0 |
| 136.1309 | 2.79 | 1800 | 82.0562 | 0.0496 | 0.0 | 0.0462 | 0.0462 | 19.0 |
| 136.1309 | 2.95 | 1900 | 76.2360 | 0.0361 | 0.0 | 0.0345 | 0.0363 | 19.0 |
| 99.2229 | 3.1 | 2000 | 70.0604 | 0.0438 | 0.0 | 0.0425 | 0.0439 | 19.0 |
| 99.2229 | 3.26 | 2100 | 65.1038 | 0.0454 | 0.0 | 0.0441 | 0.0447 | 19.0 |
| 99.2229 | 3.41 | 2200 | 59.1831 | 0.0344 | 0.0 | 0.0318 | 0.0318 | 19.0 |
| 99.2229 | 3.57 | 2300 | 53.0313 | 0.0471 | 0.0 | 0.0448 | 0.0454 | 19.0 |
| 99.2229 | 3.72 | 2400 | 48.2110 | 0.0369 | 0.0 | 0.0369 | 0.0369 | 19.0 |
| 73.4208 | 3.88 | 2500 | 44.2004 | 0.0425 | 0.0 | 0.0427 | 0.044 | 19.0 |
| 73.4208 | 4.03 | 2600 | 40.1925 | 0.0632 | 0.0 | 0.0619 | 0.0612 | 19.0 |
| 73.4208 | 4.19 | 2700 | 36.3698 | 0.0887 | 0.0 | 0.0873 | 0.086 | 19.0 |
| 73.4208 | 4.34 | 2800 | 33.2154 | 0.164 | 0.0 | 0.1652 | 0.1705 | 19.0 |
| 73.4208 | 4.5 | 2900 | 30.9366 | 0.1106 | 0.0 | 0.1138 | 0.1144 | 19.0 |
| 55.6661 | 4.65 | 3000 | 28.5672 | 0.1289 | 0.0 | 0.1295 | 0.131 | 19.0 |
| 55.6661 | 4.81 | 3100 | 27.0910 | 0.2501 | 0.0 | 0.2514 | 0.2527 | 19.0 |
| 55.6661 | 4.96 | 3200 | 25.6666 | 0.318 | 0.0 | 0.3322 | 0.3203 | 19.0 |
| 55.6661 | 5.12 | 3300 | 24.6176 | 0.6319 | 0.0 | 0.6419 | 0.6299 | 19.0 |
| 55.6661 | 5.27 | 3400 | 23.6474 | 1.6632 | 0.0033 | 1.665 | 1.6244 | 19.0 |
| 45.1105 | 5.43 | 3500 | 22.7063 | 3.1374 | 0.0 | 3.1331 | 3.1333 | 19.0 |
| 45.1105 | 5.58 | 3600 | 21.9191 | 5.0757 | 0.0 | 5.0694 | 5.0456 | 19.0 |
| 45.1105 | 5.74 | 3700 | 21.3359 | 5.6576 | 0.0 | 5.689 | 5.6772 | 19.0 |
| 45.1105 | 5.89 | 3800 | 20.6990 | 5.828 | 0.0 | 5.8801 | 5.8688 | 19.0 |
| 45.1105 | 6.05 | 3900 | 20.1800 | 6.3727 | 0.0 | 6.3801 | 6.3716 | 19.0 |
| 39.6923 | 6.2 | 4000 | 19.7415 | 6.2209 | 0.0 | 6.2347 | 6.2368 | 19.0 |
| 39.6923 | 6.36 | 4100 | 19.2800 | 5.7215 | 0.0 | 5.7452 | 5.7295 | 19.0 |
| 39.6923 | 6.51 | 4200 | 18.9683 | 6.1018 | 0.0062 | 6.1 | 6.0935 | 19.0 |
| 39.6923 | 6.67 | 4300 | 18.5776 | 6.0354 | 0.0062 | 6.0227 | 6.0103 | 19.0 |
| 39.6923 | 6.82 | 4400 | 18.2629 | 5.4438 | 0.0062 | 5.441 | 5.4629 | 19.0 |
| 36.1688 | 6.98 | 4500 | 18.0268 | 5.3214 | 0.0091 | 5.3093 | 5.2992 | 19.0 |
| 36.1688 | 7.13 | 4600 | 17.7740 | 5.2223 | 0.0123 | 5.2132 | 5.2084 | 19.0 |
| 36.1688 | 7.29 | 4700 | 17.5345 | 5.178 | 0.0231 | 5.1615 | 5.1243 | 19.0 |
| 36.1688 | 7.44 | 4800 | 17.3846 | 5.3899 | 0.0277 | 5.3414 | 5.3534 | 19.0 |
| 36.1688 | 7.6 | 4900 | 17.1999 | 5.315 | 0.0272 | 5.2572 | 5.2477 | 19.0 |
| 33.5745 | 7.75 | 5000 | 17.0078 | 5.9014 | 0.028 | 5.8181 | 5.8058 | 19.0 |
| 33.5745 | 7.91 | 5100 | 16.6418 | 5.7546 | 0.0242 | 5.6903 | 5.6746 | 19.0 |
| 33.5745 | 8.06 | 5200 | 16.6330 | 6.6893 | 0.0182 | 6.6354 | 6.6178 | 19.0 |
| 33.5745 | 8.22 | 5300 | 16.3423 | 6.1679 | 0.0072 | 6.1518 | 6.128 | 19.0 |
| 33.5745 | 8.37 | 5400 | 16.2373 | 6.7659 | 0.0139 | 6.7271 | 6.7076 | 19.0 |
| 31.9486 | 8.53 | 5500 | 16.1523 | 7.1991 | 0.0139 | 7.1674 | 7.1283 | 19.0 |
| 31.9486 | 8.68 | 5600 | 16.0607 | 7.7042 | 0.0169 | 7.6741 | 7.6537 | 19.0 |
| 31.9486 | 8.84 | 5700 | 15.7647 | 7.1238 | 0.02 | 7.1113 | 7.0586 | 19.0 |
| 31.9486 | 8.99 | 5800 | 15.6194 | 7.3055 | 0.0116 | 7.3311 | 7.2683 | 19.0 |
| 31.9486 | 9.15 | 5900 | 15.4994 | 7.3365 | 0.0139 | 7.3026 | 7.2708 | 19.0 |
| 30.5224 | 9.3 | 6000 | 15.4207 | 8.1959 | 0.0116 | 8.1917 | 8.1651 | 19.0 |
| 30.5224 | 9.46 | 6100 | 15.2981 | 7.7936 | 0.0144 | 7.7826 | 7.7488 | 19.0 |
| 30.5224 | 9.61 | 6200 | 15.2391 | 7.95 | 0.0144 | 7.9371 | 7.895 | 19.0 |
| 30.5224 | 9.77 | 6300 | 15.0941 | 7.1669 | 0.0144 | 7.146 | 7.1251 | 19.0 |
| 30.5224 | 9.92 | 6400 | 14.9979 | 6.2157 | 0.0076 | 6.2086 | 6.1774 | 19.0 |
| 29.1236 | 10.08 | 6500 | 14.9523 | 7.4422 | 0.0137 | 7.3929 | 7.393 | 19.0 |
| 29.1236 | 10.23 | 6600 | 14.9515 | 7.2375 | 0.0137 | 7.1728 | 7.1779 | 19.0 |
| 29.1236 | 10.39 | 6700 | 14.8874 | 7.5071 | 0.0068 | 7.4544 | 7.4739 | 19.0 |
| 29.1236 | 10.54 | 6800 | 14.8057 | 5.9608 | 0.0169 | 5.8754 | 5.8691 | 19.0 |
| 29.1236 | 10.7 | 6900 | 14.6818 | 5.6345 | 0.021 | 5.5422 | 5.5331 | 19.0 |
| 28.314 | 10.85 | 7000 | 14.5409 | 5.5799 | 0.0169 | 5.4915 | 5.4833 | 19.0 |
| 28.314 | 11.01 | 7100 | 14.4512 | 4.3498 | 0.0368 | 4.2243 | 4.2193 | 19.0 |
| 28.314 | 11.16 | 7200 | 14.4560 | 4.0453 | 0.0372 | 3.9481 | 3.9228 | 19.0 |
| 28.314 | 11.32 | 7300 | 14.3851 | 5.1332 | 0.0426 | 5.0186 | 4.9882 | 19.0 |
| 28.314 | 11.47 | 7400 | 14.2265 | 4.8944 | 0.0371 | 4.7869 | 4.7765 | 19.0 |
| 27.5349 | 11.63 | 7500 | 14.1214 | 3.8846 | 0.0335 | 3.7882 | 3.7677 | 19.0 |
| 27.5349 | 11.78 | 7600 | 14.1505 | 3.9992 | 0.0514 | 3.883 | 3.8385 | 19.0 |
| 27.5349 | 11.94 | 7700 | 13.9923 | 3.4526 | 0.0664 | 3.325 | 3.3258 | 19.0 |
| 27.5349 | 12.09 | 7800 | 14.0299 | 2.3086 | 0.0346 | 2.25 | 2.219 | 19.0 |
| 27.5349 | 12.25 | 7900 | 13.9814 | 2.4402 | 0.0628 | 2.3282 | 2.3004 | 19.0 |
| 26.4286 | 12.4 | 8000 | 13.8561 | 2.9869 | 0.0654 | 2.8769 | 2.8485 | 19.0 |
| 26.4286 | 12.56 | 8100 | 13.8259 | 1.9609 | 0.0386 | 1.8863 | 1.8846 | 19.0 |
| 26.4286 | 12.71 | 8200 | 13.8127 | 2.0628 | 0.0355 | 1.9915 | 1.9738 | 19.0 |
| 26.4286 | 12.87 | 8300 | 13.7174 | 1.9904 | 0.081 | 1.888 | 1.9069 | 19.0 |
| 26.4286 | 13.02 | 8400 | 13.6308 | 2.1398 | 0.1055 | 2.0204 | 2.0468 | 19.0 |
| 26.108 | 13.18 | 8500 | 13.6490 | 1.8934 | 0.0788 | 1.7942 | 1.8188 | 19.0 |
| 26.108 | 13.33 | 8600 | 13.5996 | 1.8746 | 0.0901 | 1.7441 | 1.8006 | 19.0 |
| 26.108 | 13.49 | 8700 | 13.5394 | 1.7846 | 0.0895 | 1.6648 | 1.7331 | 19.0 |
| 26.108 | 13.64 | 8800 | 13.5368 | 2.1345 | 0.1287 | 1.9808 | 2.0814 | 19.0 |
| 26.108 | 13.8 | 8900 | 13.4793 | 2.5234 | 0.1611 | 2.3289 | 2.4292 | 19.0 |
| 25.4931 | 13.95 | 9000 | 13.3633 | 2.8056 | 0.1953 | 2.5619 | 2.7088 | 19.0 |
| 25.4931 | 14.11 | 9100 | 13.5182 | 3.087 | 0.2192 | 2.8182 | 2.9928 | 19.0 |
| 25.4931 | 14.26 | 9200 | 13.3372 | 2.6353 | 0.175 | 2.4145 | 2.589 | 19.0 |
| 25.4931 | 14.42 | 9300 | 13.2822 | 2.7577 | 0.1905 | 2.5277 | 2.7215 | 19.0 |
| 25.4931 | 14.57 | 9400 | 13.2011 | 3.1891 | 0.2381 | 2.9276 | 3.142 | 19.0 |
| 24.9241 | 14.73 | 9500 | 13.2201 | 2.609 | 0.1683 | 2.4162 | 2.5905 | 19.0 |
| 24.9241 | 14.88 | 9600 | 13.2206 | 3.1083 | 0.2241 | 2.8627 | 3.0606 | 19.0 |
| 24.9241 | 15.04 | 9700 | 13.2157 | 3.6233 | 0.2731 | 3.338 | 3.5642 | 19.0 |
| 24.9241 | 15.19 | 9800 | 13.1195 | 3.1785 | 0.2318 | 2.9449 | 3.1306 | 19.0 |
| 24.9241 | 15.35 | 9900 | 13.0481 | 3.0249 | 0.2192 | 2.7991 | 2.9925 | 19.0 |
| 24.4511 | 15.5 | 10000 | 13.0693 | 3.1189 | 0.2287 | 2.8726 | 3.0669 | 19.0 |
| 24.4511 | 15.66 | 10100 | 12.9204 | 2.6405 | 0.1899 | 2.4337 | 2.61 | 19.0 |
| 24.4511 | 15.81 | 10200 | 12.9200 | 2.9037 | 0.2148 | 2.6775 | 2.8683 | 19.0 |
| 24.4511 | 15.97 | 10300 | 12.9203 | 2.8847 | 0.2034 | 2.6586 | 2.8438 | 19.0 |
| 24.4511 | 16.12 | 10400 | 12.8723 | 2.8195 | 0.1976 | 2.5922 | 2.7803 | 19.0 |
| 23.8949 | 16.28 | 10500 | 12.9749 | 3.2658 | 0.2217 | 2.9905 | 3.2262 | 19.0 |
| 23.8949 | 16.43 | 10600 | 12.7975 | 2.9762 | 0.1844 | 2.7295 | 2.9474 | 19.0 |
| 23.8949 | 16.59 | 10700 | 12.7497 | 2.5496 | 0.1406 | 2.3536 | 2.5269 | 19.0 |
| 23.8949 | 16.74 | 10800 | 12.6485 | 2.5509 | 0.1454 | 2.343 | 2.5182 | 19.0 |
| 23.8949 | 16.9 | 10900 | 12.6574 | 2.1914 | 0.1281 | 2.0113 | 2.1574 | 19.0 |
| 23.4963 | 17.05 | 11000 | 12.6919 | 2.1748 | 0.1299 | 1.9909 | 2.1229 | 19.0 |
| 23.4963 | 17.21 | 11100 | 12.5660 | 2.3751 | 0.1177 | 2.1417 | 2.326 | 19.0 |
| 23.4963 | 17.36 | 11200 | 12.5866 | 2.6893 | 0.1344 | 2.4378 | 2.6318 | 19.0 |
| 23.4963 | 17.52 | 11300 | 12.5427 | 2.5546 | 0.1411 | 2.3175 | 2.5073 | 19.0 |
| 23.4963 | 17.67 | 11400 | 12.5011 | 2.347 | 0.1223 | 2.1322 | 2.3077 | 19.0 |
| 23.1492 | 17.83 | 11500 | 12.5168 | 2.2304 | 0.1141 | 2.0657 | 2.1951 | 19.0 |
| 23.1492 | 17.98 | 11600 | 12.4043 | 2.4485 | 0.1209 | 2.2548 | 2.4114 | 19.0 |
| 23.1492 | 18.14 | 11700 | 12.4192 | 2.0551 | 0.0887 | 1.8996 | 2.0199 | 19.0 |
| 23.1492 | 18.29 | 11800 | 12.3799 | 2.1076 | 0.0932 | 1.9464 | 2.0589 | 19.0 |
| 23.1492 | 18.45 | 11900 | 12.4263 | 2.4136 | 0.1152 | 2.2172 | 2.357 | 19.0 |
| 22.7005 | 18.6 | 12000 | 12.3218 | 2.1197 | 0.1105 | 1.9997 | 2.0873 | 19.0 |
| 22.7005 | 18.76 | 12100 | 12.3297 | 2.1883 | 0.1102 | 2.0414 | 2.1267 | 19.0 |
| 22.7005 | 18.91 | 12200 | 12.3026 | 1.966 | 0.0954 | 1.8387 | 1.9469 | 19.0 |
| 22.7005 | 19.07 | 12300 | 12.3030 | 2.0179 | 0.0955 | 1.8834 | 1.9858 | 19.0 |
| 22.7005 | 19.22 | 12400 | 12.2478 | 1.9549 | 0.0948 | 1.8437 | 1.9092 | 19.0 |
| 22.3178 | 19.38 | 12500 | 12.1803 | 1.6396 | 0.0648 | 1.5296 | 1.6208 | 19.0 |
| 22.3178 | 19.53 | 12600 | 12.1732 | 1.5568 | 0.0769 | 1.4894 | 1.5387 | 19.0 |
| 22.3178 | 19.69 | 12700 | 12.1342 | 1.6861 | 0.0782 | 1.6105 | 1.666 | 19.0 |
| 22.3178 | 19.84 | 12800 | 12.1313 | 2.023 | 0.0965 | 1.9295 | 2.0072 | 19.0 |
| 22.3178 | 20.0 | 12900 | 12.1315 | 1.5878 | 0.0701 | 1.5153 | 1.5467 | 19.0 |
| 21.8344 | 20.16 | 13000 | 12.0611 | 1.6406 | 0.0637 | 1.5665 | 1.6033 | 19.0 |
| 21.8344 | 20.31 | 13100 | 12.0327 | 1.5913 | 0.0544 | 1.5209 | 1.552 | 19.0 |
| 21.8344 | 20.47 | 13200 | 12.0466 | 1.3618 | 0.0494 | 1.3186 | 1.33 | 19.0 |
| 21.8344 | 20.62 | 13300 | 12.0787 | 1.4445 | 0.0451 | 1.4073 | 1.41 | 19.0 |
| 21.8344 | 20.78 | 13400 | 11.9829 | 1.3465 | 0.0494 | 1.3247 | 1.3167 | 19.0 |
| 21.6309 | 20.93 | 13500 | 11.9072 | 1.4165 | 0.0519 | 1.3761 | 1.3839 | 19.0 |
| 21.6309 | 21.09 | 13600 | 11.9261 | 1.3969 | 0.0502 | 1.3606 | 1.3618 | 19.0 |
| 21.6309 | 21.24 | 13700 | 11.8313 | 1.3337 | 0.0337 | 1.2974 | 1.316 | 19.0 |
| 21.6309 | 21.4 | 13800 | 11.7709 | 1.3045 | 0.0371 | 1.2746 | 1.2889 | 19.0 |
| 21.6309 | 21.55 | 13900 | 11.8402 | 1.6106 | 0.0391 | 1.5678 | 1.5697 | 19.0 |
| 21.2262 | 21.71 | 14000 | 11.7132 | 1.3261 | 0.0222 | 1.296 | 1.3051 | 19.0 |
| 21.2262 | 21.86 | 14100 | 11.7206 | 1.41 | 0.0252 | 1.374 | 1.3985 | 19.0 |
| 21.2262 | 22.02 | 14200 | 11.7033 | 1.6231 | 0.0478 | 1.5632 | 1.5851 | 19.0 |
| 21.2262 | 22.17 | 14300 | 11.7385 | 1.8974 | 0.0618 | 1.8339 | 1.8583 | 19.0 |
| 21.2262 | 22.33 | 14400 | 11.6519 | 1.8998 | 0.0541 | 1.8285 | 1.8552 | 19.0 |
| 20.8055 | 22.48 | 14500 | 11.6039 | 1.9561 | 0.0582 | 1.859 | 1.9073 | 19.0 |
| 20.8055 | 22.64 | 14600 | 11.6322 | 1.7731 | 0.0442 | 1.7061 | 1.7303 | 19.0 |
| 20.8055 | 22.79 | 14700 | 11.6046 | 1.8874 | 0.0618 | 1.8083 | 1.8539 | 19.0 |
| 20.8055 | 22.95 | 14800 | 11.5051 | 1.4271 | 0.016 | 1.3996 | 1.4086 | 19.0 |
| 20.8055 | 23.1 | 14900 | 11.5564 | 1.743 | 0.0451 | 1.6787 | 1.727 | 19.0 |
| 20.6263 | 23.26 | 15000 | 11.5024 | 1.9313 | 0.0575 | 1.8357 | 1.887 | 19.0 |
| 20.6263 | 23.41 | 15100 | 11.5281 | 2.082 | 0.0435 | 1.9865 | 2.0327 | 19.0 |
| 20.6263 | 23.57 | 15200 | 11.4223 | 1.9773 | 0.0332 | 1.9038 | 1.9432 | 19.0 |
| 20.6263 | 23.72 | 15300 | 11.4675 | 1.7845 | 0.0831 | 1.6835 | 1.7414 | 19.0 |
| 20.6263 | 23.88 | 15400 | 11.3882 | 2.1183 | 0.0715 | 1.9965 | 2.0725 | 19.0 |
| 20.3154 | 24.03 | 15500 | 11.4197 | 2.4045 | 0.1336 | 2.2302 | 2.3024 | 19.0 |
| 20.3154 | 24.19 | 15600 | 11.3558 | 1.9596 | 0.1196 | 1.8152 | 1.8748 | 19.0 |
| 20.3154 | 24.34 | 15700 | 11.3438 | 2.0931 | 0.111 | 1.9469 | 1.999 | 19.0 |
| 20.3154 | 24.5 | 15800 | 11.3021 | 2.2159 | 0.1257 | 2.0511 | 2.1345 | 19.0 |
| 20.3154 | 24.65 | 15900 | 11.3178 | 2.093 | 0.132 | 1.9083 | 1.9969 | 19.0 |
| 20.0858 | 24.81 | 16000 | 11.2377 | 1.6589 | 0.1129 | 1.5625 | 1.6245 | 19.0 |
| 20.0858 | 24.96 | 16100 | 11.2058 | 1.6667 | 0.0854 | 1.5597 | 1.6223 | 19.0 |
| 20.0858 | 25.12 | 16200 | 11.1602 | 2.0907 | 0.1219 | 1.9297 | 1.9988 | 19.0 |
| 20.0858 | 25.27 | 16300 | 11.1666 | 1.86 | 0.1092 | 1.7398 | 1.7993 | 19.0 |
| 20.0858 | 25.43 | 16400 | 11.1807 | 1.8879 | 0.1818 | 1.7579 | 1.8335 | 19.0 |
| 19.7588 | 25.58 | 16500 | 11.1310 | 2.0377 | 0.1612 | 1.8653 | 1.9538 | 19.0 |
| 19.7588 | 25.74 | 16600 | 11.1577 | 2.1441 | 0.1767 | 1.9546 | 2.0518 | 19.0 |
| 19.7588 | 25.89 | 16700 | 11.0748 | 1.8679 | 0.1892 | 1.7249 | 1.7822 | 19.0 |
| 19.7588 | 26.05 | 16800 | 11.1048 | 2.2775 | 0.2072 | 2.0566 | 2.1521 | 19.0 |
| 19.7588 | 26.2 | 16900 | 11.0498 | 1.8117 | 0.161 | 1.6879 | 1.7357 | 19.0 |
| 19.4627 | 26.36 | 17000 | 11.0435 | 1.7875 | 0.1627 | 1.6626 | 1.7306 | 19.0 |
| 19.4627 | 26.51 | 17100 | 10.9406 | 1.7333 | 0.1645 | 1.6051 | 1.6671 | 19.0 |
| 19.4627 | 26.67 | 17200 | 10.9242 | 1.596 | 0.1426 | 1.4747 | 1.5341 | 19.0 |
| 19.4627 | 26.82 | 17300 | 10.9571 | 1.9874 | 0.2109 | 1.8109 | 1.9061 | 19.0 |
| 19.4627 | 26.98 | 17400 | 10.9265 | 1.6999 | 0.1353 | 1.5574 | 1.6402 | 19.0 |
| 19.2619 | 27.13 | 17500 | 10.8919 | 1.7543 | 0.1709 | 1.587 | 1.6605 | 19.0 |
| 19.2619 | 27.29 | 17600 | 10.8382 | 2.126 | 0.2056 | 1.8609 | 2.0021 | 19.0 |
| 19.2619 | 27.44 | 17700 | 10.8936 | 1.9626 | 0.1726 | 1.7402 | 1.8665 | 19.0 |
| 19.2619 | 27.6 | 17800 | 10.8565 | 1.7668 | 0.1673 | 1.5914 | 1.7099 | 19.0 |
| 19.2619 | 27.75 | 17900 | 10.9047 | 2.0972 | 0.1867 | 1.8519 | 2.0224 | 19.0 |
| 19.0457 | 27.91 | 18000 | 10.7900 | 2.7761 | 0.2904 | 2.4403 | 2.6936 | 19.0 |
| 19.0457 | 28.06 | 18100 | 10.7191 | 2.3652 | 0.2431 | 2.0989 | 2.2767 | 19.0 |
| 19.0457 | 28.22 | 18200 | 10.7462 | 3.3125 | 0.361 | 2.847 | 3.1506 | 19.0 |
| 19.0457 | 28.37 | 18300 | 10.7721 | 2.9247 | 0.3 | 2.5443 | 2.806 | 19.0 |
| 19.0457 | 28.53 | 18400 | 10.7208 | 2.5398 | 0.2812 | 2.2211 | 2.4312 | 19.0 |
| 18.8301 | 28.68 | 18500 | 10.6708 | 2.5902 | 0.281 | 2.2765 | 2.4881 | 19.0 |
| 18.8301 | 28.84 | 18600 | 10.7220 | 2.276 | 0.2061 | 1.9904 | 2.1922 | 19.0 |
| 18.8301 | 28.99 | 18700 | 10.6855 | 2.8678 | 0.3496 | 2.52 | 2.751 | 19.0 |
| 18.8301 | 29.15 | 18800 | 10.6550 | 2.5232 | 0.2724 | 2.2108 | 2.4314 | 19.0 |
| 18.8301 | 29.3 | 18900 | 10.6488 | 2.5629 | 0.2203 | 2.2361 | 2.4261 | 19.0 |
| 18.5872 | 29.46 | 19000 | 10.6123 | 2.5052 | 0.1923 | 2.1381 | 2.3821 | 19.0 |
| 18.5872 | 29.61 | 19100 | 10.6105 | 3.7779 | 0.3653 | 3.2404 | 3.5759 | 19.0 |
| 18.5872 | 29.77 | 19200 | 10.5823 | 3.8282 | 0.3743 | 3.2645 | 3.6077 | 19.0 |
| 18.5872 | 29.92 | 19300 | 10.5606 | 3.0976 | 0.277 | 2.6041 | 2.8838 | 19.0 |
| 18.5872 | 30.08 | 19400 | 10.5846 | 3.638 | 0.3482 | 3.0804 | 3.4294 | 19.0 |
| 18.2839 | 30.23 | 19500 | 10.4722 | 2.6173 | 0.2326 | 2.2268 | 2.4656 | 19.0 |
| 18.2839 | 30.39 | 19600 | 10.5211 | 3.5085 | 0.3377 | 2.9751 | 3.2889 | 19.0 |
| 18.2839 | 30.54 | 19700 | 10.4735 | 2.4781 | 0.2097 | 2.1099 | 2.3338 | 19.0 |
| 18.2839 | 30.7 | 19800 | 10.4545 | 3.1459 | 0.3022 | 2.6844 | 2.9559 | 19.0 |
| 18.2839 | 30.85 | 19900 | 10.4525 | 3.6095 | 0.3637 | 3.0873 | 3.3886 | 19.0 |
| 18.1352 | 31.01 | 20000 | 10.4409 | 4.0556 | 0.4621 | 3.3857 | 3.7778 | 19.0 |
| 18.1352 | 31.16 | 20100 | 10.4132 | 3.8346 | 0.3863 | 3.2323 | 3.6266 | 19.0 |
| 18.1352 | 31.32 | 20200 | 10.4468 | 2.3736 | 0.1977 | 2.0195 | 2.236 | 19.0 |
| 18.1352 | 31.47 | 20300 | 10.3896 | 3.6954 | 0.3512 | 3.1402 | 3.4667 | 19.0 |
| 18.1352 | 31.63 | 20400 | 10.3546 | 3.5158 | 0.3558 | 3.0575 | 3.3116 | 19.0 |
| 17.9834 | 31.78 | 20500 | 10.3632 | 3.179 | 0.3374 | 2.7634 | 2.9846 | 19.0 |
| 17.9834 | 31.94 | 20600 | 10.3168 | 3.9121 | 0.4012 | 3.3812 | 3.687 | 19.0 |
| 17.9834 | 32.09 | 20700 | 10.2772 | 3.6148 | 0.3667 | 3.1059 | 3.3541 | 19.0 |
| 17.9834 | 32.25 | 20800 | 10.3173 | 3.1448 | 0.2924 | 2.6948 | 2.9338 | 19.0 |
| 17.9834 | 32.4 | 20900 | 10.2154 | 2.4611 | 0.1922 | 2.1597 | 2.3288 | 19.0 |
| 17.6192 | 32.56 | 21000 | 10.2957 | 3.3177 | 0.3762 | 2.8085 | 3.0595 | 19.0 |
| 17.6192 | 32.71 | 21100 | 10.2064 | 3.4663 | 0.3819 | 3.0229 | 3.2201 | 19.0 |
| 17.6192 | 32.87 | 21200 | 10.2235 | 3.245 | 0.3179 | 2.7618 | 3.0066 | 19.0 |
| 17.6192 | 33.02 | 21300 | 10.2193 | 2.5572 | 0.2775 | 2.216 | 2.3892 | 19.0 |
| 17.6192 | 33.18 | 21400 | 10.2467 | 3.4873 | 0.3934 | 3.02 | 3.2701 | 19.0 |
| 17.5532 | 33.33 | 21500 | 10.2378 | 2.8087 | 0.3049 | 2.4001 | 2.6218 | 19.0 |
| 17.5532 | 33.49 | 21600 | 10.2086 | 3.8967 | 0.4801 | 3.3678 | 3.603 | 19.0 |
| 17.5532 | 33.64 | 21700 | 10.2384 | 2.6534 | 0.3239 | 2.3276 | 2.4692 | 19.0 |
| 17.5532 | 33.8 | 21800 | 10.1929 | 2.6025 | 0.2845 | 2.2653 | 2.4507 | 19.0 |
| 17.5532 | 33.95 | 21900 | 10.1016 | 3.3244 | 0.377 | 2.8311 | 3.0784 | 19.0 |
| 17.3872 | 34.11 | 22000 | 10.1407 | 3.4245 | 0.4024 | 3.044 | 3.1865 | 19.0 |
| 17.3872 | 34.26 | 22100 | 10.0760 | 3.9251 | 0.4272 | 3.4064 | 3.6497 | 19.0 |
| 17.3872 | 34.42 | 22200 | 10.0998 | 3.3034 | 0.3438 | 2.8977 | 3.1141 | 19.0 |
| 17.3872 | 34.57 | 22300 | 10.0834 | 2.4967 | 0.266 | 2.2301 | 2.3647 | 19.0 |
| 17.3872 | 34.73 | 22400 | 9.9902 | 4.0828 | 0.4867 | 3.5482 | 3.7861 | 19.0 |
| 17.1744 | 34.88 | 22500 | 10.0366 | 3.5772 | 0.4377 | 3.1153 | 3.3199 | 19.0 |
| 17.1744 | 35.04 | 22600 | 10.0299 | 3.5342 | 0.433 | 3.0501 | 3.2176 | 19.0 |
| 17.1744 | 35.19 | 22700 | 9.9912 | 3.7754 | 0.4445 | 3.3191 | 3.502 | 19.0 |
| 17.1744 | 35.35 | 22800 | 9.9580 | 4.5086 | 0.5514 | 3.8986 | 4.1987 | 19.0 |
| 17.1744 | 35.5 | 22900 | 9.9676 | 3.526 | 0.3942 | 3.0859 | 3.3082 | 19.0 |
| 17.0687 | 35.66 | 23000 | 9.9874 | 3.7058 | 0.5139 | 3.2353 | 3.4611 | 19.0 |
| 17.0687 | 35.81 | 23100 | 9.9536 | 3.6588 | 0.4552 | 3.1591 | 3.3554 | 19.0 |
| 17.0687 | 35.97 | 23200 | 9.8948 | 3.6279 | 0.3933 | 3.1403 | 3.3426 | 19.0 |
| 17.0687 | 36.12 | 23300 | 9.8397 | 3.8101 | 0.4971 | 3.3152 | 3.5133 | 19.0 |
| 17.0687 | 36.28 | 23400 | 9.8995 | 3.3201 | 0.4209 | 2.9101 | 3.0903 | 19.0 |
| 16.7686 | 36.43 | 23500 | 9.9085 | 4.0108 | 0.6389 | 3.5055 | 3.7286 | 19.0 |
| 16.7686 | 36.59 | 23600 | 9.8688 | 3.6051 | 0.5164 | 3.1651 | 3.3781 | 19.0 |
| 16.7686 | 36.74 | 23700 | 9.8673 | 4.4987 | 0.6051 | 3.8789 | 4.1868 | 19.0 |
| 16.7686 | 36.9 | 23800 | 9.8848 | 3.6926 | 0.5635 | 3.1681 | 3.3902 | 19.0 |
| 16.7686 | 37.05 | 23900 | 9.8497 | 3.518 | 0.4283 | 3.1159 | 3.3112 | 19.0 |
| 16.7432 | 37.21 | 24000 | 9.8044 | 3.3369 | 0.3772 | 2.9784 | 3.147 | 19.0 |
| 16.7432 | 37.36 | 24100 | 9.7768 | 3.5862 | 0.3819 | 3.1273 | 3.3535 | 19.0 |
| 16.7432 | 37.52 | 24200 | 9.7536 | 4.1823 | 0.5884 | 3.645 | 3.8843 | 19.0 |
| 16.7432 | 37.67 | 24300 | 9.7953 | 4.3981 | 0.6441 | 3.7941 | 4.0623 | 19.0 |
| 16.7432 | 37.83 | 24400 | 9.6742 | 3.7833 | 0.4755 | 3.3516 | 3.5543 | 19.0 |
| 16.5714 | 37.98 | 24500 | 9.7946 | 3.3839 | 0.495 | 3.0021 | 3.156 | 19.0 |
| 16.5714 | 38.14 | 24600 | 9.7544 | 4.3873 | 0.6486 | 3.8188 | 4.0653 | 19.0 |
| 16.5714 | 38.29 | 24700 | 9.7586 | 3.4403 | 0.4756 | 3.0402 | 3.2405 | 19.0 |
| 16.5714 | 38.45 | 24800 | 9.7895 | 3.6822 | 0.6247 | 3.2612 | 3.4746 | 19.0 |
| 16.5714 | 38.6 | 24900 | 9.6964 | 3.8743 | 0.6209 | 3.4159 | 3.6051 | 19.0 |
| 16.3393 | 38.76 | 25000 | 9.7190 | 4.1508 | 0.635 | 3.5925 | 3.8753 | 19.0 |
| 16.3393 | 38.91 | 25100 | 9.6435 | 3.6755 | 0.4777 | 3.268 | 3.4572 | 19.0 |
| 16.3393 | 39.07 | 25200 | 9.6390 | 2.9478 | 0.4049 | 2.6531 | 2.7782 | 19.0 |
| 16.3393 | 39.22 | 25300 | 9.6300 | 2.9973 | 0.3897 | 2.6662 | 2.7943 | 19.0 |
| 16.3393 | 39.38 | 25400 | 9.6229 | 3.6726 | 0.4182 | 3.2207 | 3.4595 | 19.0 |
| 16.3076 | 39.53 | 25500 | 9.6392 | 2.9691 | 0.3692 | 2.6709 | 2.8182 | 19.0 |
| 16.3076 | 39.69 | 25600 | 9.5978 | 2.8167 | 0.3437 | 2.593 | 2.7155 | 19.0 |
| 16.3076 | 39.84 | 25700 | 9.6111 | 3.5135 | 0.5453 | 3.1415 | 3.3042 | 19.0 |
| 16.3076 | 40.0 | 25800 | 9.6118 | 3.459 | 0.4963 | 3.1351 | 3.2809 | 19.0 |
| 16.3076 | 40.16 | 25900 | 9.5994 | 3.5735 | 0.539 | 3.2556 | 3.3904 | 19.0 |
| 16.0684 | 40.31 | 26000 | 9.5526 | 3.3388 | 0.4689 | 2.9753 | 3.1562 | 19.0 |
| 16.0684 | 40.47 | 26100 | 9.5365 | 3.0882 | 0.392 | 2.8072 | 2.9556 | 19.0 |
| 16.0684 | 40.62 | 26200 | 9.5571 | 3.0022 | 0.4109 | 2.7108 | 2.8575 | 19.0 |
| 16.0684 | 40.78 | 26300 | 9.5240 | 3.506 | 0.5734 | 3.1577 | 3.3378 | 19.0 |
| 16.0684 | 40.93 | 26400 | 9.4913 | 3.5936 | 0.5165 | 3.2452 | 3.4134 | 19.0 |
| 15.9425 | 41.09 | 26500 | 9.5297 | 3.7802 | 0.6862 | 3.4061 | 3.5436 | 19.0 |
| 15.9425 | 41.24 | 26600 | 9.4657 | 3.8433 | 0.6105 | 3.4621 | 3.638 | 19.0 |
| 15.9425 | 41.4 | 26700 | 9.5049 | 3.5822 | 0.6462 | 3.231 | 3.3745 | 19.0 |
| 15.9425 | 41.55 | 26800 | 9.4739 | 2.9668 | 0.4426 | 2.7345 | 2.8134 | 19.0 |
| 15.9425 | 41.71 | 26900 | 9.4868 | 3.7458 | 0.6934 | 3.3708 | 3.5492 | 19.0 |
| 15.7779 | 41.86 | 27000 | 9.4683 | 3.5254 | 0.6006 | 3.1629 | 3.3011 | 19.0 |
| 15.7779 | 42.02 | 27100 | 9.4108 | 4.2731 | 0.7412 | 3.8236 | 4.0171 | 19.0 |
| 15.7779 | 42.17 | 27200 | 9.3994 | 3.5014 | 0.5738 | 3.1525 | 3.3306 | 19.0 |
| 15.7779 | 42.33 | 27300 | 9.3760 | 3.4929 | 0.4954 | 3.1402 | 3.3028 | 19.0 |
| 15.7779 | 42.48 | 27400 | 9.4201 | 4.2777 | 0.7152 | 3.7943 | 4.0349 | 19.0 |
| 15.7238 | 42.64 | 27500 | 9.3913 | 3.6489 | 0.6371 | 3.2903 | 3.4528 | 19.0 |
| 15.7238 | 42.79 | 27600 | 9.4269 | 3.5269 | 0.6042 | 3.2049 | 3.3528 | 19.0 |
| 15.7238 | 42.95 | 27700 | 9.3847 | 3.4735 | 0.5963 | 3.1522 | 3.2796 | 19.0 |
| 15.7238 | 43.1 | 27800 | 9.3474 | 3.8327 | 0.6428 | 3.406 | 3.5698 | 19.0 |
| 15.7238 | 43.26 | 27900 | 9.3293 | 3.5475 | 0.6313 | 3.1725 | 3.3367 | 19.0 |
| 15.5108 | 43.41 | 28000 | 9.3802 | 4.249 | 0.7997 | 3.7924 | 3.9849 | 19.0 |
| 15.5108 | 43.57 | 28100 | 9.2588 | 3.4476 | 0.4676 | 3.1758 | 3.2993 | 19.0 |
| 15.5108 | 43.72 | 28200 | 9.3447 | 4.0267 | 0.7081 | 3.6208 | 3.7957 | 19.0 |
| 15.5108 | 43.88 | 28300 | 9.2853 | 4.0105 | 0.7799 | 3.5848 | 3.7619 | 19.0 |
| 15.5108 | 44.03 | 28400 | 9.2753 | 3.1833 | 0.4678 | 2.9068 | 3.0168 | 19.0 |
| 15.4004 | 44.19 | 28500 | 9.2345 | 3.6778 | 0.5955 | 3.3212 | 3.4724 | 19.0 |
| 15.4004 | 44.34 | 28600 | 9.3130 | 3.9958 | 0.6892 | 3.5871 | 3.772 | 19.0 |
| 15.4004 | 44.5 | 28700 | 9.2984 | 4.1868 | 0.696 | 3.7194 | 3.9197 | 19.0 |
| 15.4004 | 44.65 | 28800 | 9.2722 | 3.8848 | 0.5914 | 3.5038 | 3.7022 | 19.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
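As a rough usage sketch (not included in the card), the checkpoint can presumably be loaded through the standard seq2seq classes; the input text and generation settings below are placeholders:
```python
# Hypothetical sketch: load the fine-tuned LongT5 checkpoint and generate a short output.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "saekomdalkom/long-t5-local-base-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "..."  # placeholder for a long input document
inputs = tokenizer(text, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=19)  # card reports Gen Len = 19.0
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```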
|
jonatasgrosman/exp_w2v2t_it_wav2vec2_s211 | jonatasgrosman | 2022-07-08T18:47:36Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T18:46:52Z | ---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_wav2vec2_s211
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_it_wav2vec2_s692 | jonatasgrosman | 2022-07-08T18:40:27Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T18:39:44Z | ---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_wav2vec2_s692
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_vp-it_s621 | jonatasgrosman | 2022-07-08T18:34:01Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T18:33:37Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_vp-it_s621
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_vp-it_s73 | jonatasgrosman | 2022-07-08T18:24:40Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T18:24:16Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_vp-it_s73
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_r-wav2vec2_s303 | jonatasgrosman | 2022-07-08T18:20:50Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T18:20:08Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_r-wav2vec2_s303
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_r-wav2vec2_s201 | jonatasgrosman | 2022-07-08T18:16:03Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T18:15:40Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_r-wav2vec2_s201
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_r-wav2vec2_s911 | jonatasgrosman | 2022-07-08T18:12:55Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T18:12:33Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_r-wav2vec2_s911
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_xls-r_s42 | jonatasgrosman | 2022-07-08T18:06:10Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T18:05:28Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_xls-r_s42
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_unispeech-sat_s635 | jonatasgrosman | 2022-07-08T17:58:13Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T17:57:49Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_unispeech-sat_s635
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
ndhieunguyen/ppo-SpaceInvadersNoFrameskip-v4 | ndhieunguyen | 2022-07-08T17:57:39Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-04-26T15:34:31Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 688.00 +/- 388.59
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga infinitejoy -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga infinitejoy
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
jonatasgrosman/exp_w2v2t_ja_unispeech-sat_s884 | jonatasgrosman | 2022-07-08T17:51:35Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T17:51:10Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_unispeech-sat_s884
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_vp-nl_s770 | jonatasgrosman | 2022-07-08T17:48:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T17:47:43Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_vp-nl_s770
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_vp-nl_s287 | jonatasgrosman | 2022-07-08T17:44:15Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T17:43:51Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_vp-nl_s287
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_vp-nl_s682 | jonatasgrosman | 2022-07-08T17:41:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T17:40:45Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_vp-nl_s682
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_vp-es_s673 | jonatasgrosman | 2022-07-08T17:33:40Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T17:33:01Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_vp-es_s673
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_vp-fr_s543 | jonatasgrosman | 2022-07-08T17:26:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T17:26:03Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_vp-fr_s543
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_vp-fr_s458 | jonatasgrosman | 2022-07-08T17:22:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T17:22:19Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_vp-fr_s458
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_vp-fr_s368 | jonatasgrosman | 2022-07-08T17:18:18Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T17:17:49Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_vp-fr_s368
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
infinitejoy/dqn-SpaceInvadersNoFrameskip-v4 | infinitejoy | 2022-07-08T17:14:02Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-08T17:05:23Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 688.00 +/- 388.59
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga infinitejoy -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga infinitejoy
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
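Outside the RL Zoo workflow shown above, the checkpoint can also be pulled and loaded directly; a minimal sketch assuming the `huggingface_sb3` helper and the usual RL Zoo file-name convention:
```python
# Hypothetical sketch: download the SB3 checkpoint from the Hub and load it.
# The file name follows the usual "<algo>-<env>.zip" RL Zoo convention (an assumption).
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="infinitejoy/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
print(model.policy)  # evaluation would additionally need the wrapped Atari env
```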
|
jonatasgrosman/exp_w2v2t_ja_unispeech-ml_s886 | jonatasgrosman | 2022-07-08T17:06:46Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T17:06:23Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_unispeech-ml_s886
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_wavlm_s664 | jonatasgrosman | 2022-07-08T17:03:01Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T17:02:32Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_wavlm_s664
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_vp-sv_s570 | jonatasgrosman | 2022-07-08T16:43:15Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T16:42:50Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_vp-sv_s570
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Nitika/distilbert-base-uncased-finetuned-squad-d5716d28 | Nitika | 2022-07-08T16:36:38Z | 0 | 0 | null | [
"pytorch",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"region:us"
]
| question-answering | 2022-07-08T16:36:27Z | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
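For completeness, a hedged sketch (not part of the original card) of querying the model through the question-answering pipeline, assuming the checkpoint loads with the standard API; the question/context pair is invented:
```python
# Hypothetical usage sketch; the question and context below are made up.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Nitika/distilbert-base-uncased-finetuned-squad-d5716d28",
)
result = qa(
    question="What acts as the teacher in the second distillation step?",
    context="A BERT model fine-tuned on SQuAD v1.1 acts as the teacher for the student.",
)
print(result["answer"], result["score"])
```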
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
jonatasgrosman/exp_w2v2t_ja_hubert_s732 | jonatasgrosman | 2022-07-08T16:24:39Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T16:24:16Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_hubert_s732
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_unispeech_s947 | jonatasgrosman | 2022-07-08T16:21:33Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T16:21:10Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_unispeech_s947
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_unispeech_s253 | jonatasgrosman | 2022-07-08T16:18:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T16:17:38Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_unispeech_s253
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_unispeech_s569 | jonatasgrosman | 2022-07-08T16:14:48Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T16:14:24Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_unispeech_s569
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_xlsr-53_s781 | jonatasgrosman | 2022-07-08T16:08:42Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T16:08:19Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_xlsr-53_s781
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_wav2vec2_s727 | jonatasgrosman | 2022-07-08T15:53:02Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T15:52:34Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_wav2vec2_s727
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_wav2vec2_s895 | jonatasgrosman | 2022-07-08T15:49:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T15:49:04Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_wav2vec2_s895
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ja_wav2vec2_s834 | jonatasgrosman | 2022-07-08T15:46:20Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T15:45:39Z | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_wav2vec2_s834
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Rocketknight1/bert-dummy-seq | Rocketknight1 | 2022-07-08T15:45:02Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-08T15:18:33Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-dummy-seq
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-dummy-seq
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- TensorFlow 2.9.1
- Datasets 2.3.3.dev0
- Tokenizers 0.11.0
|
jonatasgrosman/exp_w2v2t_th_vp-it_s819 | jonatasgrosman | 2022-07-08T15:31:43Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T15:31:17Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_vp-it_s819
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_vp-it_s259 | jonatasgrosman | 2022-07-08T15:28:35Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T15:28:09Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_vp-it_s259
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
tfshaman/distilbert-base-uncased-distilled-clinc | tfshaman | 2022-07-08T15:19:17Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-08T14:52:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.8264516129032258
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5565
- Accuracy: 0.8265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.2743 | 1.0 | 318 | 2.5809 | 0.7310 |
| 2.2148 | 2.0 | 636 | 1.7909 | 0.8071 |
| 1.7065 | 3.0 | 954 | 1.5565 | 0.8265 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
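A minimal inference sketch with the `pipeline` API (the example utterance is made up):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="tfshaman/distilbert-base-uncased-distilled-clinc")
print(classifier("Please transfer 100 dollars from my savings to my checking account"))
```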
|
jonatasgrosman/exp_w2v2t_th_r-wav2vec2_s805 | jonatasgrosman | 2022-07-08T15:18:14Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T15:17:48Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_r-wav2vec2_s805
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_xls-r_s590 | jonatasgrosman | 2022-07-08T15:14:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T15:14:26Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_xls-r_s590
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_xls-r_s879 | jonatasgrosman | 2022-07-08T15:11:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T15:11:20Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_xls-r_s879
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_xls-r_s625 | jonatasgrosman | 2022-07-08T15:07:51Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T15:07:26Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_xls-r_s625
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
dminiotas05/distilbert-base-uncased-finetuned-ft650_10class | dminiotas05 | 2022-07-08T14:58:07Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-08T14:33:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-ft650_10class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft650_10class
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9674
- Accuracy: 0.2207
- F1: 0.2002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
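As a rough illustration, these settings map onto the standard `Trainer` API along the following lines; the model and dataset variables in the commented lines are placeholders, not the actual training script.
```python
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ft650_10class",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    evaluation_strategy="epoch",
)
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   compute_metrics=compute_metrics)
# trainer.train()
```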
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 2.1088 | 1.0 | 188 | 2.0460 | 0.1807 | 0.1324 |
| 1.9628 | 2.0 | 376 | 1.9867 | 0.2173 | 0.1821 |
| 1.8966 | 3.0 | 564 | 1.9693 | 0.2193 | 0.1936 |
| 1.8399 | 4.0 | 752 | 1.9674 | 0.2207 | 0.2002 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2t_th_unispeech-sat_s658 | jonatasgrosman | 2022-07-08T14:57:17Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T14:56:29Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_unispeech-sat_s658
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_vp-nl_s947 | jonatasgrosman | 2022-07-08T14:53:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T14:52:36Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_vp-nl_s947
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
micheljperez/dqn-SpaceInvadersNoFrameskip-v4-2 | micheljperez | 2022-07-08T14:47:15Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-05T13:17:28Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 1092.00 +/- 250.80
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga micheljperez -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga micheljperez
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.07817),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.3571),
('learning_starts', 100000),
('n_timesteps', 10000000),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Zengwei/icefall-asr-librispeech-conv-emformer-transducer-stateless2-larger-latency-2022-07-06 | Zengwei | 2022-07-08T14:40:32Z | 0 | 0 | null | [
"tensorboard",
"region:us"
]
| null | 2022-07-06T08:44:45Z | # Introduction
See https://github.com/k2-fsa/icefall/pull/440
This model uses the following setup:
* length of chunk is 64 frames (i.e., 0.64s)
* length of right context is 16 frames (i.e., 0.16s)
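For orientation, these numbers imply a 10 ms frame shift, so a back-of-the-envelope estimate of the algorithmic look-ahead (ignoring compute time) is simply chunk length plus right context:
```python
frame_shift_s = 0.01                         # 10 ms per frame, implied by 64 frames == 0.64 s
chunk_frames, right_context_frames = 64, 16
print((chunk_frames + right_context_frames) * frame_shift_s)  # 0.8 s of look-ahead
```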
|
Nonzerophilip/bert-finetuned-ner_swedish_small_set_health_and_prices | Nonzerophilip | 2022-07-08T14:01:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-07-08T10:53:18Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner_swedish_small_set_health_and_prices
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner_swedish_small_set_health_and_prices
This model is a fine-tuned version of [KBLab/bert-base-swedish-cased-ner](https://huggingface.co/KBLab/bert-base-swedish-cased-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0942
- Precision: 0.7709
- Recall: 0.8118
- F1: 0.7908
- Accuracy: 0.9741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 250 | 0.1310 | 0.6116 | 0.7471 | 0.6726 | 0.9578 |
| 0.1583 | 2.0 | 500 | 0.0939 | 0.7560 | 0.8020 | 0.7783 | 0.9737 |
| 0.1583 | 3.0 | 750 | 0.0942 | 0.7709 | 0.8118 | 0.7908 | 0.9741 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.7.1
- Datasets 2.2.2
- Tokenizers 0.12.1
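A minimal inference sketch with the `pipeline` API (the Swedish example sentence is made up):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="Nonzerophilip/bert-finetuned-ner_swedish_small_set_health_and_prices",
               aggregation_strategy="simple")
print(ner("Anna Svensson besökte vårdcentralen i Uppsala i april."))
```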
|
malteos/gpt2-xl-german-covid-19 | malteos | 2022-07-08T13:48:32Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-08T13:14:23Z | ---
license: mit
language: de
widget:
- text: "Noch Wochen nach einer Erkrankung an COVID-19 können "
---
# German Covid-19 GPT2-XL (1.5B)
- Covid-19 specific version of [`malteos/gpt2-xl-wechsel-german`](https://huggingface.co/malteos/gpt2-xl-wechsel-german)
- Fine-tuned on 2 GB of text from OSCAR filtered for COVID-related terms.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='malteos/gpt2-xl-german-covid-19')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
```
## License
MIT |
jonatasgrosman/exp_w2v2t_th_vp-es_s26 | jonatasgrosman | 2022-07-08T13:44:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T12:25:24Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_vp-es_s26
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Guillaume63/Reinforce-cartpole | Guillaume63 | 2022-07-08T13:06:18Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-06T12:59:13Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
ramonzaca/testpyramidsrnd | ramonzaca | 2022-07-08T12:16:14Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2022-07-08T12:16:09Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: ramonzaca/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
dminiotas05/distilbert-base-uncased-finetuned-ft650_6class | dminiotas05 | 2022-07-08T12:11:05Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-08T11:46:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-ft650_6class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft650_6class
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4555
- Accuracy: 0.3707
- F1: 0.3625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.5838 | 1.0 | 188 | 1.5235 | 0.3253 | 0.2947 |
| 1.4521 | 2.0 | 376 | 1.4744 | 0.3467 | 0.3234 |
| 1.3838 | 3.0 | 564 | 1.4565 | 0.358 | 0.3483 |
| 1.323 | 4.0 | 752 | 1.4555 | 0.3707 | 0.3625 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jourlin/wiki2json | jourlin | 2022-07-08T11:46:44Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-07-08T06:58:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: wiki2json
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 4.8968
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wiki2json
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6848
- Bleu: 4.8968
- Gen Len: 17.6362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 1.9187 | 1.0 | 3178 | 1.6848 | 4.8968 | 17.6362 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2t_th_unispeech-ml_s256 | jonatasgrosman | 2022-07-08T11:28:09Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T11:27:41Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_unispeech-ml_s256
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_wavlm_s904 | jonatasgrosman | 2022-07-08T11:25:12Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T11:23:56Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_wavlm_s904
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_wavlm_s847 | jonatasgrosman | 2022-07-08T11:21:22Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T11:20:15Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_wavlm_s847
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_vp-sv_s884 | jonatasgrosman | 2022-07-08T11:04:20Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T11:03:49Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_vp-sv_s884
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_vp-sv_s635 | jonatasgrosman | 2022-07-08T11:01:14Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T11:00:49Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_vp-sv_s635
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_vp-sv_s946 | jonatasgrosman | 2022-07-08T10:58:13Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T10:57:48Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_vp-sv_s946
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_hubert_s533 | jonatasgrosman | 2022-07-08T10:52:20Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T10:51:52Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_hubert_s533
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_unispeech_s131 | jonatasgrosman | 2022-07-08T10:45:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T10:45:06Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_unispeech_s131
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_unispeech_s328 | jonatasgrosman | 2022-07-08T10:39:17Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T10:38:31Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_unispeech_s328
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_xlsr-53_s201 | jonatasgrosman | 2022-07-08T10:31:19Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T10:30:49Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_xlsr-53_s201
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_xlsr-53_s711 | jonatasgrosman | 2022-07-08T10:27:24Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T10:26:54Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_xlsr-53_s711
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_vp-100k_s630 | jonatasgrosman | 2022-07-08T10:24:25Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T10:23:54Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_vp-100k_s630
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_vp-100k_s497 | jonatasgrosman | 2022-07-08T10:21:24Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T10:20:58Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_vp-100k_s497
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_th_wav2vec2_s664 | jonatasgrosman | 2022-07-08T10:06:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T10:06:28Z | ---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_th_wav2vec2_s664
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
swtx/ernie-gram-chinese | swtx | 2022-07-08T09:44:33Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2010.12148",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-07-08T09:09:46Z | ---
language: chinese
---
# ERNIE-Gram-chinese
## Introduction
ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding
More details: https://arxiv.org/abs/2010.12148
## Released Model Info
|Model Name|Language|Model Structure|
|:---:|:---:|:---:|
|ernie-gram-chinese| Chinese |Layer:12, Hidden:768, Heads:12|
This released PyTorch model was converted from the officially released PaddlePaddle ERNIE model, and a series of experiments were conducted to check the accuracy of the conversion.
- Official PaddlePaddle ERNIE repo: https://github.com/PaddlePaddle/ERNIE
- Pytorch Conversion repo: https://github.com/nghuyong/ERNIE-Pytorch
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("swtx/ernie-gram-chinese")
model = AutoModel.from_pretrained("swtx/ernie-gram-chinese")
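# Example usage (a minimal sketch; the input sentence is only illustrative):
inputs = tokenizer("百度是一家高科技公司", return_tensors="pt")
outputs = model(**inputs)  # outputs.last_hidden_state has shape (1, seq_len, 768)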
``` |
jonatasgrosman/exp_w2v2t_en_r-wav2vec2_s44 | jonatasgrosman | 2022-07-08T09:36:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T09:35:33Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_r-wav2vec2_s44
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_en_r-wav2vec2_s93 | jonatasgrosman | 2022-07-08T09:28:53Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T09:28:09Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_r-wav2vec2_s93
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
epsil/testpyramidsrnd | epsil | 2022-07-08T09:27:21Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2022-07-08T09:27:16Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: epsil/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jonatasgrosman/exp_w2v2t_en_r-wav2vec2_s863 | jonatasgrosman | 2022-07-08T09:19:20Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T09:18:31Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_r-wav2vec2_s863
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_en_unispeech-sat_s459 | jonatasgrosman | 2022-07-08T08:46:57Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T08:46:09Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_unispeech-sat_s459
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_en_unispeech-sat_s456 | jonatasgrosman | 2022-07-08T08:26:50Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T08:26:01Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_unispeech-sat_s456
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Shenghao1993/distilbert-base-uncased-finetuned-clinc | Shenghao1993 | 2022-07-08T08:22:36Z | 23 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-06T15:20:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7711
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2830 | 0.7426 |
| 3.785 | 2.0 | 636 | 1.8728 | 0.8410 |
| 3.785 | 3.0 | 954 | 1.1555 | 0.8913 |
| 1.6902 | 4.0 | 1272 | 0.8530 | 0.9126 |
| 0.901 | 5.0 | 1590 | 0.7711 | 0.9174 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2t_en_vp-nl_s281 | jonatasgrosman | 2022-07-08T08:09:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T08:08:43Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_vp-nl_s281
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_en_vp-es_s186 | jonatasgrosman | 2022-07-08T07:54:17Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T07:53:28Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_vp-es_s186
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_en_vp-es_s474 | jonatasgrosman | 2022-07-08T07:45:27Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T07:44:40Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_vp-es_s474
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
ClassCat/roberta-base-french | ClassCat | 2022-07-08T07:34:58Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"fr",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-07-04T17:58:21Z | ---
language: fr
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
widget:
- text: "Je vais à la <mask>."
- text: "J'aime le <mask>."
- text: "J'ai ouvert la <mask>."
- text: "Je m'appelle <mask>."
- text: "J'ai beaucoup d'<mask>."
---
## RoBERTa French base model (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses the RoBERTa base settings except for the vocabulary size.
### Tokenizer
Uses a BPE tokenizer with a vocabulary size of 50,000.
### Training Data
* [wiki40b/fr](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bfr) (French Wikipedia)
* Subset of [CC-100/fr](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
### Usage
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ClassCat/roberta-base-french')
unmasker("Je vais à la <mask>.")
``` |
jonatasgrosman/exp_w2v2t_en_vp-fr_s691 | jonatasgrosman | 2022-07-08T07:20:48Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T07:20:01Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_vp-fr_s691
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_en_unispeech-ml_s377 | jonatasgrosman | 2022-07-08T06:52:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T06:52:07Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_unispeech-ml_s377
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_en_wavlm_s990 | jonatasgrosman | 2022-07-08T06:48:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T06:47:43Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_wavlm_s990
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_en_wavlm_s461 | jonatasgrosman | 2022-07-08T06:40:13Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T06:39:25Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_wavlm_s461
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_en_wavlm_s767 | jonatasgrosman | 2022-07-08T06:33:36Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T06:32:43Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_wavlm_s767
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_en_no-pretraining_s883 | jonatasgrosman | 2022-07-08T06:16:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T06:16:14Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_no-pretraining_s883
Fine-tuned a randomly initialized wav2vec2 model for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_en_vp-sv_s438 | jonatasgrosman | 2022-07-08T06:11:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T06:11:10Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_vp-sv_s438
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_en_hubert_s875 | jonatasgrosman | 2022-07-08T05:46:21Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T05:45:44Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_hubert_s875
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_en_unispeech_s227 | jonatasgrosman | 2022-07-08T05:36:00Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T05:35:18Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_unispeech_s227
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
phyous/q-Taxi-v3-2 | phyous | 2022-07-08T05:32:54Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-08T05:27:28Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-2
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are the helper functions from the Hugging Face
# Deep RL course notebook and are assumed to be defined in this environment.
model = load_from_hub(repo_id="phyous/q-Taxi-v3-2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
jonatasgrosman/exp_w2v2t_en_unispeech_s870 | jonatasgrosman | 2022-07-08T05:31:32Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T05:30:42Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_unispeech_s870
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
phyous/q-Taxi-v3 | phyous | 2022-07-08T05:12:47Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-08T05:12:07Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.44 +/- 2.74
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are the helper functions from the Hugging Face
# Deep RL course notebook and are assumed to be defined in this environment.
model = load_from_hub(repo_id="phyous/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
jonatasgrosman/exp_w2v2t_en_xlsr-53_s870 | jonatasgrosman | 2022-07-08T05:07:22Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T05:06:55Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_xlsr-53_s870
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
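If you prefer the plain `transformers` pipeline over HuggingSound, a minimal sketch is shown below; the file name is a placeholder, and it assumes ffmpeg is available for decoding and that the audio is (or is resampled to) 16kHz.

```python
from transformers import pipeline

# The ASR pipeline wraps feature extraction and CTC decoding for this wav2vec2 checkpoint.
asr = pipeline("automatic-speech-recognition", model="jonatasgrosman/exp_w2v2t_en_xlsr-53_s870")

# "sample.wav" is a placeholder path to a 16kHz recording.
print(asr("sample.wav")["text"])
```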
|
ChauNguyen23/phobert-base-finetuned-imdb | ChauNguyen23 | 2022-07-08T05:03:20Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-07-08T04:47:50Z | ---
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: phobert-base-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert-base-finetuned-imdb
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3266 | 1.0 | 157 | 2.7949 |
| 2.9162 | 2.0 | 314 | 2.6515 |
| 2.8177 | 3.0 | 471 | 2.6452 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
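Since this checkpoint continues masked-language-model training on imdb, a minimal fill-mask sketch is shown below; the prompt is illustrative only, and note that PhoBERT normally expects word-segmented Vietnamese input, so results on raw English text may be rough.

```python
from transformers import pipeline

# Load the fine-tuned masked-language model.
unmasker = pipeline("fill-mask", model="ChauNguyen23/phobert-base-finetuned-imdb")

# Use the tokenizer's own mask token rather than hard-coding it.
prompt = f"This movie was absolutely {unmasker.tokenizer.mask_token} ."
print(unmasker(prompt))
```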
|
jonatasgrosman/exp_w2v2t_en_vp-100k_s364 | jonatasgrosman | 2022-07-08T04:56:51Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T04:56:25Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_vp-100k_s364
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
rserenity/shuukobot | rserenity | 2022-07-08T04:38:26Z | 0 | 0 | null | [
"tensorboard",
"text-generation",
"region:us"
]
| text-generation | 2022-07-08T02:58:22Z | ---
tags:
- text-generation
--- |
jonatasgrosman/exp_w2v2t_en_wav2vec2_s203 | jonatasgrosman | 2022-07-08T04:24:19Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T04:23:34Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_wav2vec2_s203
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_en_wav2vec2_s924 | jonatasgrosman | 2022-07-08T04:12:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-08T03:56:41Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_wav2vec2_s924
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
okho0653/Bio_ClinicalBERT-zero-shot-tokenizer-truncation-sentiment-model | okho0653 | 2022-07-08T03:54:48Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-08T01:09:10Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Bio_ClinicalBERT-zero-shot-tokenizer-truncation-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT-zero-shot-tokenizer-truncation-sentiment-model
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
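A minimal inference sketch is shown below; the input sentence is a made-up example, and the label names returned depend on how the classification head was configured, which is not documented here.

```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier built on Bio_ClinicalBERT.
classifier = pipeline(
    "text-classification",
    model="okho0653/Bio_ClinicalBERT-zero-shot-tokenizer-truncation-sentiment-model",
)

# Illustrative clinical-style sentence; long notes should be truncated to the 512-token limit.
print(classifier("The patient reports feeling much better after the treatment."))
```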
|