repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
kiri1701/bert-base-uncased-issues-128-issues-128 | kiri1701 | bert | 11 | 7 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,929 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
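A minimal sketch of how these settings map onto `transformers` `TrainingArguments` is shown below; the output directory is a placeholder and the per-epoch evaluation strategy is an assumption based on the results table, since the card does not include the training script:
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-issues-128",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=16,
    evaluation_strategy="epoch",  # assumption: validation loss is reported once per epoch below
)
```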
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0986 | 1.0 | 291 | 1.6929 |
| 1.6401 | 2.0 | 582 | 1.4304 |
| 1.4881 | 3.0 | 873 | 1.3916 |
| 1.4 | 4.0 | 1164 | 1.3796 |
| 1.3416 | 5.0 | 1455 | 1.2012 |
| 1.2807 | 6.0 | 1746 | 1.2733 |
| 1.2396 | 7.0 | 2037 | 1.2646 |
| 1.1993 | 8.0 | 2328 | 1.2098 |
| 1.1661 | 9.0 | 2619 | 1.1862 |
| 1.1406 | 10.0 | 2910 | 1.2223 |
| 1.1294 | 11.0 | 3201 | 1.2056 |
| 1.1042 | 12.0 | 3492 | 1.1655 |
| 1.0827 | 13.0 | 3783 | 1.2525 |
| 1.0738 | 14.0 | 4074 | 1.1685 |
| 1.0626 | 15.0 | 4365 | 1.1182 |
| 1.0629 | 16.0 | 4656 | 1.2456 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 42056d8faddc7c3fcaba09cd68cce5fd |
AppInApp/e9d98cfd-ad84-4a2c-b36b-0a35a9200a7e | AppInApp | null | 24 | 12 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 568 | false | ###
Sample pictures of:
sdcid (use that token in your prompt)

| d6a3bd885fe709f79b7937d59215f92e |
BellaAndBria/distilbert-base-uncased-finetuned-emotion | BellaAndBria | distilbert | 14 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,417 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1611
- Accuracy: 0.9425
- F1: 0.9424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1358 | 1.0 | 250 | 0.1765 | 0.9345 | 0.9340 |
| 0.0885 | 2.0 | 500 | 0.1588 | 0.937 | 0.9371 |
| 0.0727 | 3.0 | 750 | 0.1611 | 0.9425 | 0.9424 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 48e1fe637d941fb96a76a905b05fc306 |
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-0_sixties-10_s853 | jonatasgrosman | wav2vec2 | 10 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['en'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'en'] | false | true | true | 498 | false | # exp_w2v2r_en_vp-100k_age_teens-0_sixties-10_s853
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
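Since the model was fine-tuned with HuggingSound, a minimal transcription sketch might look like this; the audio path is a placeholder and must point to a 16 kHz recording as noted above:
```python
from huggingsound import SpeechRecognitionModel

# Sketch only: uses the HuggingSound API referenced above.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-0_sixties-10_s853")
transcriptions = model.transcribe(["sample_16khz.wav"])  # placeholder path
print(transcriptions[0]["transcription"])
```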
| c9405a36166b377c10c83ead7f9d6dd5 |
Jethuestad/dat259-wav2vec2-en | Jethuestad | wav2vec2 | 11 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice_1_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,879 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dat259-wav2vec2-en
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice_1_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5042
- Wer: 0.5793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2531 | 1.82 | 200 | 3.0566 | 1.0 |
| 2.1194 | 3.64 | 400 | 1.6370 | 0.7706 |
| 0.6464 | 5.45 | 600 | 1.3950 | 0.6694 |
| 0.3891 | 7.27 | 800 | 1.4443 | 0.6525 |
| 0.2783 | 9.09 | 1000 | 1.4309 | 0.6152 |
| 0.2088 | 10.91 | 1200 | 1.3592 | 0.5960 |
| 0.1685 | 12.73 | 1400 | 1.4690 | 0.6031 |
| 0.1397 | 14.55 | 1600 | 1.4691 | 0.5819 |
| 0.1209 | 16.36 | 1800 | 1.5004 | 0.5840 |
| 0.1122 | 18.18 | 2000 | 1.5069 | 0.5806 |
| 0.1025 | 20.0 | 2200 | 1.5042 | 0.5793 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
| 0ff88d5a9f34baf4fc51fa9920f34b8e |
muhtasham/small-mlm-glue-sst2-custom-tokenizer | muhtasham | bert | 12 | 0 | transformers | 1 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,615 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-sst2-custom-tokenizer
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.2931 | 0.4 | 500 | 6.8821 |
| 6.7316 | 0.8 | 1000 | 6.5537 |
| 6.3512 | 1.2 | 1500 | 6.5318 |
| 6.3054 | 1.6 | 2000 | 6.6485 |
| 6.2502 | 2.0 | 2500 | 6.5860 |
| 6.0353 | 2.4 | 3000 | 6.4283 |
| 5.9789 | 2.8 | 3500 | 6.5207 |
| 5.9634 | 3.2 | 4000 | 6.4901 |
| 5.9423 | 3.6 | 4500 | 6.5283 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| 638f765bdb1d6bf90175fd06be4db509 |
alanoix/whisper-small-br | alanoix | whisper | 15 | 4 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['br'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | true | true | true | 1,471 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-br
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8542
- Wer: 49.9817
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1415 | 3.36 | 1000 | 0.7406 | 54.0117 |
| 0.0147 | 6.71 | 2000 | 0.7909 | 51.5479 |
| 0.0011 | 10.07 | 3000 | 0.8368 | 49.7710 |
| 0.0007 | 13.42 | 4000 | 0.8542 | 49.9817 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
| d31ff4dd1bab49ec71b0aabeae9e6443 |
jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-10_sixties-0_s261 | jonatasgrosman | wav2vec2 | 10 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['es'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'es'] | false | true | true | 498 | false | # exp_w2v2r_es_vp-100k_age_teens-10_sixties-0_s261
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 684f1e00e1001367bd63bf48eec71002 |
202015004/wav2vec2-base-timit-demo-colab | 202015004 | wav2vec2 | 22 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,821 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6259
- Wer: 0.3544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6744 | 0.5 | 500 | 2.9473 | 1.0 |
| 1.4535 | 1.01 | 1000 | 0.7774 | 0.6254 |
| 0.7376 | 1.51 | 1500 | 0.6923 | 0.5712 |
| 0.5848 | 2.01 | 2000 | 0.5445 | 0.5023 |
| 0.4492 | 2.51 | 2500 | 0.5148 | 0.4958 |
| 0.4006 | 3.02 | 3000 | 0.5283 | 0.4781 |
| 0.3319 | 3.52 | 3500 | 0.5196 | 0.4628 |
| 0.3424 | 4.02 | 4000 | 0.5285 | 0.4551 |
| 0.2772 | 4.52 | 4500 | 0.5060 | 0.4532 |
| 0.2724 | 5.03 | 5000 | 0.5216 | 0.4422 |
| 0.2375 | 5.53 | 5500 | 0.5376 | 0.4443 |
| 0.2279 | 6.03 | 6000 | 0.6051 | 0.4308 |
| 0.2091 | 6.53 | 6500 | 0.5084 | 0.4423 |
| 0.2029 | 7.04 | 7000 | 0.5083 | 0.4242 |
| 0.1784 | 7.54 | 7500 | 0.6123 | 0.4297 |
| 0.1774 | 8.04 | 8000 | 0.5749 | 0.4339 |
| 0.1542 | 8.54 | 8500 | 0.5110 | 0.4033 |
| 0.1638 | 9.05 | 9000 | 0.6324 | 0.4318 |
| 0.1493 | 9.55 | 9500 | 0.6100 | 0.4152 |
| 0.1591 | 10.05 | 10000 | 0.5508 | 0.4022 |
| 0.1304 | 10.55 | 10500 | 0.5090 | 0.4054 |
| 0.1234 | 11.06 | 11000 | 0.6282 | 0.4093 |
| 0.1218 | 11.56 | 11500 | 0.5817 | 0.3941 |
| 0.121 | 12.06 | 12000 | 0.5741 | 0.3999 |
| 0.1073 | 12.56 | 12500 | 0.5818 | 0.4149 |
| 0.104 | 13.07 | 13000 | 0.6492 | 0.3953 |
| 0.0934 | 13.57 | 13500 | 0.5393 | 0.4083 |
| 0.0961 | 14.07 | 14000 | 0.5510 | 0.3919 |
| 0.0965 | 14.57 | 14500 | 0.5896 | 0.3992 |
| 0.0921 | 15.08 | 15000 | 0.5554 | 0.3947 |
| 0.0751 | 15.58 | 15500 | 0.6312 | 0.3934 |
| 0.0805 | 16.08 | 16000 | 0.6732 | 0.3948 |
| 0.0742 | 16.58 | 16500 | 0.5990 | 0.3884 |
| 0.0708 | 17.09 | 17000 | 0.6186 | 0.3869 |
| 0.0679 | 17.59 | 17500 | 0.5837 | 0.3848 |
| 0.072 | 18.09 | 18000 | 0.5831 | 0.3775 |
| 0.0597 | 18.59 | 18500 | 0.6562 | 0.3843 |
| 0.0612 | 19.1 | 19000 | 0.6298 | 0.3756 |
| 0.0514 | 19.6 | 19500 | 0.6746 | 0.3720 |
| 0.061 | 20.1 | 20000 | 0.6236 | 0.3788 |
| 0.054 | 20.6 | 20500 | 0.6012 | 0.3718 |
| 0.0521 | 21.11 | 21000 | 0.6053 | 0.3778 |
| 0.0494 | 21.61 | 21500 | 0.6154 | 0.3772 |
| 0.0468 | 22.11 | 22000 | 0.6052 | 0.3747 |
| 0.0413 | 22.61 | 22500 | 0.5877 | 0.3716 |
| 0.0424 | 23.12 | 23000 | 0.5786 | 0.3658 |
| 0.0403 | 23.62 | 23500 | 0.5828 | 0.3658 |
| 0.0391 | 24.12 | 24000 | 0.5913 | 0.3685 |
| 0.0312 | 24.62 | 24500 | 0.5850 | 0.3625 |
| 0.0316 | 25.13 | 25000 | 0.6029 | 0.3611 |
| 0.0282 | 25.63 | 25500 | 0.6312 | 0.3624 |
| 0.0328 | 26.13 | 26000 | 0.6312 | 0.3621 |
| 0.0258 | 26.63 | 26500 | 0.5891 | 0.3581 |
| 0.0256 | 27.14 | 27000 | 0.6259 | 0.3546 |
| 0.0255 | 27.64 | 27500 | 0.6315 | 0.3587 |
| 0.0249 | 28.14 | 28000 | 0.6547 | 0.3579 |
| 0.025 | 28.64 | 28500 | 0.6237 | 0.3565 |
| 0.0228 | 29.15 | 29000 | 0.6187 | 0.3559 |
| 0.0209 | 29.65 | 29500 | 0.6259 | 0.3544 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
| 29617632bf4345cffdcbacb5ea552f4c |
gazzehamine/wav2vec2-base-timit-demo-google-colab | gazzehamine | wav2vec2 | 14 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,959 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5707
- Wer: 0.3388
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5072 | 1.0 | 500 | 1.8786 | 0.9741 |
| 0.8836 | 2.01 | 1000 | 0.5147 | 0.5317 |
| 0.4576 | 3.01 | 1500 | 0.4774 | 0.4591 |
| 0.3056 | 4.02 | 2000 | 0.4393 | 0.4343 |
| 0.2349 | 5.02 | 2500 | 0.4404 | 0.4022 |
| 0.1946 | 6.02 | 3000 | 0.4564 | 0.3991 |
| 0.1624 | 7.03 | 3500 | 0.4428 | 0.3947 |
| 0.1421 | 8.03 | 4000 | 0.4312 | 0.3878 |
| 0.131 | 9.04 | 4500 | 0.4345 | 0.3853 |
| 0.1115 | 10.04 | 5000 | 0.4318 | 0.3753 |
| 0.1024 | 11.04 | 5500 | 0.5053 | 0.3798 |
| 0.0895 | 12.05 | 6000 | 0.5044 | 0.3782 |
| 0.0856 | 13.05 | 6500 | 0.4893 | 0.3665 |
| 0.0755 | 14.06 | 7000 | 0.4868 | 0.3662 |
| 0.0724 | 15.06 | 7500 | 0.5084 | 0.3681 |
| 0.0635 | 16.06 | 8000 | 0.5367 | 0.3530 |
| 0.0603 | 17.07 | 8500 | 0.5255 | 0.3604 |
| 0.0609 | 18.07 | 9000 | 0.5407 | 0.3678 |
| 0.0486 | 19.08 | 9500 | 0.5312 | 0.3630 |
| 0.047 | 20.08 | 10000 | 0.5498 | 0.3518 |
| 0.0437 | 21.08 | 10500 | 0.5326 | 0.3571 |
| 0.0379 | 22.09 | 11000 | 0.5644 | 0.3608 |
| 0.035 | 23.09 | 11500 | 0.5956 | 0.3539 |
| 0.0333 | 24.1 | 12000 | 0.5967 | 0.3517 |
| 0.0289 | 25.1 | 12500 | 0.5274 | 0.3399 |
| 0.0268 | 26.1 | 13000 | 0.5609 | 0.3406 |
| 0.0256 | 27.11 | 13500 | 0.5451 | 0.3448 |
| 0.0249 | 28.11 | 14000 | 0.5804 | 0.3413 |
| 0.0236 | 29.12 | 14500 | 0.5707 | 0.3388 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
| c7a30772298e3dd12d38270cdb251657 |
cahya/wav2vec2-luganda | cahya | wav2vec2 | 27 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['lg'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'common_voice', 'hf-asr-leaderboard', 'lg', 'robust-speech-event', 'speech'] | true | true | true | 4,067 | false |
# Automatic Speech Recognition for Luganda
This is the model built for the
[Mozilla Luganda Automatic Speech Recognition competition](https://zindi.africa/competitions/mozilla-luganda-automatic-speech-recognition).
It is a fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Luganda Common Voice dataset](https://huggingface.co/datasets/common_voice) version 7.0.
We also provide a [live demo](https://huggingface.co/spaces/indonesian-nlp/luganda-asr) to test the model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lg", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
if "audio" in batch:
speech_array = torch.tensor(batch["audio"]["array"])
else:
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Luganda test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lg", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "‘", "’", "’"]
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
if "audio" in batch:
speech_array = torch.tensor(batch["audio"]["array"])
else:
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
WER without KenLM: 15.38 %
WER With KenLM:
**Test Result**: 7.53 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/indonesian-nlp/luganda-asr)
| e1138a2539c014c7ba783709d1ab461c |
timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k | timm | null | 4 | 31 | timm | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagenet-1k', 'laion-2b'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'timm'] | false | true | true | 24,155 | false | # Model card for convnext_large_mlp.clip_laion2b_augreg_ft_in1k
A ConvNeXt image classification model. CLIP image tower weights pretrained in [OpenCLIP](https://github.com/mlfoundations/open_clip) on LAION and fine-tuned on ImageNet-1k in `timm` by Ross Wightman.
Please see related OpenCLIP model cards for more details on pretraining:
* https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 200.1
- GMACs: 44.9
- Activations (M): 56.3
- Image size: 256 x 256
- **Papers:**
- LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- **Original:** https://github.com/mlfoundations/open_clip
- **Pretrain Dataset:** LAION-2B
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('convnext_large_mlp.clip_laion2b_augreg_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'convnext_large_mlp.clip_laion2b_augreg_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for convnext_base:
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'convnext_large_mlp.clip_laion2b_augreg_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
### By Top-1
All timing numbers from eager-mode PyTorch 1.13 on RTX 3090 w/ AMP.
|model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
|[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
|[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
|[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
|[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
|[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
|[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
|[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
|[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
|[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
|[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
|[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
|[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
|[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
|[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
|[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
|[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
|[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
|[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
|[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
|[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
|[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
|[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
|[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
|[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
|[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
|[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
|[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
|[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
|[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
|[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
|[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
|[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
|[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
|[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
|[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
|[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
|[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
### By Throughput (samples / sec)
All timing numbers from eager-mode PyTorch 1.13 on RTX 3090 w/ AMP.
|model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
|[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
|[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
|[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
|[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
|[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
|[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
|[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
|[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
|[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
|[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
|[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
|[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
|[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
|[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
|[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
|[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
|[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
|[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
|[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
|[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
|[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
|[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
|[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
|[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
|[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
|[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
|[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
|[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
|[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
|[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
|[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
|[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
|[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
|[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
| db810d87bf81e8f098453eaed1b55181 |
Shaier/pubmedqa_roberta_large | Shaier | roberta | 11 | 2 | transformers | 0 | multiple-choice | true | false | false | mit | null | ['pubmed_qa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,160 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmedqa_roberta_large
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the pubmed_qa dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 25
- total_train_batch_size: 50
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 3 | 10 | 0.9957 | 0.552 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.11.0
| 70b6c55fa750b4083c801cdecbb7c362 |
plpkpjph/color_extraction_2023_02_09_v2-finetuned-ner | plpkpjph | distilbert | 10 | 33 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 966 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# color_extraction_2023_02_09_v2-finetuned-ner
This model is a fine-tuned version of [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| 4f13c6f44da454f4ac7b033bbefcfaa7 |
sentence-transformers/average_word_embeddings_levy_dependency | sentence-transformers | null | 8 | 0 | sentence-transformers | 0 | sentence-similarity | false | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity'] | false | true | true | 2,021 | false |
# average_word_embeddings_levy_dependency
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/average_word_embeddings_levy_dependency')
embeddings = model.encode(sentences)
print(embeddings)
```
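Building on the snippet above, the embeddings can be compared directly for the semantic-search use case mentioned earlier; a small sketch with arbitrary example sentences:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/average_word_embeddings_levy_dependency')

query_embedding = model.encode("How do I bake bread?", convert_to_tensor=True)
corpus_embeddings = model.encode(
    ["Recipes for sourdough loaves", "Train timetables for Berlin"],
    convert_to_tensor=True,
)

# Cosine similarity between the query and each corpus sentence
print(util.cos_sim(query_embedding, corpus_embeddings))
```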
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/average_word_embeddings_levy_dependency)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(174016, 300)
)
(1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` | d3490e5296e7119ca7a616899c520d8e |
jonatasgrosman/exp_w2v2t_es_xlsr-53_s103 | jonatasgrosman | wav2vec2 | 10 | 4 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['es'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'es'] | false | true | true | 461 | false | # exp_w2v2t_es_xlsr-53_s103
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 4afb205dca91a37301e971ab71823466 |
AbrahamSanders/opt-2.7b-realtime-chat | AbrahamSanders | opt | 12 | 47 | transformers | 1 | text-generation | true | false | false | cc-by-3.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,225 | false |
Base model [facebook/opt-2.7b](https://huggingface.co/facebook/opt-2.7b)
Fine-tuned for causal language modeling of transcribed spoken dialogue from the [TalkBank CABank collection](https://ca.talkbank.org/access/).
Training corpora include:
- [CABNC](https://ca.talkbank.org/access/CABNC.html) - Spoken language segment of the British National Corpus
- [CallFriend English (N)](https://ca.talkbank.org/access/CallFriend/eng-n.html) - Phone calls
- [CallFriend English (S)](https://ca.talkbank.org/access/CallFriend/eng-s.html) - Phone calls
- [CallHome English](https://ca.talkbank.org/access/CallHome/eng.html) - Phone calls
- [GCSAusE](https://ca.talkbank.org/access/GCSAusE.html) - Australian conversations
- [ISL](https://ca.talkbank.org/access/ISL.html) - Conversations recorded to test ASR methods for meetings
- [MICASE](https://ca.talkbank.org/access/MICASE.html) - Michigan Corpus of Academic Spoken English
- [SCoSE](https://ca.talkbank.org/access/SCoSE.html) - The Saarbrücken Corpus of Spoken (American) English.
(Corpus descriptions are from TalkBank)
**Data input format:**
The data format models a sequence of spoken dialogue between two or more participants:
- The sequence is prefixed with information about the participants including name (can be a proper noun, a title/role, or unknown), age (can be a number or unknown), and sex (can be male, female, other, unknown).
- It then proceeds to sequentially list all utterances in the conversation, each prefixed with their participant code (S1, S2, S3, etc.).
- Utterances support a limited set of transcription notations in the [CHAT & CHAT-CA](https://talkbank.org/manuals/CHAT.pdf) formats:
- Pauses: `(.)` for a generic short pause, or `(N.N)` for a timed pause. For example `(3.4)` is a pause for 3.4 seconds.
- Non-verbal sounds: `&=laughs`, `&=cough`, `&=breathes`, `&=click`, etc. Anything describing a speaker-produced non-verbal sound can come after a prefix of `&=`
- Comments about speaker or setting: `[% baby crying in background]`, `[% smiling]`, `[% phone clicking noise]`, `[% imitating him]`, etc.
Anything describing the state of the speaker or environment can be in this block. Also, a comment block can be used to describe speaker-produced sounds, but it is more common to use the `&=` prefix for that.
- Unknown or unintelligible utterances: `xxx`
- Breathing: `hhh`
**Example:**
<span style="color:red"><participant></span> S1 (name: Dave, age: 33, sex: male) <span style="color:red"><participant></span> S2 (name: unknown, age: unknown, sex: unknown) <span style="color:red"><dialog></span> <span style="color:orange">S1:</span> Hi! (2.3) are you there? <span style="color:orange">S2:</span> hhh hhh [% background noise] uh yeah (0.8) I can hear you. (1.2) &=cough can you hear me? <span style="color:orange">S1:</span> ...
**Usage Info:**
Per the [OPT documentation](https://huggingface.co/docs/transformers/v4.24.0/en/model_doc/opt), the model was trained with tokenizer setting `use_fast=False`.
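For offline experimentation (as opposed to the real-time system linked below), a minimal generation sketch with `transformers` might look like the following; the dialogue prefix simply copies the example above, and the generation settings are arbitrary:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AbrahamSanders/opt-2.7b-realtime-chat", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("AbrahamSanders/opt-2.7b-realtime-chat")

# Illustrative prompt following the data input format described above.
prompt = (
    "<participant> S1 (name: Dave, age: 33, sex: male) "
    "<participant> S2 (name: unknown, age: unknown, sex: unknown) "
    "<dialog> S1: Hi! (2.3) are you there? S2:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0]))
```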
To use this model for real-time inference in a continuous duplex dialogue system, see: [https://github.com/AbrahamSanders/realtime-chatbot](https://github.com/AbrahamSanders/realtime-chatbot). | 349b09690e7af2ef9d83b60cccbe83d0 |
creat89/NER_FEDA_Pl | creat89 | bert | 7 | 1 | transformers | 0 | null | true | false | false | mit | ['pl'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['polish_bert', 'ner'] | false | true | true | 790 | false |
This is a Polish NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on Polish BERT and supports different tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
5. KPWr (EVT, LOC, ORG, PER, PRO)
6. NKJP (DATE, GEOPOLIT, LOC, ORG, PER, TIME)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical
You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA). | a1cc6b195ecf389a44c7b974841c010e |
ku-nlp/deberta-v2-base-japanese | ku-nlp | deberta-v2 | 8 | 2,319 | transformers | 14 | fill-mask | true | false | false | cc-by-sa-4.0 | ['ja'] | ['wikipedia', 'cc100', 'oscar'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deberta', 'deberta-v2', 'fill-mask'] | false | true | true | 3,306 | false |
# Model Card for Japanese DeBERTa V2 base
## Model description
This is a Japanese DeBERTa V2 base model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('ku-nlp/deberta-v2-base-japanese')
model = AutoModelForMaskedLM.from_pretrained('ku-nlp/deberta-v2-base-japanese')
sentence = '京都 大学 で 自然 言語 処理 を [MASK] する 。' # input should be segmented into words by Juman++ in advance
encoding = tokenizer(sentence, return_tensors='pt')
...
```
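The snippet above stops at encoding the input; a self-contained continuation that decodes the top predictions for the `[MASK]` position might look like this (top-5 is an arbitrary choice):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained('ku-nlp/deberta-v2-base-japanese')
model = AutoModelForMaskedLM.from_pretrained('ku-nlp/deberta-v2-base-japanese')

sentence = '京都 大学 で 自然 言語 処理 を [MASK] する 。'  # pre-segmented with Juman++
encoding = tokenizer(sentence, return_tensors='pt')

with torch.no_grad():
    logits = model(**encoding).logits

# Locate the [MASK] position and print the 5 most likely fillers.
mask_pos = (encoding.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top5_ids.tolist()))
```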
You can also fine-tune this model on downstream tasks.
## Tokenization
The input text should be segmented into words by [Juman++](https://github.com/ku-nlp/jumanpp) in advance. [Juman++ 2.0.0-rc3](https://github.com/ku-nlp/jumanpp/releases/tag/v2.0.0-rc3) was used for pre-training. Each word is tokenized into subwords by [sentencepiece](https://github.com/google/sentencepiece).
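If a segmenter is needed, one common way to call Juman++ from Python is through the pyknp bindings; a small sketch, assuming Juman++ is installed locally (pyknp is not mentioned in this card and is only one possible choice):
```python
from pyknp import Juman

jumanpp = Juman()  # uses the locally installed Juman++ binary
result = jumanpp.analysis("京都大学で自然言語処理を研究する。")
segmented = " ".join(m.midasi for m in result.mrph_list())
print(segmented)  # expected to look like "京都 大学 で 自然 言語 処理 を 研究 する 。"
```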
## Training data
We used the following corpora for pre-training:
- Japanese Wikipedia (as of 20221020, 3.2GB, 27M sentences, 1.3M documents)
- Japanese portion of CC-100 (85GB, 619M sentences, 66M documents)
- Japanese portion of OSCAR (54GB, 326M sentences, 25M documents)
Note that we filtered out documents annotated with "header", "footer", or "noisy" tags in OSCAR.
Also note that Japanese Wikipedia was duplicated 10 times to make the total size of the corpus comparable to that of CC-100 and OSCAR. As a result, the total size of the training data is 171GB.
## Training procedure
We first segmented texts in the corpora into words using [Juman++](https://github.com/ku-nlp/jumanpp).
Then, we built a sentencepiece model with 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
We tokenized the segmented corpora into subwords using the sentencepiece model and trained the Japanese DeBERTa model using [transformers](https://github.com/huggingface/transformers) library.
The training took three weeks using 8 NVIDIA A100-SXM4-40GB GPUs.
The following hyperparameters were used during pre-training:
- learning_rate: 2e-4
- per_device_train_batch_size: 44
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 6
- total_train_batch_size: 2,112
- max_seq_length: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear schedule with warmup
- training_steps: 500,000
- warmup_steps: 10,000
The accuracy of the trained model on the masked language modeling task was 0.779.
The evaluation set consists of 5,000 randomly sampled documents from each of the training corpora.
## Fine-tuning on NLU tasks
<!-- https://github.com/yahoojapan/JGLUE -->
Coming soon.
## Acknowledgments
This work was supported by Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) through General Collaboration Project no. jh221004, "Developing a Platform for Constructing and Sharing of Large-Scale Japanese Language Models".
For training models, we used the mdx: a platform for the data-driven future.
| 879d24cf3c10c124f8fdae564f20b948 |
zdreiosis/ff_analysis_5 | zdreiosis | bert | 25 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['gen_ffa', 'generated_from_trainer'] | true | true | true | 2,019 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ff_analysis_5
This model is a fine-tuned version of [zdreiosis/ff_analysis_5](https://huggingface.co/zdreiosis/ff_analysis_5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0824
- F1: 0.9306
- Roc Auc: 0.9483
- Accuracy: 0.8137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
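As a rough illustration, the hyperparameters listed above map onto a `TrainingArguments` configuration along these lines (a sketch, not the exact training script used for this model):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the settings listed above; Adam betas/epsilon are the defaults.
training_args = TrainingArguments(
    output_dir="ff_analysis_5",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```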
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 0.27 | 50 | 0.0846 | 0.9305 | 0.9476 | 0.8075 |
| No log | 0.55 | 100 | 0.1000 | 0.9070 | 0.9320 | 0.7484 |
| No log | 0.82 | 150 | 0.0945 | 0.9126 | 0.9349 | 0.7640 |
| No log | 1.1 | 200 | 0.0973 | 0.9119 | 0.9353 | 0.7764 |
| No log | 1.37 | 250 | 0.0880 | 0.9336 | 0.9504 | 0.8261 |
| No log | 1.65 | 300 | 0.0857 | 0.9246 | 0.9434 | 0.8043 |
| No log | 1.92 | 350 | 0.0844 | 0.9324 | 0.9488 | 0.8199 |
| No log | 2.2 | 400 | 0.0881 | 0.9232 | 0.9450 | 0.7888 |
| No log | 2.47 | 450 | 0.0875 | 0.9277 | 0.9462 | 0.8012 |
| 0.1226 | 2.75 | 500 | 0.0824 | 0.9306 | 0.9483 | 0.8137 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.10.3
| 5c0cb73d485118984e45ddddf0d9b238 |
amanneo/mail-generator-mini | amanneo | gpt2 | 8 | 4 | transformers | 0 | text-generation | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,529 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amanneo/mail-generator-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.4613
- Train Accuracy: 0.1611
- Validation Loss: 5.2617
- Validation Accuracy: 0.1386
- Epoch: 9
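Since this is a TensorFlow/Keras fine-tune of GPT-2, generation can be sketched as follows (assuming the repository ships the GPT-2 tokenizer — otherwise load it from the base `gpt2` checkpoint; the prompt and sampling settings are illustrative):

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("amanneo/mail-generator-mini")
model = TFAutoModelForCausalLM.from_pretrained("amanneo/mail-generator-mini")

prompt = "Subject: Project update\n\nDear team,"
inputs = tokenizer(prompt, return_tensors="tf")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```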
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -925, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 10.0053 | 0.1068 | 8.5247 | 0.1394 | 0 |
| 8.7772 | 0.1505 | 7.9685 | 0.1656 | 1 |
| 8.2057 | 0.1663 | 7.4436 | 0.1655 | 2 |
| 7.5786 | 0.1611 | 6.8572 | 0.1654 | 3 |
| 6.9698 | 0.1679 | 6.3646 | 0.1735 | 4 |
| 6.4911 | 0.1763 | 6.0124 | 0.1787 | 5 |
| 6.1632 | 0.1834 | 5.7751 | 0.1826 | 6 |
| 5.9057 | 0.1840 | 5.5786 | 0.1749 | 7 |
| 5.6874 | 0.1758 | 5.4023 | 0.1616 | 8 |
| 5.4613 | 0.1611 | 5.2617 | 0.1386 | 9 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
| 467fe7ae31dccb1e04178ae6bfaf63fb |
pig4431/TUF_roBERTa_5E | pig4431 | roberta | 11 | 3 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,212 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TUF_roBERTa_5E
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2136
- Accuracy: 0.9667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4665 | 0.1 | 50 | 0.2587 | 0.9333 |
| 0.245 | 0.2 | 100 | 0.1355 | 0.96 |
| 0.2079 | 0.3 | 150 | 0.1454 | 0.9533 |
| 0.2098 | 0.4 | 200 | 0.1809 | 0.9533 |
| 0.1637 | 0.5 | 250 | 0.2299 | 0.94 |
| 0.1869 | 0.59 | 300 | 0.1324 | 0.9667 |
| 0.2202 | 0.69 | 350 | 0.1786 | 0.9467 |
| 0.2084 | 0.79 | 400 | 0.1541 | 0.9533 |
| 0.148 | 0.89 | 450 | 0.1790 | 0.9533 |
| 0.1945 | 0.99 | 500 | 0.1168 | 0.9667 |
| 0.1648 | 1.09 | 550 | 0.1153 | 0.96 |
| 0.1099 | 1.19 | 600 | 0.1239 | 0.96 |
| 0.1238 | 1.29 | 650 | 0.1486 | 0.9533 |
| 0.1067 | 1.39 | 700 | 0.1195 | 0.96 |
| 0.1324 | 1.49 | 750 | 0.1134 | 0.96 |
| 0.1128 | 1.58 | 800 | 0.1180 | 0.9667 |
| 0.1406 | 1.68 | 850 | 0.2081 | 0.9533 |
| 0.1516 | 1.78 | 900 | 0.1987 | 0.9533 |
| 0.1537 | 1.88 | 950 | 0.1644 | 0.96 |
| 0.0957 | 1.98 | 1000 | 0.1660 | 0.96 |
| 0.0699 | 2.08 | 1050 | 0.2057 | 0.9533 |
| 0.1007 | 2.18 | 1100 | 0.2336 | 0.9533 |
| 0.0677 | 2.28 | 1150 | 0.2399 | 0.9467 |
| 0.059 | 2.38 | 1200 | 0.2331 | 0.96 |
| 0.1051 | 2.48 | 1250 | 0.1974 | 0.9533 |
| 0.0778 | 2.57 | 1300 | 0.2857 | 0.9467 |
| 0.1099 | 2.67 | 1350 | 0.2641 | 0.9533 |
| 0.0747 | 2.77 | 1400 | 0.2219 | 0.9533 |
| 0.0874 | 2.87 | 1450 | 0.2780 | 0.9533 |
| 0.0675 | 2.97 | 1500 | 0.1993 | 0.96 |
| 0.052 | 3.07 | 1550 | 0.1918 | 0.96 |
| 0.0214 | 3.17 | 1600 | 0.2410 | 0.96 |
| 0.0512 | 3.27 | 1650 | 0.2353 | 0.96 |
| 0.0548 | 3.37 | 1700 | 0.2722 | 0.9533 |
| 0.0554 | 3.47 | 1750 | 0.1593 | 0.9733 |
| 0.0742 | 3.56 | 1800 | 0.2568 | 0.96 |
| 0.064 | 3.66 | 1850 | 0.2358 | 0.96 |
| 0.052 | 3.76 | 1900 | 0.2161 | 0.9667 |
| 0.0349 | 3.86 | 1950 | 0.2497 | 0.96 |
| 0.0868 | 3.96 | 2000 | 0.1834 | 0.9667 |
| 0.0445 | 4.06 | 2050 | 0.2441 | 0.9533 |
| 0.0388 | 4.16 | 2100 | 0.2136 | 0.9667 |
| 0.0484 | 4.26 | 2150 | 0.2114 | 0.9667 |
| 0.0263 | 4.36 | 2200 | 0.2325 | 0.96 |
| 0.0409 | 4.46 | 2250 | 0.2454 | 0.9533 |
| 0.0324 | 4.55 | 2300 | 0.2105 | 0.9667 |
| 0.0295 | 4.65 | 2350 | 0.2118 | 0.9667 |
| 0.0372 | 4.75 | 2400 | 0.2005 | 0.9667 |
| 0.0294 | 4.85 | 2450 | 0.2057 | 0.9667 |
| 0.0354 | 4.95 | 2500 | 0.2136 | 0.9667 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
| 16ce1b84537ca81f89bc48297105af7a |
dminiotas05/distilbert-base-uncased-finetuned-ft1500_norm300 | dminiotas05 | distilbert | 14 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,632 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft1500_norm300
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0940
- Mse: 4.3760
- Mae: 1.4084
- R2: 0.4625
- Accuracy: 0.3517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:--------:|
| 0.7424 | 1.0 | 3122 | 1.1071 | 4.4286 | 1.4098 | 0.4561 | 0.3338 |
| 0.5038 | 2.0 | 6244 | 1.1794 | 4.7177 | 1.4140 | 0.4205 | 0.3677 |
| 0.356 | 3.0 | 9366 | 1.0717 | 4.2866 | 1.3852 | 0.4735 | 0.3581 |
| 0.2293 | 4.0 | 12488 | 1.0940 | 4.3760 | 1.4084 | 0.4625 | 0.3517 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 05169c9c8dde394ed62fd892284c4ed0 |
muhtasham/small-vanilla-target-glue-stsb | muhtasham | bert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,602 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-vanilla-target-glue-stsb
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5625
- Pearson: 0.8713
- Spearmanr: 0.8677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 0.823 | 2.78 | 500 | 0.5972 | 0.8689 | 0.8689 |
| 0.2951 | 5.56 | 1000 | 0.5683 | 0.8725 | 0.8710 |
| 0.181 | 8.33 | 1500 | 0.5985 | 0.8695 | 0.8657 |
| 0.1349 | 11.11 | 2000 | 0.5915 | 0.8708 | 0.8679 |
| 0.1067 | 13.89 | 2500 | 0.5625 | 0.8713 | 0.8677 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| 30ce1f40384bb4e6c9d3d26efc438fdf |
Shenghao1993/distilbert-base-uncased-distilled-clinc | Shenghao1993 | distilbert | 10 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['clinc_oos'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,729 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3120
- Accuracy: 0.9455
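As the name suggests, this checkpoint was produced by distilling a teacher model fine-tuned on CLINC into DistilBERT. The card does not document the exact recipe; the sketch below shows the generic knowledge-distillation loss (temperature-scaled soft targets plus hard labels), which is an assumption rather than the confirmed training objective:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Generic knowledge-distillation loss; temperature and alpha here are illustrative."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```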
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 1.8803 | 0.7426 |
| 2.2488 | 2.0 | 636 | 0.9662 | 0.8626 |
| 2.2488 | 3.0 | 954 | 0.5640 | 0.9103 |
| 0.8679 | 4.0 | 1272 | 0.4093 | 0.9332 |
| 0.4101 | 5.0 | 1590 | 0.3554 | 0.9435 |
| 0.4101 | 6.0 | 1908 | 0.3312 | 0.9445 |
| 0.2894 | 7.0 | 2226 | 0.3179 | 0.9452 |
| 0.2496 | 8.0 | 2544 | 0.3137 | 0.9448 |
| 0.2496 | 9.0 | 2862 | 0.3120 | 0.9455 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 3a398ecdb64a9d0cda7e459ee5e47eb8 |
Roy029/distilroberta-base-finetuned-wikitext2 | Roy029 | roberta | 11 | 4 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,267 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 58 | 2.2650 |
| No log | 2.0 | 116 | 2.2408 |
| No log | 3.0 | 174 | 2.1696 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| b5c491b3a8fb5d49aeacdba1d37bf999 |
youngjae/bert-finetuned-squad | youngjae | bert | 42 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 968 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
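A minimal inference sketch (the question/context pair is illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="youngjae/bert-finetuned-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```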
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0.dev20210415+cu101
- Datasets 1.16.1
- Tokenizers 0.10.3
| 9a0b6385347f5f1f80843c56967e7d6c |
piEsposito/braquad-bert-qna | piEsposito | bert | 9 | 10 | transformers | 1 | question-answering | true | false | true | apache-2.0 | ['pt-br'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering'] | false | true | true | 5,067 | false |
# BraQuAD BERT
## Model description
This is a question-answering model trained on BraQuAD 2.0, a version of SQuAD 2.0 translated to PT-BR using the Google Cloud Translation API.
### Context
Edith Ranzini (São Paulo,[1] 1946) é uma engenheira brasileira formada pela USP, professora doutora da Pontifícia Universidade Católica de São Paulo[2] e professora sênior da Escola Politécnica da Universidade de São Paulo (Poli).[3] Ela compôs a equipe responsável pela criação do primeiro computador brasileiro, o Patinho Feio,[1] em 1972, e participou do grupo de instituidores da Fundação para o Desenvolvimento Tecnológico da Engenharia, sendo a única mulher do mesmo.[4][2] Atua nas áreas de inteligência artificial, engenharia de computação, redes neurais e sistemas gráficos.
Na sua época de prestar o vestibular, inscreveu-se para física na USP e para engenharia na Poli-USP,[3] sendo aprovada nesta última em 1965, ingressando como uma das 12 mulheres do total de 360 calouros.
Em 1969, formou-se como engenheira de eletricidade, permanecendo na universidade para fazer sua pós-graduação. Nessa época entrou para o Laboratório de Sistemas Digitais (LSD),atual Departamento de Engenharia de Computação e Sistemas Digitais, criado pelo professor Antônio Hélio Guerra Vieira.[3] Em 1970, deu início ao seu mestrado em Engenharia de Sistemas pela USP, concluindo o mesmo em 1975.[2] Nesse período, permaneceu no LSD e fez parte do grupo responsável pelo desenvolvimento do primeiro computador brasileiro, o Patinho Feio (1971-1972) e do G10 (1973-1975), primeiro computador brasileiro de médio porte, feito para o Grupo de trabalho Especial (GTE), posteriormente Digibras.
### Examples:
1-Alem do Patinho feio qual outro projeto edith trabalhou? Answer: G10
2-Quantas mulheres entraram na Poli em 1965? Answer: 12
3-Qual grande projeto edith trabalhou? Answer: do primeiro computador brasileiro
4-Qual o primeiro computador brasileiro? Answer: Patinho Feio
## Expected results
As an example, let's show a context and some questions you can ask, as well as the expected responses. These QnA pairs were not part of the training dataset.
#### How to use
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
import torch
mname = "piEsposito/braquad-bert-qna"
model = AutoModelForQuestionAnswering.from_pretrained(mname)
tokenizer = AutoTokenizer.from_pretrained(mname)
context = """Edith Ranzini (São Paulo,[1] 1946) é uma engenheira brasileira formada pela USP, professora doutora da Pontifícia Universidade Católica de São Paulo[2] e professora sênior da Escola Politécnica da Universidade de São Paulo (Poli).[3] Ela compôs a equipe responsável pela criação do primeiro computador brasileiro, o Patinho Feio,[1] em 1972, e participou do grupo de instituidores da Fundação para o Desenvolvimento Tecnológico da Engenharia, sendo a única mulher do mesmo.[4][2] Atua nas áreas de inteligência artificial, engenharia de computação, redes neurais e sistemas gráficos.
Na sua época de prestar o vestibular, inscreveu-se para física na USP e para engenharia na Poli-USP,[3] sendo aprovada nesta última em 1965, ingressando como uma das 12 mulheres do total de 360 calouros.[5]
Em 1969, formou-se como engenheira de eletricidade,[2][3] permanecendo na universidade para fazer sua pós-graduação. Nessa época entrou para o Laboratório de Sistemas Digitais (LSD),atual Departamento de Engenharia de Computação e Sistemas Digitais, criado pelo professor Antônio Hélio Guerra Vieira.[3] Em 1970, deu início ao seu mestrado em Engenharia de Sistemas pela USP, concluindo o mesmo em 1975.[2] Nesse período, permaneceu no LSD e fez parte do grupo responsável pelo desenvolvimento do primeiro computador brasileiro, o Patinho Feio (1971-1972) e do G10 (1973-1975), primeiro computador brasileiro de médio porte, feito para o Grupo de trabalho Especial (GTE), posteriormente Digibras."""
# you can try this for all the examples above.
question = 'Qual grande projeto edith trabalhou?'
string = f"[CLS] {question} [SEP] {context} [SEP]"
as_tensor = torch.Tensor(tokenizer.encode(string)).unsqueeze(0)
starts, ends = model(as_tensor.long())
s, e = torch.argmax(starts[0]), torch.argmax(ends[0])
print(tokenizer.decode(tokenizer.encode(string)[s:e+1])) # 'do primeiro computador brasileiro'
```
#### Limitations and bias
- The model is trained on a dataset translated using the Google Cloud Translation API. Because of that, some labels are not identical to the answers, and the performance cannot reach the level of hand-curated English models. Even so, it is good progress towards QnA in PT-BR.
## Training data
[BraQuAD dataset](https://github.com/piEsposito/br-quad-2.0).
## Training procedure
## Eval results
EM | F1
-------|---------
0.62 | 0.69
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020},
title={BraQuAD - Dataset para Question Answering em PT-BR},
author={Esposito, Wladimir and Esposito, Piero and Tamais, Ana},
}
```
| 4da550ec2c6cee8320f58dbe962ab5fa |
Shahm/bart-german | Shahm | bart | 18 | 1,309 | transformers | 1 | summarization | true | false | false | apache-2.0 | ['de'] | ['mlsum'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['generated_from_trainer', 'summarization'] | true | true | true | 1,094 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mode-bart-deutsch
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the mlsum de dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2152
- Rouge1: 41.698
- Rouge2: 31.3548
- Rougel: 38.2817
- Rougelsum: 39.6349
- Gen Len: 63.1723
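A minimal summarization sketch for German text (the input and generation settings are illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Shahm/bart-german")
text = (
    "Die Stadtverwaltung kündigte an, dass die Bibliothek wegen Renovierungsarbeiten "
    "drei Monate lang geschlossen bleibt und die Bestände in dieser Zeit digital verfügbar gemacht werden."
)
print(summarizer(text, max_length=64, min_length=10, do_sample=False)[0]["summary_text"])
```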
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 0b38f66dcbf5613b074079a9e0b20ffa |
MBMMurad/wav2vec2-base-cvbn-voted_30pochs | MBMMurad | wav2vec2 | 13 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['cvbn'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,219 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cvbn-voted_30pochs
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the cvbn dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2136
- eval_wer: 0.3208
- eval_runtime: 335.1421
- eval_samples_per_second: 8.951
- eval_steps_per_second: 0.561
- epoch: 5.82
- step: 13600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
| 86f5caa94e10fe436506686f9fc8dc9b |
Helsinki-NLP/opus-mt-fr-efi | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-fr-efi
* source languages: fr
* target languages: efi
* OPUS readme: [fr-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-efi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-efi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-efi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-efi/opus-2020-01-20.eval.txt)
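A short usage sketch with the standard Marian API (the example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-efi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```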
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.efi | 26.9 | 0.462 |
| c022fff24f9afdd1f1097d11147778e1 |
WillHeld/t5-small-vanilla-mtop | WillHeld | mt5 | 11 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,189 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-vanilla-mtop
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1581
- Exact Match: 0.6331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|
| 1.5981 | 6.65 | 200 | 0.1598 | 0.4940 |
| 0.1335 | 13.33 | 400 | 0.1155 | 0.5884 |
| 0.074 | 19.98 | 600 | 0.1046 | 0.6094 |
| 0.0497 | 26.65 | 800 | 0.1065 | 0.6139 |
| 0.0363 | 33.33 | 1000 | 0.1134 | 0.6255 |
| 0.0278 | 39.98 | 1200 | 0.1177 | 0.6313 |
| 0.022 | 46.65 | 1400 | 0.1264 | 0.6255 |
| 0.0183 | 53.33 | 1600 | 0.1260 | 0.6304 |
| 0.0151 | 59.98 | 1800 | 0.1312 | 0.6300 |
| 0.0124 | 66.65 | 2000 | 0.1421 | 0.6277 |
| 0.0111 | 73.33 | 2200 | 0.1405 | 0.6277 |
| 0.0092 | 79.98 | 2400 | 0.1466 | 0.6331 |
| 0.008 | 86.65 | 2600 | 0.1522 | 0.6340 |
| 0.007 | 93.33 | 2800 | 0.1590 | 0.6295 |
| 0.0064 | 99.98 | 3000 | 0.1581 | 0.6331 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
| 04dd71afa7f04eed598fcb44e8b322de |
MultiBertGunjanPatrick/multiberts-seed-2-600k | MultiBertGunjanPatrick | bert | 7 | 4 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert', 'multiberts', 'multiberts-seed-2'] | false | true | true | 6,483 | false | # MultiBERTs Seed 2 Checkpoint 600k (uncased)
Seed 2 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-600k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| cb9236a62ab10ab20c36e2664ed502e0 |
slplab/wav2vec2-xls-r-300m_phoneme-mfa_korean_nia13-asia-9634_001 | slplab | wav2vec2 | 11 | 2 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,063 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m_phoneme-mfa_korean_nia13-asia-9634_001
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NIA13 ASIA dataset.
Creator & Uploader: Jooyoung Lee ([email protected])
## Training and evaluation data
Training Data
- Data Name: NIA13 ASIA
- Num. of Samples: 9,634
- Audio Length: 9H 42M
Test Data 1 (In-domain)
- Data Name: NIA13 ASIA
- Num. of Samples: 3,707
- Audio Length: 3H 37M
Test Data 2 (Out-of-domain)
- Data Name: SAMSUNG_60K
- Num. of Samples: 6,000
- Audio Length: 12 Hrs
### Training hyperparameters

### Training results
- Phone Error Rate on Test Data 1: 00.00%
- Phone Error Rate on Test Data 2: 00.00%
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1 | 8172d98676d438253af0d40590f20a7c |
google/multiberts-seed_2-step_800k | google | bert | 8 | 12 | transformers | 0 | null | true | true | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_800k'] | false | true | true | 3,521 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 2, Step 800k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #2, captured at step 800k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_800k')
model = TFBertModel.from_pretrained("google/multiberts-seed_2-step_800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_800k')
model = BertModel.from_pretrained("google/multiberts-seed_2-step_800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
| 606c0703b78e26d5a7521c3650b89a45 |
infinitejoy/wav2vec2-large-xls-r-300m-odia | infinitejoy | wav2vec2 | 20 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['or'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'or', 'robust-speech-event'] | true | true | true | 3,523 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-odia
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - OR dataset.
It achieves the following results on the evaluation set:
```
python eval.py --model_id ./ --dataset mozilla-foundation/common_voice_7_0 --config as --split test --log_outputs
```
- WER: 1.0921052631578947
- CER: 2.5547945205479454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Training machine details
- Platform: Linux-5.11.0-37-generic-x86_64-with-glibc2.10
- CPU cores: 60
- Python version: 3.8.8
- PyTorch version: 1.10.1+cu102
- GPU is visible: True
- Transformers version: 4.16.0.dev0
- Datasets version: 1.17.1.dev0
- soundfile version: 0.10.3
Training script
```bash
python run_speech_recognition_ctc.py \
--dataset_name="mozilla-foundation/common_voice_7_0" \
--model_name_or_path="facebook/wav2vec2-xls-r-300m" \
--dataset_config_name="or" \
--output_dir="./wav2vec2-large-xls-r-300m-odia" \
--overwrite_output_dir \
--num_train_epochs="120" \
--per_device_train_batch_size="16" \
--per_device_eval_batch_size="16" \
--gradient_accumulation_steps="2" \
--learning_rate="7.5e-5" \
--warmup_steps="500" \
--length_column_name="input_length" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--chars_to_ignore , ? . ! \- \; \: \" “ % ‘ ” � — \’ … \– \' \’ \– \
--save_steps="500" \
--eval_steps="500" \
--logging_steps="100" \
--layerdrop="0.0" \
--activation_dropout="0.1" \
--save_total_limit="3" \
--freeze_feature_encoder \
--feat_proj_dropout="0.0" \
--mask_time_prob="0.75" \
--mask_time_length="10" \
--mask_feature_prob="0.25" \
--mask_feature_length="64" \
--gradient_checkpointing \
--use_auth_token \
--fp16 \
--group_by_length \
--do_train --do_eval \
--push_to_hub
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 120.0
- mixed_precision_training: Native AMP
### Training results
| | eval_loss | eval_wer | eval_runtime | eval_samples_per_second | eval_steps_per_second | epoch |
|---:|------------:|-----------:|---------------:|--------------------------:|------------------------:|--------:|
| 0 | 3.35224 | 0.998972 | 5.0475 | 22.189 | 1.387 | 29.41 |
| 1 | 1.33679 | 0.938335 | 5.0633 | 22.12 | 1.382 | 58.82 |
| 2 | 0.737202 | 0.957862 | 5.0913 | 21.998 | 1.375 | 88.24 |
| 3 | 0.658212 | 0.96814 | 5.0953 | 21.981 | 1.374 | 117.65 |
| 4 | 0.658 | 0.9712 | 5.0953 | 22.115 | 1.382 | 120 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| 555df3a24529c9ecd9faf40ad4dc7bde |
w11wo/wav2vec2-xls-r-300m-korean | w11wo | wav2vec2 | 26 | 223 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ko'] | ['kresnik/zeroth_korean'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event'] | true | true | true | 7,418 | false |
# Wav2Vec2 XLS-R 300M Korean
Wav2Vec2 XLS-R 300M Korean is an automatic speech recognition model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a fine-tuned version of [Wav2Vec2-XLS-R-300M](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [Zeroth Korean](https://huggingface.co/datasets/kresnik/zeroth_korean) dataset.
This model was trained using HuggingFace's PyTorch framework and is part of the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by HuggingFace. All training was done on a Tesla V100, sponsored by OVH.
All necessary scripts used for training can be found in the [Files and versions](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-korean/tree/main) tab, as well as the [Training metrics](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-korean/tensorboard) logged via Tensorboard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ---------------------------- | ------- | ----- | ------------------------------- |
| `wav2vec2-xls-r-300m-korean` | 300M | XLS-R | `Zeroth Korean` Dataset |
## Evaluation Results
The model achieves the following results on evaluation:
| Dataset | Loss | WER | CER |
| -------------------------------- | ------ | ------ | ------ |
| `Zeroth Korean` | 0.2089 | 29.54% | 9.53% |
| `Robust Speech Event - Dev Data` | N/A | 76.26% | 38.67% |
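For inference, a minimal CTC decoding sketch (assumes 16 kHz mono audio; the file path and resampling step are illustrative):

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("w11wo/wav2vec2-xls-r-300m-korean")
model = Wav2Vec2ForCTC.from_pretrained("w11wo/wav2vec2-xls-r-300m-korean")

speech, sr = torchaudio.load("sample.wav")  # hypothetical audio file
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze(0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```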
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 7.5e-05
- `train_batch_size`: 8
- `eval_batch_size`: 8
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 32
- `optimizer`: Adam with `betas=(0.9, 0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_steps`: 2000
- `num_epochs`: 50.0
- `mixed_precision_training`: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
| :-----------: | :---: | :---: | :-------------: | :----: | :----: |
| 19.7138 | 0.72 | 500 | 19.6427 | 1.0 | 1.0 |
| 4.8039 | 1.44 | 1000 | 4.7842 | 1.0 | 1.0 |
| 4.5619 | 2.16 | 1500 | 4.5608 | 0.9992 | 0.9598 |
| 4.254 | 2.88 | 2000 | 4.2729 | 0.9955 | 0.9063 |
| 4.1905 | 3.6 | 2500 | 4.2257 | 0.9903 | 0.8758 |
| 4.0683 | 4.32 | 3000 | 3.9294 | 0.9937 | 0.7911 |
| 3.486 | 5.04 | 3500 | 2.7045 | 1.0012 | 0.5934 |
| 2.946 | 5.75 | 4000 | 1.9691 | 0.9425 | 0.4634 |
| 2.634 | 6.47 | 4500 | 1.5212 | 0.8807 | 0.3850 |
| 2.4066 | 7.19 | 5000 | 1.2551 | 0.8177 | 0.3601 |
| 2.2651 | 7.91 | 5500 | 1.0423 | 0.7650 | 0.3039 |
| 2.1828 | 8.63 | 6000 | 0.9599 | 0.7273 | 0.3106 |
| 2.1023 | 9.35 | 6500 | 0.9482 | 0.7161 | 0.3063 |
| 2.0536 | 10.07 | 7000 | 0.8242 | 0.6767 | 0.2860 |
| 1.9803 | 10.79 | 7500 | 0.7643 | 0.6563 | 0.2637 |
| 1.9468 | 11.51 | 8000 | 0.7319 | 0.6441 | 0.2505 |
| 1.9178 | 12.23 | 8500 | 0.6937 | 0.6320 | 0.2489 |
| 1.8515 | 12.95 | 9000 | 0.6443 | 0.6053 | 0.2196 |
| 1.8083 | 13.67 | 9500 | 0.6286 | 0.6122 | 0.2148 |
| 1.819 | 14.39 | 10000 | 0.6015 | 0.5986 | 0.2074 |
| 1.7684 | 15.11 | 10500 | 0.5682 | 0.5741 | 0.1982 |
| 1.7195 | 15.83 | 11000 | 0.5385 | 0.5592 | 0.2007 |
| 1.7044 | 16.55 | 11500 | 0.5362 | 0.5524 | 0.2097 |
| 1.6879 | 17.27 | 12000 | 0.5119 | 0.5489 | 0.2083 |
| 1.656 | 17.98 | 12500 | 0.4990 | 0.5362 | 0.1968 |
| 1.6122 | 18.7 | 13000 | 0.4561 | 0.5092 | 0.1900 |
| 1.5919 | 19.42 | 13500 | 0.4778 | 0.5225 | 0.1975 |
| 1.5896 | 20.14 | 14000 | 0.4563 | 0.5098 | 0.1859 |
| 1.5589 | 20.86 | 14500 | 0.4362 | 0.4940 | 0.1725 |
| 1.5353 | 21.58 | 15000 | 0.4140 | 0.4826 | 0.1580 |
| 1.5441 | 22.3 | 15500 | 0.4031 | 0.4742 | 0.1550 |
| 1.5116 | 23.02 | 16000 | 0.3916 | 0.4748 | 0.1545 |
| 1.4731 | 23.74 | 16500 | 0.3841 | 0.4810 | 0.1542 |
| 1.4647 | 24.46 | 17000 | 0.3752 | 0.4524 | 0.1475 |
| 1.4328 | 25.18 | 17500 | 0.3587 | 0.4476 | 0.1461 |
| 1.4129 | 25.9 | 18000 | 0.3429 | 0.4242 | 0.1366 |
| 1.4062 | 26.62 | 18500 | 0.3450 | 0.4251 | 0.1355 |
| 1.3928 | 27.34 | 19000 | 0.3297 | 0.4145 | 0.1322 |
| 1.3906 | 28.06 | 19500 | 0.3210 | 0.4185 | 0.1336 |
| 1.358 | 28.78 | 20000 | 0.3131 | 0.3970 | 0.1275 |
| 1.3445 | 29.5 | 20500 | 0.3069 | 0.3920 | 0.1276 |
| 1.3159 | 30.22 | 21000 | 0.3035 | 0.3961 | 0.1255 |
| 1.3044 | 30.93 | 21500 | 0.2952 | 0.3854 | 0.1242 |
| 1.3034 | 31.65 | 22000 | 0.2966 | 0.3772 | 0.1227 |
| 1.2963 | 32.37 | 22500 | 0.2844 | 0.3706 | 0.1208 |
| 1.2765 | 33.09 | 23000 | 0.2841 | 0.3567 | 0.1173 |
| 1.2438 | 33.81 | 23500 | 0.2734 | 0.3552 | 0.1137 |
| 1.2487 | 34.53 | 24000 | 0.2703 | 0.3502 | 0.1118 |
| 1.2249 | 35.25 | 24500 | 0.2650 | 0.3484 | 0.1142 |
| 1.2229 | 35.97 | 25000 | 0.2584 | 0.3374 | 0.1097 |
| 1.2374 | 36.69 | 25500 | 0.2568 | 0.3337 | 0.1095 |
| 1.2153 | 37.41 | 26000 | 0.2494 | 0.3327 | 0.1071 |
| 1.1925 | 38.13 | 26500 | 0.2518 | 0.3366 | 0.1077 |
| 1.1908 | 38.85 | 27000 | 0.2437 | 0.3272 | 0.1057 |
| 1.1858 | 39.57 | 27500 | 0.2396 | 0.3265 | 0.1044 |
| 1.1808 | 40.29 | 28000 | 0.2373 | 0.3156 | 0.1028 |
| 1.1842 | 41.01 | 28500 | 0.2356 | 0.3152 | 0.1026 |
| 1.1668 | 41.73 | 29000 | 0.2319 | 0.3188 | 0.1025 |
| 1.1448 | 42.45 | 29500 | 0.2293 | 0.3099 | 0.0995 |
| 1.1327 | 43.17 | 30000 | 0.2265 | 0.3047 | 0.0979 |
| 1.1307 | 43.88 | 30500 | 0.2222 | 0.3078 | 0.0989 |
| 1.1419 | 44.6 | 31000 | 0.2215 | 0.3038 | 0.0981 |
| 1.1231 | 45.32 | 31500 | 0.2193 | 0.3013 | 0.0972 |
| 1.139 | 46.04 | 32000 | 0.2162 | 0.3007 | 0.0968 |
| 1.1114 | 46.76 | 32500 | 0.2122 | 0.2982 | 0.0960 |
| 1.111 | 47.48 | 33000 | 0.2125 | 0.2946 | 0.0948 |
| 1.0982 | 48.2 | 33500 | 0.2099 | 0.2957 | 0.0953 |
| 1.109 | 48.92 | 34000 | 0.2092 | 0.2955 | 0.0955 |
| 1.0905 | 49.64 | 34500 | 0.2088 | 0.2954 | 0.0953 |
## Disclaimer
Do consider the biases which came from pre-training datasets that may be carried over into the results of this model.
## Authors
Wav2Vec2 XLS-R 300M Korean was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on OVH Cloud.
## Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.10.3
| 7a16d458b988cdd877aa5b7fe439a139 |
Parvinder/my_awesome_qa_model | Parvinder | distilbert | 16 | 1 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 908 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| c9ac8d6934e415ea1cad8715667533ba |
Callidior/bert2bert-base-arxiv-titlegen | Callidior | encoder-decoder | 7 | 133 | transformers | 4 | summarization | true | false | false | apache-2.0 | ['en'] | ['arxiv_dataset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarization'] | false | true | true | 495 | false |
# Paper Title Generator
Generates titles for computer science papers given an abstract.
The model is a BERT2BERT Encoder-Decoder using the official `bert-base-uncased` checkpoint as initialization for the encoder and decoder.
It was fine-tuned on 318,500 computer science papers posted on arXiv.org between 2007 and 2022 and achieved a 26.3% Rouge2 F1-Score on held-out validation data.
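A usage sketch (the abstract and generation settings are illustrative; the checkpoint is assumed to ship the `bert-base-uncased` tokenizer and a decoder start token in its config):

```python
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("Callidior/bert2bert-base-arxiv-titlegen")
model = EncoderDecoderModel.from_pretrained("Callidior/bert2bert-base-arxiv-titlegen")

abstract = (
    "We present a simple encoder-decoder approach to generating paper titles "
    "from abstracts, fine-tuned on arXiv metadata."
)
inputs = tokenizer(abstract, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=32, num_beams=5, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```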
**Live Demo:** [https://paper-titles.ey.r.appspot.com/](https://paper-titles.ey.r.appspot.com/) | f6b28b98a16aa1e83e31b313f84b4a82 |
itchy/donut-base-sroie | itchy | vision-encoder-decoder | 16 | 0 | transformers | 0 | null | true | false | false | mit | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 981 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.13.0
| 4ae8e7eec904b13fec34303b84b971ca |
Salesforce/blip-itm-large-flickr | Salesforce | blip | 9 | 36 | transformers | 1 | null | true | false | false | bsd-3-clause | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['image-text-matching'] | false | true | true | 4,736 | false |
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Model card for BLIP trained on image-text matching - large architecture (with ViT large backbone) trained on Flickr30k dataset.
|  |
|:--:|
| <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>|
## TL;DR
Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract:
*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.*
## Usage
You can use this model for conditional and un-conditional image captioning
### Using the Pytorch model
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval
processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-large-flickr")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-large-flickr")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "A woman and a dog sitting together in a beach."
inputs = processor(raw_image, question, return_tensors="pt")
itm_scores = model(**inputs)[0]
cosine_score = model(**inputs, use_itm_head=False)[0]
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval
processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-large-flickr")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-large-flickr").to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "A woman and a dog sitting together in a beach."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
itm_scores = model(**inputs)[0]
cosine_score = model(**inputs, use_itm_head=False)[0]
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval
processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-large-flickr")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-large-flickr", torch_dtype=torch.float16).to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "A woman and a dog sitting together in a beach."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
itm_scores = model(**inputs)[0]
cosine_score = model(**inputs, use_itm_head=False)[0]
```
</details>
## BibTex and citation info
```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
doi = {10.48550/ARXIV.2201.12086},
url = {https://arxiv.org/abs/2201.12086},
author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` | a086a4fabe97acc2b20bb07190131897 |
cdefghijkl/luber | cdefghijkl | null | 21 | 44 | diffusers | 1 | text-to-image | false | false | false | creativeml-openrail-m | ['en'] | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 2,173 | false | Elldreth's Lucid Mix and URPM merged add difference. These are two of my all time favorites. Credit to them.
As you can see from the example images below, simple prompts can generate good results. I have not tried this model much yet, so I hope you can comment and upload your results with it here.
I hope you all like this model. Use the fp16 version for generation; the full version is intended only as a base model for training.
A few generated results from this model:

Prompts: painting of bald man crying in the rain
Negative: -

Prompts: rendering of ginger cat swimming on sea
Negative: -

Prompts: painting of silver haired man brown jacket sitting crying on a rocking chair inside a cabin with his big white fur dog beside him, winter, night
Negative: -
Inspired by Fumetsu no Anata e

Prompts: rendering of apocalypse
Negative: -

Prompts: painting of a demon king crying in the rain because his wife is asking him to buy her a Chanel bag
Negative: -

Prompts: painting of cyberpunk ant
Negative: -
For more results, you can see it on my civitai page: https://civitai.com/models/4204/luber | 824bb138d6c6b6785da7455595803661 |
ThomasSimonini/ML-Agents-SnowballFight-1vs1 | ThomasSimonini | null | 8 | 3 | ml-agents | 3 | reinforcement-learning | false | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deep-reinforcement-learning', 'reinforcement-learning', 'ml-agents'] | false | true | true | 3,741 | false |
# Snowball Fight ☃️, a multi-agent environment for ML-Agents made by Hugging Face

A multi-agent environment using Unity ML-Agents Toolkit where two agents compete in a 1vs1 snowball fight game.
👉 You can [play it online at this link](https://huggingface.co/spaces/ThomasSimonini/SnowballFight).
⚠️ You need some experience with ML-Agents to use it; if that's not the case, [check the documentation](https://github.com/Unity-Technologies/ml-agents/tree/main/docs)
## The Environment
- Two agents compete **in a 1 vs 1 snowball fight game**.
- The goal is to **hit the opponent team while avoiding the opponent's snowballs ❄️**.
### Observation Space
- Ray-casts:
- **10 ray-casts forward** distributed over 100 degrees: detecting opponent.
- **10 ray-casts forward** distributed over 100 degrees: detecting walls, shelter and frontier.
- **10 ray-casts forward** distributed over 100 degrees: detecting snowballs.
- **3 ray-casts backward** distributed over 45 degrees: detecting wall and shelter.
- Vector Observations:
- **Bool canShoot** (you can only shoot a snowball every 2 seconds).
- **Float currentHealth**: normalized [0, 1]
- **Vector3 vertical speed**
- **Vector3 horizontal speed**
- **Vector3 "home" position**
### Action Space (Discrete)
- Vector Action space:
- **Four branched actions** corresponding to forward/backward movement, sideways movement, rotation, and snowball shooting.
### Agent Reward Function (dependent):
- If the team is **injured**:
- 0.1 to the shooter.
- If the team is **dead**:
- (1 - accumulated time penalty): when a snowball hits the
opponent, the accumulated time penalty decreases by (1 / MaxStep) every fixed update and is reset to 0 at the beginning of an episode.
- (-1) When a snowball hits our team.
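The reward logic above can be summarised by the following illustrative Python sketch (one possible reading of the description; the actual implementation is the Unity C# agent code, not this Python):
```python
# Illustrative sketch of the reward scheme described above (one possible reading).
def snowball_hit_rewards(shooter_team, victim_team, victim_is_dead, accumulated_time_penalty):
    rewards = {shooter_team: 0.1}  # opponent injured: 0.1 to the shooter
    if victim_is_dead:
        rewards[shooter_team] = 1.0 - accumulated_time_penalty  # reward for eliminating the opponent
        rewards[victim_team] = -1.0  # penalty for the team that was hit
    return rewards
```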
### Addendum
- There **is no friendly fire**, which means that an agent can't shoot itself and, in a future 2vs2 game, won't be able to shoot a teammate.
## How to use it
### Set-up the environment
1. Clone this project `git clone https://huggingface.co/ThomasSimonini/ML-Agents-SnowballFight-1vs1`
2. Open Unity Hub and create a new 3D Project
3. In the cloned project folder, open `.\ML-Agents-SnowballFight-1vs1\packages` and copy manifest.json and package.lock.json
4. Paste these two files in `Your Unity Project\Packages` => this will install the required packages.
5. Drop the SnowballFight-1vs1 unity package to your Unity Project.
### Watch the trained agents
6. If you want to watch the trained agents, open `Assets\1vs1\Scenes\1vs1_v2_Training` and place `\ML-Agents-SnowballFight-1vs1\saved_model\SnowballFight1vs1-4999988.onnx` into the BlueAgent and PurpleAgent Model fields.
### Train the agent
6. If you want to train it again, the scene is `Assets\1vs1\Scenes\1vs1_v2_Training`.
## Training info
- SnowballFight1vs1 was trained with 5100000 steps.
- The final ELO score was 1766.452.
### Config File
```yaml
behaviors:
SnowballFight1vs1:
trainer_type: ppo
hyperparameters:
batch_size: 2048
buffer_size: 20480
learning_rate: 0.0003
beta: 0.005
epsilon: 0.2
lambd: 0.95
num_epoch: 3
learning_rate_schedule: constant
network_settings:
normalize: false
hidden_units: 512
num_layers: 2
vis_encode_type: simple
reward_signals:
extrinsic:
gamma: 0.99
strength: 1.0
keep_checkpoints: 40
checkpoint_interval: 200000
max_steps: 50000000
time_horizon: 1000
summary_freq: 50000
self_play:
save_steps: 50000
team_change: 200000
swap_steps: 2000
window: 10
play_against_latest_model_ratio: 0.5
initial_elo: 1200.0
```
| 09ea56e52c4ce8af85d11a9d8962ff54 |
Helsinki-NLP/opus-mt-ga-en | Helsinki-NLP | marian | 11 | 726 | transformers | 0 | translation | true | true | false | apache-2.0 | ['ga', 'en'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,980 | false |
### gle-eng
* source group: Irish
* target group: English
* OPUS readme: [gle-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md)
* model: transformer-align
* source language(s): gle
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.eval.txt)
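## How to use
A minimal example with the `transformers` library (the Irish sentence is only illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ga-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Tá an aimsir go maith inniu."]  # "The weather is good today."
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```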
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.gle.eng | 51.6 | 0.672 |
### System Info:
- hf_name: gle-eng
- source_languages: gle
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ga', 'en']
- src_constituents: {'gle'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt
- src_alpha3: gle
- tgt_alpha3: eng
- short_pair: ga-en
- chrF2_score: 0.672
- bleu: 51.6
- brevity_penalty: 1.0
- ref_len: 11247.0
- src_name: Irish
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: ga
- tgt_alpha2: en
- prefer_old: False
- long_pair: gle-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | dbf951a90f8d809e33554c8d83797c1c |
JuanAlbert/nere | JuanAlbert | null | 26 | 5 | diffusers | 0 | null | false | false | false | mit | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,515 | false | ### nere on Stable Diffusion via Dreambooth
#### model by JuanAlbert
This is the Stable Diffusion model fine-tuned on the nere concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **nere**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
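For a quick local test, a minimal `diffusers` sketch (the prompt, precision and device are only an example; drop `torch_dtype` and `.to("cuda")` to run on CPU):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("JuanAlbert/nere", torch_dtype=torch.float16).to("cuda")

# The instance prompt token "nere" triggers the learned concept.
image = pipe("a photo of nere, highly detailed", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("nere.png")
```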
Here are the images used for training this concept:








| 5a1eb88187a898834e527273a4b5d737 |
RamiEbeid/hubert-base-ser | RamiEbeid | hubert | 14 | 2 | transformers | 0 | null | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 5,849 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-ser
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the Crema dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0105
- Accuracy: 0.6313
## Model description
More information needed
## Intended uses & limitations
More information needed
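As a starting point, a minimal inference sketch; it assumes the checkpoint exposes an audio-classification head with the Crema emotion labels and ships its feature extractor (the audio file name is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="RamiEbeid/hubert-base-ser")

# Any mono 16 kHz speech file path (or a NumPy array) works here.
print(classifier("speech_sample.wav"))
```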
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8106 | 0.01 | 10 | 1.7616 | 0.1974 |
| 1.7268 | 0.03 | 20 | 1.7187 | 0.2525 |
| 1.7269 | 0.04 | 30 | 1.6442 | 0.3096 |
| 1.7086 | 0.05 | 40 | 1.5834 | 0.3338 |
| 1.6983 | 0.07 | 50 | 1.6195 | 0.3600 |
| 1.5845 | 0.08 | 60 | 1.5753 | 0.3418 |
| 1.5744 | 0.09 | 70 | 1.5669 | 0.3707 |
| 1.5915 | 0.11 | 80 | 1.5412 | 0.3754 |
| 1.5105 | 0.12 | 90 | 2.0037 | 0.2612 |
| 1.4689 | 0.13 | 100 | 1.5440 | 0.3627 |
| 1.527 | 0.15 | 110 | 1.5400 | 0.3862 |
| 1.6481 | 0.16 | 120 | 1.6678 | 0.3298 |
| 1.7504 | 0.17 | 130 | 1.6078 | 0.2995 |
| 1.3748 | 0.19 | 140 | 1.5750 | 0.3251 |
| 1.6417 | 0.2 | 150 | 1.7034 | 0.2599 |
| 1.6146 | 0.21 | 160 | 1.6162 | 0.3519 |
| 1.4896 | 0.23 | 170 | 1.5245 | 0.3741 |
| 1.4278 | 0.24 | 180 | 1.7537 | 0.2424 |
| 1.4475 | 0.26 | 190 | 1.4769 | 0.3882 |
| 1.5416 | 0.27 | 200 | 1.4772 | 0.3949 |
| 1.5997 | 0.28 | 210 | 1.4428 | 0.4278 |
| 1.4337 | 0.3 | 220 | 1.4352 | 0.4124 |
| 1.415 | 0.31 | 230 | 1.4405 | 0.4157 |
| 1.5196 | 0.32 | 240 | 1.4197 | 0.4043 |
| 1.3866 | 0.34 | 250 | 1.5241 | 0.3734 |
| 1.3041 | 0.35 | 260 | 1.5703 | 0.4043 |
| 1.3618 | 0.36 | 270 | 1.3963 | 0.4285 |
| 1.3293 | 0.38 | 280 | 1.3478 | 0.4506 |
| 1.2215 | 0.39 | 290 | 1.5994 | 0.3842 |
| 1.6618 | 0.4 | 300 | 1.7751 | 0.2277 |
| 1.5349 | 0.42 | 310 | 1.6091 | 0.4036 |
| 1.4037 | 0.43 | 320 | 1.4741 | 0.4446 |
| 1.4844 | 0.44 | 330 | 1.4170 | 0.4399 |
| 1.2806 | 0.46 | 340 | 1.2887 | 0.5050 |
| 1.3818 | 0.47 | 350 | 1.2668 | 0.5017 |
| 1.3491 | 0.48 | 360 | 1.4721 | 0.4594 |
| 1.2347 | 0.5 | 370 | 1.2188 | 0.5245 |
| 1.2182 | 0.51 | 380 | 1.3813 | 0.4567 |
| 1.2513 | 0.52 | 390 | 1.2111 | 0.5205 |
| 1.2447 | 0.54 | 400 | 1.2231 | 0.5460 |
| 1.038 | 0.55 | 410 | 1.2563 | 0.5373 |
| 1.2409 | 0.56 | 420 | 1.3448 | 0.4936 |
| 1.2279 | 0.58 | 430 | 1.1972 | 0.5487 |
| 1.3256 | 0.59 | 440 | 1.1706 | 0.5742 |
| 1.2866 | 0.6 | 450 | 1.3091 | 0.5003 |
| 1.0574 | 0.62 | 460 | 1.2075 | 0.5500 |
| 1.2744 | 0.63 | 470 | 1.2831 | 0.5171 |
| 1.0836 | 0.64 | 480 | 1.1768 | 0.5608 |
| 1.135 | 0.66 | 490 | 1.1408 | 0.5776 |
| 1.1303 | 0.67 | 500 | 1.2320 | 0.5541 |
| 1.2068 | 0.69 | 510 | 1.1379 | 0.5796 |
| 1.1347 | 0.7 | 520 | 1.1124 | 0.5897 |
| 1.1846 | 0.71 | 530 | 1.1338 | 0.5803 |
| 1.2409 | 0.73 | 540 | 1.1259 | 0.5789 |
| 1.0664 | 0.74 | 550 | 1.0653 | 0.6038 |
| 1.1637 | 0.75 | 560 | 1.0550 | 0.5977 |
| 1.0707 | 0.77 | 570 | 1.0996 | 0.5715 |
| 1.2258 | 0.78 | 580 | 1.0804 | 0.5977 |
| 0.9256 | 0.79 | 590 | 1.1501 | 0.5809 |
| 1.1542 | 0.81 | 600 | 1.1089 | 0.5957 |
| 1.3931 | 0.82 | 610 | 1.1381 | 0.5856 |
| 1.1117 | 0.83 | 620 | 1.0933 | 0.6031 |
| 1.1433 | 0.85 | 630 | 1.0175 | 0.6219 |
| 1.0325 | 0.86 | 640 | 0.9885 | 0.6239 |
| 1.111 | 0.87 | 650 | 1.0048 | 0.6259 |
| 0.8125 | 0.89 | 660 | 1.0176 | 0.6165 |
| 1.0414 | 0.9 | 670 | 1.0290 | 0.6185 |
| 1.0037 | 0.91 | 680 | 1.0269 | 0.6253 |
| 0.9406 | 0.93 | 690 | 1.0301 | 0.6273 |
| 1.0129 | 0.94 | 700 | 1.0238 | 0.6326 |
| 1.2213 | 0.95 | 710 | 1.0181 | 0.6273 |
| 1.2519 | 0.97 | 720 | 1.0161 | 0.6266 |
| 0.9932 | 0.98 | 730 | 1.0112 | 0.6279 |
| 1.0135 | 0.99 | 740 | 1.0105 | 0.6313 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.5.dev0
- Tokenizers 0.11.6
| d1805d2ebfce55cd97469f26d1386635 |
sherry7144/wav2vec2-base-timit-demo-colab1 | sherry7144 | wav2vec2 | 14 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,341 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0358
- Wer: 0.5729
## Model description
More information needed
## Intended uses & limitations
More information needed
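As a starting point, a minimal transcription sketch (the audio file name is a placeholder; the model expects 16 kHz mono speech):
```python
import librosa
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "sherry7144/wav2vec2-base-timit-demo-colab1"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("speech_sample.wav", sr=16000)  # resample to 16 kHz
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```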
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3217 | 13.89 | 500 | 0.8951 | 0.5834 |
| 0.2263 | 27.78 | 1000 | 1.0358 | 0.5729 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| 72fff58a3796945395906c45e2760ca7 |
birdaz/sc-style | birdaz | null | 20 | 2 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 416 | false | ### sc_style Dreambooth model trained by birdaz with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 1fc2e5a89ceb546babc0f6b01c2d164b |
Helsinki-NLP/opus-mt-uk-bg | Helsinki-NLP | marian | 11 | 13 | transformers | 0 | translation | true | true | false | apache-2.0 | ['uk', 'bg'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,018 | false |
### ukr-bul
* source group: Ukrainian
* target group: Bulgarian
* OPUS readme: [ukr-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-bul/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): bul
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.eval.txt)
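## How to use
A minimal example using the `transformers` translation pipeline (the Ukrainian sentence is only illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-uk-bg")
print(translator("Я люблю читати книжки."))  # "I love reading books."
```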
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.bul | 55.7 | 0.734 |
### System Info:
- hf_name: ukr-bul
- source_languages: ukr
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'bg']
- src_constituents: {'ukr'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: bul
- short_pair: uk-bg
- chrF2_score: 0.7340000000000001
- bleu: 55.7
- brevity_penalty: 0.976
- ref_len: 5181.0
- src_name: Ukrainian
- tgt_name: Bulgarian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: bg
- prefer_old: False
- long_pair: ukr-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 6b0852c7fe3ec21bb8936907cc7463d7 |
bigmorning/whisper_wermet_nosup_0005 | bigmorning | whisper | 7 | 3 | transformers | 0 | automatic-speech-recognition | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,976 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_wermet_nosup_0005
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4268
- Train Accuracy: 0.0262
- Train Wermet: 23.3380
- Validation Loss: 1.2097
- Validation Accuracy: 0.0279
- Validation Wermet: 18.7331
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0860 | 0.0116 | 45.4352 | 4.4455 | 0.0124 | 36.1611 | 0 |
| 4.3098 | 0.0131 | 29.4890 | 4.0321 | 0.0144 | 24.9514 | 1 |
| 3.6711 | 0.0160 | 25.7380 | 2.7995 | 0.0205 | 32.2126 | 2 |
| 2.2582 | 0.0224 | 31.5946 | 1.6772 | 0.0257 | 23.9282 | 3 |
| 1.4268 | 0.0262 | 23.3380 | 1.2097 | 0.0279 | 18.7331 | 4 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
| 0eab862f95bd0d579f7a0c52cb4de020 |
MartinoMensio/racism-models-regression-w-m-vote-epoch-1 | MartinoMensio | bert | 4 | 4 | transformers | 0 | text-classification | true | false | false | mit | ['es'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 6,200 | false |
### Description
This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022)
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022)
We applied 6 different ground-truth estimation methods, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `regression-w-m-vote-epoch-1`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
from transformers.pipelines import TextClassificationPipeline
class TextRegressionPipeline(TextClassificationPipeline):
"""
Class based on the TextClassificationPipeline from transformers.
The difference is that instead of being based on a classifier, it is based on a regressor.
You can specify the regression threshold when you call the pipeline or when you instantiate the pipeline.
"""
def __init__(self, **kwargs):
"""
Builds a new Pipeline based on regression.
regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label.
"""
self.regression_threshold = kwargs.pop("regression_threshold", None)
super().__init__(**kwargs)
def __call__(self, *args, **kwargs):
"""
You can also specify the regression threshold when you call the pipeline.
regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label.
"""
self.regression_threshold_call = kwargs.pop("regression_threshold", None)
result = super().__call__(*args, **kwargs)
return result
def postprocess(self, model_outputs, function_to_apply=None, return_all_scores=False):
outputs = model_outputs["logits"][0]
outputs = outputs.numpy()
scores = outputs
score = scores[0]
regression_threshold = self.regression_threshold
# override the specific threshold if it is specified in the call
if self.regression_threshold_call:
regression_threshold = self.regression_threshold_call
if regression_threshold:
return {"label": 'racist' if score > regression_threshold else 'non-racist', "score": score}
else:
return {"score": score}
model_name = 'regression-w-m-vote-epoch-1'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = TextRegressionPipeline(model=model, tokenizer=tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
# just get the score of regression
print(pipe(texts))
# [{'score': 0.8378907}, {'score': 0.33399782}]
# or also specify a threshold to cut racist/non-racist
print(pipe(texts, regression_threshold=0.9))
# [{'label': 'non-racist', 'score': 0.8378907}, {'label': 'non-racist', 'score': 0.33399782}]
```
For more details, see https://github.com/preyero/neatclass22
| 0266c97a1437936d48a6dbe2d639faf6 |
thaonguyen274/vit-base-patch16-224-finetuned-imageclassification | thaonguyen274 | vit | 18 | 8 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,910 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-imageclassification
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1790
- Accuracy: 0.9502
## Model description
More information needed
## Intended uses & limitations
More information needed
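As a starting point, a minimal inference sketch; the predicted labels come from whatever classes were present in the fine-tuning imagefolder dataset (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="thaonguyen274/vit-base-patch16-224-finetuned-imageclassification",
)

# Accepts a local path, a PIL image, or an image URL.
print(classifier("example.jpg", top_k=3))
```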
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 9 | 0.5791 | 0.9004 |
| 1.4122 | 2.0 | 18 | 0.2002 | 0.9359 |
| 0.3147 | 3.0 | 27 | 0.1717 | 0.9502 |
| 0.1907 | 4.0 | 36 | 0.1632 | 0.9466 |
| 0.158 | 5.0 | 45 | 0.1822 | 0.9466 |
| 0.1169 | 6.0 | 54 | 0.1778 | 0.9502 |
| 0.0984 | 7.0 | 63 | 0.1552 | 0.9573 |
| 0.0971 | 8.0 | 72 | 0.1835 | 0.9502 |
| 0.0965 | 9.0 | 81 | 0.1878 | 0.9484 |
| 0.0766 | 10.0 | 90 | 0.1790 | 0.9502 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| 5401637af0e02cfd85fe117e411b3d2f |
shirayu/sd-tohoku-v1 | shirayu | null | 20 | 50 | diffusers | 11 | text-to-image | false | false | false | creativeml-openrail-m | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | true | true | 4,641 | false |
This model was trained with DreamBooth on character illustrations of the following five characters from the [Tohoku Zunko Project](https://zunko.jp/).
- ``itako``: Tohoku Itako
- ``zunko``: Tohoku Zunko
- ``kiritan``: Tohoku Kiritan
- ``zundamon``: Zundamon (human form)
- ``metan``: Shikoku Metan
The training images deliberately vary the outfits as much as possible, so the characters' "official costumes" are hard to reproduce.
🔈 A model trained on more characters, [shirayu/sd-tohoku-v2](https://huggingface.co/shirayu/sd-tohoku-v2), has been released (2023-01-04).
## File formats
1. For tools that load ckpt files, such as [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
   Download and load [sd-tohoku-v1.model.ckpt](https://huggingface.co/shirayu/sd-tohoku-v1/resolve/main/sd-tohoku-v1.model.ckpt) (about 2 GB).
2. When using it from [diffusers](https://github.com/huggingface/diffusers)
```python
from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("shirayu/sd-tohoku-v1")
```
## Introduction video
<a href="https://www.nicovideo.jp/watch/sm41313614">
<img src="https://img.cdn.nimg.jp/s/nicovideo/thumbnails/41313614/41313614.80180214.original/r1280x720l?key=23adae7a647d3afa1049dc9c39204802d20870ca260b75939dd016ba127cebd8" width="500" alt="Drawing Tohoku Zunko Project characters with AI!">Drawing Tohoku Zunko Project characters with AI! (Niconico video)
</a>
## License
[CreativeML Open RAIL-M license 1.0](https://hf.space/static/bigscience/license/index.html)
Please also be mindful of applicable laws and guidelines.
For example, if a generated image contains Tohoku Zunko Project characters,
please use it in accordance with the [Tohoku Zunko Project character usage guidelines](https://zunko.jp/guideline.html).
## Training settings
- Source model: [Nilaier/Waifu-Diffusers](https://huggingface.co/Nilaier/Waifu-Diffusers) (fbd1958)
  - Base model: [hakurei/waifu-diffusion-v1-3](https://huggingface.co/hakurei/waifu-diffusion-v1-3)
  - VAE: [hakurei/waifu-diffusion-v1-4](https://huggingface.co/hakurei/waifu-diffusion-v1-4)
- Training images
  - 69 images in total across 5 characters
    - itako: Tohoku Itako, 18 images
    - zunko: Tohoku Zunko, 13 images
    - kiritan: Tohoku Kiritan, 13 images
    - zundamon: Zundamon (human form), 9 images
    - metan: Shikoku Metan, 16 images
  - Alpha channel removed + white background + centered + resized to 512x512
- Training code: [ShivamShrirao/diffusers](https://github.com/ShivamShrirao/diffusers) (``7232c2a``)
  - [``examples/dreambooth/train_dreambooth.py``](https://github.com/ShivamShrirao/diffusers/blob/7232c2a/examples/dreambooth/train_dreambooth.py)
- Training configuration
  - Instance IDs: ``itako``, ``kiritan``, ``zunko``, ``metan``, ``zundamon`` (5 types)
  - Instance prompt: ``<ID> 1girl``
  - About 110 minutes on a Tesla T4
  - Other settings:
```txt
--prior_loss_weight=0.5 \
--seed=3434554 \
--resolution=512 \
--center_crop \
--train_batch_size=1 \
--train_text_encoder \
--mixed_precision="fp16" \
--use_8bit_adam \
--gradient_checkpointing \
--gradient_accumulation_steps=2 \
--learning_rate=1e-6 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--num_class_images=50 \
--sample_batch_size=3 \
--max_train_steps=8000
```
## Images used for training
<img src="https://pbs.twimg.com/media/Ff6FF1NaMAAL8N5?format=jpg&name=small" width="500" alt="Images used for training">
## Generation examples
<img src="https://pbs.twimg.com/media/Ff6AgzyaMAExeb3?format=png&name=900x900" width="500" alt="Generation example of Tohoku Kiritan">
```txt
kiritan, 1girl, volleyball, kawaii, in gymnasium, head
Negative prompt: chibi, out of frame, armature drawing, mutated hands and fingers, poor drawing, amateur, bad painting, bad painting of arms, bad anatomy, mutation, extra limbs, ugly, fat
Steps: 40, Sampler: Euler a, CFG scale: 7.5, Seed: 575469807, Size: 704x512
```
<img src="https://pbs.twimg.com/media/Ff6Ank1aYAY7bxk?format=png&name=900x900" width="500" alt="ずんだもんの生成例">
```txt
zundamon , maid dress, in cafe, Kyoto Animation
Negative prompt: chibi, out of frame, armature drawing, mutated hands and fingers, poor drawing, amateur, bad painting, bad painting of arms, bad anatomy, mutation, extra limbs, ugly, fat
Steps: 40, Sampler: Euler a, CFG scale: 7.5, Seed: 429473516, Size: 512x704
```
<img src="https://pbs.twimg.com/media/Ff6AuXoakAAPtYa?format=png&name=900x900" width="500" alt="東北イタコの生成例">
```txt
itako, dating in park, cute winter fashion
Negative prompt: out of frame, amateur drawing, mutated hands and fingers, poor drawing, amateur, bad painting, bad painting of arms, bad anatomy, mutation, extra limbs, ugly, fat
Steps: 60, Sampler: Euler a, CFG scale: 7.5, Seed: 2722676181, Size: 704x512
```
<img src="https://pbs.twimg.com/media/Ff6A2lQakAAj1Bb?format=png&name=small" width="500" alt="東北ずん子と四国めたんの生成例">
```txt
zunko and metan sit on bench, in school uniform, drink tea, 2girls, in 2020s anime style
Negative prompt: chibi, armature drawing, mutated hands and fingers, poor drawing, amateur, bad painting, bad painting of arms, bad anatomy, mutation, extra limbs, ugly
Steps: 40, Sampler: Euler a, CFG scale: 7.5, Seed: 2262270937, Size: 640x512
```
| d8cb2de2aa4f2cb6ad6b42159a92f9fd |
TransQuest/monotransquest-hter-en_any | TransQuest | xlm-roberta | 12 | 8 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en-multilingual'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['Quality Estimation', 'monotransquest', 'HTER'] | false | true | true | 5,306 | false |
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages tested.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_any", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
``` | ca266748aaa6abc78ca56435ea5257f3 |
espnet/kan-bayashi_csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave | espnet | null | 27 | 6 | espnet | 0 | text-to-speech | false | false | false | cc-by-4.0 | ['zh'] | ['csmsc'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'text-to-speech'] | false | true | true | 1,853 | false | ## ESPnet2 TTS pretrained model
### `kan-bayashi/csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5499120/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
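The official snippet is still marked as coming soon; in the meantime, the sketch below follows the usual ESPnet2 `Text2Speech` inference interface (requires `espnet` and `espnet_model_zoo`; treat it as an untested example, and note that the input text should be Mandarin since the model was trained on CSMSC):
```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave"
)

output = text2speech("你好,很高兴认识你。")
sf.write("out.wav", output["wav"].numpy(), text2speech.fs)
```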
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | deadc4e6e5ad5bc8597d97c940aede5d |
krlvi/sentence-t5-base-nlpl-code_search_net | krlvi | t5 | 14 | 10 | sentence-transformers | 0 | sentence-similarity | true | false | false | agpl-3.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity'] | false | true | true | 2,503 | false |
# sentence-t5-base-nlpl-code_search_net
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
It has been trained on the [code_search_net](https://huggingface.co/datasets/code_search_net) dataset.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('krlvi/sentence-t5-base-nlpl-code_search_net')
embeddings = model.encode(sentences)
print(embeddings)
```
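Because the model was fine-tuned on code_search_net, a natural application is natural-language-to-code retrieval; a small sketch with illustrative snippets:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("krlvi/sentence-t5-base-nlpl-code_search_net")

query = "check whether a string is a palindrome"
snippets = [
    "def is_palindrome(s):\n    return s == s[::-1]",
    "def factorial(n):\n    return 1 if n <= 1 else n * factorial(n - 1)",
]

query_emb = model.encode(query, convert_to_tensor=True)
snippet_embs = model.encode(snippets, convert_to_tensor=True)
print(util.cos_sim(query_emb, snippet_embs))  # higher score = better match
```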
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=krlvi/sentence-t5-base-nlpl-code_search_net)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 58777 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | e89b93056bc8b9cd4215dff9872dd9fe |
Helsinki-NLP/opus-tatoeba-he-fr | Helsinki-NLP | marian | 12 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | ['he', 'fr'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,022 | false | ### he-fr
* source group: Hebrew
* target group: French
* OPUS readme: [heb-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-fra/README.md)
* model: transformer
* source language(s): heb
* target language(s): fra
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.eval.txt)
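## How to use
A minimal example with the `transformers` Auto classes (the Hebrew sentence is only illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Helsinki-NLP/opus-tatoeba-he-fr"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer(["אני אוהב ללמוד שפות."], return_tensors="pt", padding=True)
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```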
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.fra | 47.3 | 0.644 |
### System Info:
- hf_name: he-fr
- source_languages: heb
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'fr']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('French', {'fra'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-fra
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.test.txt
- src_alpha3: heb
- tgt_alpha3: fra
- chrF2_score: 0.644
- bleu: 47.3
- brevity_penalty: 0.9740000000000001
- ref_len: 26123.0
- src_name: Hebrew
- tgt_name: French
- train_date: 2020-12-10 00:00:00
- src_alpha2: he
- tgt_alpha2: fr
- prefer_old: False
- short_pair: he-fr
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:03 | 4b9a60d9fb4e1ab15443c349325f4184 |
rufimelo/Legal-BERTimbau-large-TSDAE-v5 | rufimelo | bert | 12 | 16 | transformers | 0 | feature-extraction | true | false | false | mit | ['pt'] | ['rufimelo/PortugueseLegalSentences-v3'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'pytorch', 'tsdae'] | false | true | true | 3,828 | false |
# Legal_BERTimbau
## Introduction
Legal_BERTimbau Large is a fine-tuned BERT model based on [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) Large.
"BERTimbau Large is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large.
For further information or requests, please go to [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/)."
The performance of Language Models can change drastically when there is a domain shift between training and test data. In order to create a Portuguese language model adapted to the legal domain, the original BERTimbau model was submitted to a fine-tuning stage in which one "pre-training" epoch was performed over 400,000 cleaned documents (lr: 1e-5, using the TSDAE technique).
## Available models
| Model | Arch. | #Layers | #Params |
| ---------------------------------------- | ---------- | ------- | ------- |
|`rufimelo/Legal-BERTimbau-base` |BERT-Base |12 |110M|
| `rufimelo/Legal-BERTimbau-large` | BERT-Large | 24 | 335M |
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("rufimelo/Legal-BERTimbau-large-TSDAE-v3")
model = AutoModelForMaskedLM.from_pretrained("rufimelo/Legal-BERTimbau-large-TSDAE-v3")
```
### Masked language modeling prediction example
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("rufimelo/Legal-BERTimbau-large-TSDAE")
model = AutoModelForMaskedLM.from_pretrained("rufimelo/Legal-BERTimbau-large-TSDAE")
pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer)
pipe('O advogado apresentou [MASK] para o juíz')
# [{'score': 0.5034703612327576,
#'token': 8190,
#'token_str': 'recurso',
#'sequence': 'O advogado apresentou recurso para o juíz'},
#{'score': 0.07347951829433441,
#'token': 21973,
#'token_str': 'petição',
#'sequence': 'O advogado apresentou petição para o juíz'},
#{'score': 0.05165359005331993,
#'token': 4299,
#'token_str': 'resposta',
#'sequence': 'O advogado apresentou resposta para o juíz'},
#{'score': 0.04611917585134506,
#'token': 5265,
#'token_str': 'exposição',
#'sequence': 'O advogado apresentou exposição para o juíz'},
#{'score': 0.04068068787455559,
#'token': 19737, 'token_str':
#'alegações',
#'sequence': 'O advogado apresentou alegações para o juíz'}]
```
### For BERT embeddings
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-large-TSDAE')
model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-large-TSDAE')
input_ids = tokenizer.encode('O advogado apresentou recurso para o juíz', return_tensors='pt')
with torch.no_grad():
outs = model(input_ids)
encoded = outs[0][0, 1:-1]
#tensor([[ 0.0328, -0.4292, -0.6230, ..., -0.3048, -0.5674, 0.0157],
#[-0.3569, 0.3326, 0.7013, ..., -0.7778, 0.2646, 1.1310],
#[ 0.3169, 0.4333, 0.2026, ..., 1.0517, -0.1951, 0.7050],
#...,
#[-0.3648, -0.8137, -0.4764, ..., -0.2725, -0.4879, 0.6264],
#[-0.2264, -0.1821, -0.3011, ..., -0.5428, 0.1429, 0.0509],
#[-1.4617, 0.6281, -0.0625, ..., -1.2774, -0.4491, 0.3131]])
```
## Citation
If you use this work, please cite BERTimbau's work:
```bibtex
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
```
| 4fd0300944ac174709c636c9a47ca265 |
tftransformers/albert-base-v2 | tftransformers | null | 6 | 2 | transformers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 6,472 | false |
# ALBERT Base v2
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the second version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
You can use this model directly to extract contextual features.
In tf_transformers:
```python
from tf_transformers.models import AlbertModel
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertModel.from_pretrained("albert-base-v2")
text = "Replace me by any text you'd like."
inputs_tf = {}
inputs = tokenizer(text, return_tensors='tf')
inputs_tf["input_ids"] = inputs["input_ids"]
inputs_tf["input_type_ids"] = inputs["token_type_ids"]
inputs_tf["input_mask"] = inputs["attention_mask"]
outputs_tf = model(inputs_tf)
```
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 80f5bfe111798a9f38fed9597cd4424d |
pritamdeka/PubMedBert-PubMed200kRCT | pritamdeka | bert | 17 | 2 | transformers | 2 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,690 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PubMedBert-PubMed200kRCT
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [PubMed200kRCT](https://github.com/Franck-Dernoncourt/pubmed-rct/tree/master/PubMed_200k_RCT) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2833
- Accuracy: 0.8942
## Model description
More information needed
## Intended uses & limitations
The model can be used for text classification of Randomized Controlled Trials that do not have any structure. The text can be classified as one of the following:
* BACKGROUND
* CONCLUSIONS
* METHODS
* OBJECTIVE
* RESULTS
The model can be directly used like this:
```python
from transformers import TextClassificationPipeline
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("pritamdeka/PubMedBert-PubMed200kRCT")
tokenizer = AutoTokenizer.from_pretrained("pritamdeka/PubMedBert-PubMed200kRCT")
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
pipe("Treatment of 12 healthy female subjects with CDCA for 2 days resulted in increased BAT activity.")
```
Results will be shown as follows:
```python
[[{'label': 'BACKGROUND', 'score': 0.0028450002428144217},
{'label': 'CONCLUSIONS', 'score': 0.2581048607826233},
{'label': 'METHODS', 'score': 0.015086210332810879},
{'label': 'OBJECTIVE', 'score': 0.0016815993003547192},
{'label': 'RESULTS', 'score': 0.7222822904586792}]]
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3604 | 0.14 | 5000 | 0.3162 | 0.8821 |
| 0.3326 | 0.29 | 10000 | 0.3112 | 0.8843 |
| 0.3293 | 0.43 | 15000 | 0.3044 | 0.8870 |
| 0.3246 | 0.58 | 20000 | 0.3040 | 0.8871 |
| 0.32 | 0.72 | 25000 | 0.2969 | 0.8888 |
| 0.3143 | 0.87 | 30000 | 0.2929 | 0.8903 |
| 0.3095 | 1.01 | 35000 | 0.2917 | 0.8899 |
| 0.2844 | 1.16 | 40000 | 0.2957 | 0.8886 |
| 0.2778 | 1.3 | 45000 | 0.2943 | 0.8906 |
| 0.2779 | 1.45 | 50000 | 0.2890 | 0.8935 |
| 0.2752 | 1.59 | 55000 | 0.2881 | 0.8919 |
| 0.2736 | 1.74 | 60000 | 0.2835 | 0.8944 |
| 0.2725 | 1.88 | 65000 | 0.2833 | 0.8942 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
## Citing & Authors
<!--- Describe where people can find more information -->
If you use the model kindly cite the following work
```
@inproceedings{deka2022evidence,
title={Evidence Extraction to Validate Medical Claims in Fake News Detection},
author={Deka, Pritam and Jurek-Loughrey, Anna and others},
booktitle={International Conference on Health Information Science},
pages={3--15},
year={2022},
organization={Springer}
}
```
| ae514eb9c77e2db9fe2479b6c0ba1d5b |
salesken/clariq_gpt2 | salesken | null | 11 | 6 | null | 1 | null | true | false | true | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['salesken', 'gpt2', 'lm-head', 'causal-lm', 'salesken'] | false | true | true | 3,052 | false |
The ClariQ challenge [3] is organized as part of the Search-oriented Conversational AI (SCAI) EMNLP workshop in 2020. The main aim of conversational systems is to return an appropriate answer in response to user requests. However, some user requests might be ambiguous. In Information Retrieval (IR) settings such a situation is handled mainly through diversification of the search result page. It is, however, much more challenging in dialogue settings. Hence, we aim to study the following situation for dialogue settings:<br />
A user asks an ambiguous question (an ambiguous question being one to which more than one possible answer can be returned); instead of trying to answer it directly, ask a good clarifying question.
__Query: Serve your models directly from Hugging Face infrastructure and run large scale NLP models in milliseconds with just a few lines of code__
***Top 5 clarifications generated:*** <br />
- are you looking for a suitable cloud platform to run your models on (Score: 0.3862) <br />
- are you looking for a quick test or a more complex model (Score: 0.3364) <br />
- how would you like your nlp model to be used (Score: 0.3249) <br />
- are you looking for a suitable ldl to use as a server or a client (Score: 0.3182) <br />
- how would you like to consume the nlp model (Score: 0.2842) <br />
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("salesken/clariq_gpt2")
model = AutoModelWithLMHead.from_pretrained("salesken/clariq_gpt2")
input_query="Serve your models directly from Hugging Face infrastructure and run large scale NLP models in milliseconds with just a few lines of code"
query= input_query + " ~~ "
input_ids = tokenizer.encode(query.lower(), return_tensors='pt')
sample_outputs = model.generate(input_ids,
do_sample=True,
num_beams=1,
max_length=128,
temperature=0.9,
top_k = 40,
num_return_sequences=10)
clarifications_gen = []
for i in range(len(sample_outputs)):
r = tokenizer.decode(sample_outputs[i], skip_special_tokens=True).split('||')[0]
r = r.split(' ~~ ~~')[1]
if r not in clarifications_gen:
clarifications_gen.append(r)
print(clarifications_gen)
# to select the top n results:
from sentence_transformers import SentenceTransformer, util
import torch
embedder = SentenceTransformer('paraphrase-distilroberta-base-v1')
corpus = clarifications_gen
corpus_embeddings = embedder.encode(corpus, convert_to_tensor=True)
query = input_query.lower()
query_embedding = embedder.encode(query, convert_to_tensor=True)
cos_scores = util.pytorch_cos_sim(query_embedding, corpus_embeddings)[0]
top_results = torch.topk(cos_scores, k=5)
print("Top clarifications generated :")
for score, idx in zip(top_results[0], top_results[1]):
print(corpus[idx], "(Score: {:.4f})".format(score))
``` | 0a2d9ee399165183b4bdcc95c0092ef7 |
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qnli | gokuls | distilbert | 17 | 0 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,640 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4295
- Accuracy: 0.6072
## Model description
More information needed
## Intended uses & limitations
More information needed
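A minimal sketch of running the checkpoint on a QNLI-style (question, sentence) pair, assuming it loads with the stock DistilBERT sequence-classification classes (the card's metadata lists it as a distilbert text-classification model); the example pair is illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "What is the capital of France?"
sentence = "Paris has been the capital of France since the 10th century."

# QNLI is a sentence-pair task: encode the question and the candidate sentence together.
inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(dim=-1).item()
# Label names depend on the saved config; GLUE's convention is 0 = entailment, 1 = not_entailment.
print(model.config.id2label[predicted])
```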
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2848 | 1.0 | 16604 | 0.4295 | 0.6072 |
| 0.2335 | 2.0 | 33208 | 0.4441 | 0.6094 |
| 0.2245 | 3.0 | 49812 | 0.4457 | 0.6083 |
| 0.2209 | 4.0 | 66416 | 0.4434 | 0.6174 |
| 0.219 | 5.0 | 83020 | 0.4415 | 0.6152 |
| 0.2179 | 6.0 | 99624 | 0.4555 | 0.6125 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| bb57d0e4781d61bd9e0710c876634c9e |
mirfan899/t5-e2e-questions-generation | mirfan899 | t5 | 11 | 47 | transformers | 1 | text2text-generation | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,649 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-e2e-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
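A minimal generation sketch, assuming the checkpoint follows the common end-to-end question-generation recipe of prefixing the passage with `generate questions:` (the exact serialization used during fine-tuning is not documented here, so the prefix and the passage are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mirfan899/t5-e2e-questions-generation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed input format: a task prefix followed by the passage to generate questions from.
text = (
    "generate questions: The Amazon rainforest covers most of the Amazon basin of "
    "South America and is home to an estimated 390 billion individual trees."
)
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```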
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 295 | 1.6673 |
| 1.9714 | 2.0 | 590 | 1.6021 |
| 1.9714 | 3.0 | 885 | 1.5820 |
| 1.6225 | 4.0 | 1180 | 1.5665 |
| 1.6225 | 5.0 | 1475 | 1.5643 |
| 1.5252 | 6.0 | 1770 | 1.5676 |
| 1.4558 | 7.0 | 2065 | 1.5581 |
| 1.4558 | 8.0 | 2360 | 1.5600 |
| 1.4169 | 9.0 | 2655 | 1.5604 |
| 1.4169 | 10.0 | 2950 | 1.5634 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| cc93f5b25406dd4177c9a6cab717db6c |
ArafatBHossain/bert_uncased_fine_tuned_mind | ArafatBHossain | bert | 10 | 11 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,323 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_fine_tuned_mind
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3504
- Accuracy: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6499 | 1.0 | 3054 | 0.5049 | 0.8294 |
| 0.4181 | 2.0 | 6108 | 0.3150 | 0.9005 |
| 0.2241 | 3.0 | 9162 | 0.3504 | 0.9231 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 269450281d94173b1687fc475fa2521a |
MarioPenguin/bert-model-english | MarioPenguin | bert | 4 | 5 | transformers | 0 | text-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,919 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-model-english
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1408
- Train Sparse Categorical Accuracy: 0.9512
- Validation Loss: nan
- Validation Sparse Categorical Accuracy: 0.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.2775 | 0.8887 | nan | 0.0 | 0 |
| 0.1702 | 0.9390 | nan | 0.0 | 1 |
| 0.1300 | 0.9555 | nan | 0.0 | 2 |
| 0.1346 | 0.9544 | nan | 0.0 | 3 |
| 0.1408 | 0.9512 | nan | 0.0 | 4 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.3
- Tokenizers 0.11.0
| 0ecd7dd1773cae06792c65cebde82000 |
gciaffoni/wav2vec2-large-xls-r-300m-it-colab | gciaffoni | wav2vec2 | 13 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,424 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-it-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1660
- Wer: 0.1648
## Model description
More information needed
## Intended uses & limitations
More information needed
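A minimal transcription sketch with the standard CTC classes (the audio file name is a placeholder for any Italian speech clip; the model expects 16 kHz mono input):

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "gciaffoni/wav2vec2-large-xls-r-300m-it-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample_it.wav" is a placeholder; resample to 16 kHz and downmix to mono if needed.
speech, sr = torchaudio.load("sample_it.wav")
if sr != 16000:
    speech = torchaudio.functional.resample(speech, sr, 16000)
speech = speech.mean(dim=0)  # downmix multi-channel audio to mono

inputs = processor(speech.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```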
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.5632 | 3.19 | 1000 | 0.2289 | 0.2470 |
| 0.1489 | 6.39 | 2000 | 0.1799 | 0.1877 |
| 0.0803 | 9.58 | 3000 | 0.1660 | 0.1648 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| e6a39d7013f3c709ad579ed5f7e30548 |
antonio-artur/distilbert-base-uncased-finetuned-emotion | antonio-artur | distilbert | 12 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,343 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2280
- Accuracy: 0.926
- F1: 0.9260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8646 | 1.0 | 250 | 0.3326 | 0.9045 | 0.9009 |
| 0.2663 | 2.0 | 500 | 0.2280 | 0.926 | 0.9260 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
| ecda3e1758987eb00b926b9c169b70cb |
arputtick/GPT_Neo_muslim_travel | arputtick | gpt_neo | 10 | 14 | transformers | 0 | text-generation | true | false | false | openrail | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,343 | false | # GPT-Neo 1.3B - Muslim Traveler
## Model Description
GPT-Neo 1.3B-Muslim Traveler is a fine-tuned version of EleutherAI's GPT-Neo 1.3B model.
## Training data
The training data consists of travel texts written by ancient Muslim travelers. See the 'combined.txt' file in the model repository.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='arputtick/GPT_Neo_muslim_travel')
>>> generator("> You wake up.", do_sample=True, min_length=50)
[{'generated_text': '> You wake up"\nYou get out of bed, don your armor and get out of the door in search for new adventures.'}]
```
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### BibTeX entry and citation info
The model is made using the following software:
```bibtex
@software{gpt-neo,
author = {Black, Sid and
Leo, Gao and
Wang, Phil and
Leahy, Connor and
Biderman, Stella},
title = {{GPT-Neo: Large Scale Autoregressive Language
Modeling with Mesh-Tensorflow}},
month = mar,
year = 2021,
note = {{If you use this software, please cite it using
these metadata.}},
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.5297715},
url = {https://doi.org/10.5281/zenodo.5297715}
}
``` | 163ab5070ce1493b19ca77cb205913eb |
shibing624/code-autocomplete-distilgpt2-python | shibing624 | gpt2 | 9 | 34 | transformers | 8 | text-generation | true | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['code', 'autocomplete', 'pytorch', 'en'] | false | true | true | 4,103 | false |
# GPT2 for Code AutoComplete Model
code-autocomplete, a code completion plugin for Python.
**code-autocomplete** can automatically complete the code of lines and blocks with GPT2.
## Usage
Open source repo: [code-autocomplete](https://github.com/shibing624/code-autocomplete), which supports GPT2 models. Usage:
```python
from autocomplete.gpt2_coder import GPT2Coder
m = GPT2Coder("shibing624/code-autocomplete-distilgpt2-python")
print(m.generate('import torch.nn as')[0])
```
Also, use huggingface/transformers:
*Please use 'GPT2' related functions to load this model!*
```python
import os
from transformers import GPT2Tokenizer, GPT2LMHeadModel
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
tokenizer = GPT2Tokenizer.from_pretrained("shibing624/code-autocomplete-distilgpt2-python")
model = GPT2LMHeadModel.from_pretrained("shibing624/code-autocomplete-distilgpt2-python")
prompts = [
"""from torch import nn
class LSTM(Module):
def __init__(self, *,
n_tokens: int,
embedding_size: int,
hidden_size: int,
n_layers: int):""",
"""import numpy as np
import torch
import torch.nn as""",
"import java.util.ArrayList",
"def factorial(n):",
]
for prompt in prompts:
input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
max_length=64 + len(prompt),
temperature=1.0,
top_k=50,
top_p=0.95,
repetition_penalty=1.0,
do_sample=True,
num_return_sequences=1,
length_penalty=2.0,
early_stopping=True)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded)
print("=" * 20)
```
output:
```shell
from torch import nn
class LSTM(Module):
def __init__(self, *,
n_tokens: int,
embedding_size: int,
hidden_size: int,
n_layers: int):
self.embedding_size = embedding_size
====================
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
```
Model files:
```
code-autocomplete-distilgpt2-python
├── config.json
├── merges.txt
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
└── vocab.json
```
### Train data
#### pytorch_awesome projects source code
Download [code-autocomplete](https://github.com/shibing624/code-autocomplete), then run:
```shell
cd autocomplete
python create_dataset.py
```
If you want to train the code-autocomplete GPT2 model, refer to [https://github.com/shibing624/code-autocomplete/blob/main/autocomplete/gpt2_coder.py](https://github.com/shibing624/code-autocomplete/blob/main/autocomplete/gpt2_coder.py)
### About GPT2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Citation
```latex
@misc{code-autocomplete,
author = {Xu Ming},
title = {code-autocomplete: Code AutoComplete with GPT model},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
url = {https://github.com/shibing624/code-autocomplete},
}
```
| 58079585a95965c04d432378c66edeae |
FredZhang7/danbooru-tag-generator | FredZhang7 | gpt2 | 5 | 0 | transformers | 0 | text-generation | true | false | false | apache-2.0 | ['en'] | ['FredZhang7/anime-prompts-180K'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'anime', 'art'] | false | true | true | 595 | false | ## Disclaimer
Danbooru stores millions of tagged anime images, but it doesn't have a way to filter out NSFW content. This model was trained on 100,000 of these tags with up_score ≥ 3 for 3 epochs, so it's possible that some tags might contain NSFW descriptions.
So, just be mindful of that. Thank you for your cooperation.
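## Example usage

A minimal generation sketch (the seed tags are illustrative; the stock `gpt2` tokenizer is an assumption used as a fallback in case the repo does not bundle its own):

```python
from transformers import GPT2Tokenizer, pipeline

model_id = "FredZhang7/danbooru-tag-generator"
# Assumption: a GPT-2 compatible tokenizer; the stock "gpt2" one is used as a fallback.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
generator = pipeline("text-generation", model=model_id, tokenizer=tokenizer)

# Seed with a few comma-separated tags and let the model continue the tag list.
results = generator(
    "1girl, blue_hair",
    max_length=64,
    do_sample=True,
    top_k=50,
    temperature=0.9,
    num_return_sequences=3,
)
for result in results:
    print(result["generated_text"])
```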
## The Safe Version
For details on data preprocessing, prompt engineering, and more, please see [Fast Anime PromptGen](https://huggingface.co/FredZhang7/anime-anything-promptgen-v2).
I used a very similar approach to train the Danbooru version. | c9e2464ea494ea29b2f106d6038aba33 |
adamlin/ak-pretrain-cls-model | adamlin | bert | 20 | 4 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,541 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ak-pretrain-cls-model
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0849
- Accuracy: 0.7876
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:------:|:--------:|:---------------:|
| 1.3351 | 1.0 | 30469 | 0.7496 | 1.2050 |
| 1.1401 | 2.0 | 60938 | 0.7554 | 1.1823 |
| 1.0411 | 3.0 | 91407 | 0.7609 | 1.1614 |
| 0.9643 | 4.0 | 121876 | 0.7651 | 1.1544 |
| 0.8659 | 5.0 | 152345 | 0.7704 | 1.1291 |
| 0.8099 | 6.0 | 182814 | 0.7746 | 1.1237 |
| 0.7301 | 7.0 | 213283 | 0.7777 | 1.1136 |
| 0.6964 | 8.0 | 243752 | 0.7826 | 1.1106 |
| 0.6616 | 9.0 | 274221 | 0.7853 | 1.0918 |
| 0.6349 | 10.0 | 304690 | 0.7872 | 1.0876 |
| 0.6349        | 20.0  | 304700 | 0.7874   | 1.0813          |
| 0.6427        | 21.0  | 319935 | 0.7841   | 1.1011          |
| 0.6096        | 22.0  | 335170 | 0.7848   | 1.1013          |
| 0.6029        | 23.0  | 350405 | 0.7859   | 1.1027          |
| 0.5762        | 24.0  | 365640 | 0.7872   | 1.0980          |
| 0.5684        | 25.0  | 380875 | 0.7873   | 1.1043          |
| 0.5385        | 26.0  | 396110 | 0.7884   | 1.0954          |
| 0.5114        | 27.0  | 411345 | 0.7897   | 1.0975          |
| 0.499         | 28.0  | 426580 | 0.7897   | 1.1016          |
| 0.526         | 29.0  | 441815 | 0.7909   | 1.0954          |
| 0.5002        | 30.0  | 457050 | 0.7913   | 1.0963          |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0
- Datasets 2.3.1
- Tokenizers 0.11.6
| 76ae422d11ff6eff7fc111801781e4e8 |
mrbalazs5/bert-finetuned-squad | mrbalazs5 | bert | 10 | 3 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,292 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mrbalazs5/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7151
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
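The checkpoint is stored as TensorFlow/Keras weights, so a minimal extractive-QA sketch with the TF classes looks like this (the question/context pair is illustrative):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

model_id = "mrbalazs5/bert-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Where is the Eiffel Tower located?"
context = "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France."

inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(inputs)
# Pick the most likely start/end positions and decode the answer span.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
answer_ids = inputs["input_ids"][0, start : end + 1]
print(tokenizer.decode(answer_ids))
```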
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 66546, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2226 | 0 |
| 0.7151 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.6
| d59434646a1674dd59379c5cc52cb286 |
nanami/roberta-base-reclorft-oripaper | nanami | roberta | 13 | 0 | transformers | 0 | multiple-choice | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,791 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-reclorft-oripaper
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9383
- Accuracy: 0.5300
## Model description
More information needed
## Intended uses & limitations
More information needed
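ReClor is a multiple-choice reading-comprehension benchmark, so a minimal inference sketch uses the multiple-choice head; how the context, question, and options were concatenated during fine-tuning is not documented here, so the pairing below is an assumption and the example is a toy one:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "nanami/roberta-base-reclorft-oripaper"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "Every bird in the park is a sparrow, and all sparrows can fly."
question = "Which one of the following must be true?"
options = [
    "Every bird in the park can fly.",
    "Some birds in the park cannot fly.",
    "There are no sparrows outside the park.",
    "All flying animals in the park are birds.",
]

# One (context + question, option) pair per choice; the model scores each pair.
premises = [f"{context} {question}"] * len(options)
encoded = tokenizer(premises, options, return_tensors="pt", padding=True, truncation=True)
batch = {k: v.unsqueeze(0) for k, v in encoded.items()}  # shape: (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**batch).logits
print("Predicted option:", options[logits.argmax(dim=-1).item()])
```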
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 194 | 1.3854 | 0.3160 |
| No log | 2.0 | 388 | 1.2143 | 0.4460 |
| 1.3074 | 3.0 | 582 | 1.1651 | 0.4860 |
| 1.3074 | 4.0 | 776 | 1.1864 | 0.5240 |
| 1.3074 | 5.0 | 970 | 1.3784 | 0.5120 |
| 0.8079 | 6.0 | 1164 | 1.5516 | 0.5160 |
| 0.8079 | 7.0 | 1358 | 1.7630 | 0.5360 |
| 0.3332 | 8.0 | 1552 | 1.8812 | 0.5300 |
| 0.3332 | 9.0 | 1746 | 1.8399 | 0.5300 |
| 0.3332 | 10.0 | 1940 | 1.9383 | 0.5300 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| abcf82aef002a2db73523f82990c1650 |
muhtasham/small-mlm-glue-qqp-target-glue-rte | muhtasham | bert | 10 | 7 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,562 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-qqp-target-glue-rte
This model is a fine-tuned version of [muhtasham/small-mlm-glue-qqp](https://huggingface.co/muhtasham/small-mlm-glue-qqp) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5654
- Accuracy: 0.5884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4049 | 6.41 | 500 | 1.4190 | 0.6137 |
| 0.059 | 12.82 | 1000 | 2.5418 | 0.5776 |
| 0.0311 | 19.23 | 1500 | 2.6870 | 0.6318 |
| 0.0192 | 25.64 | 2000 | 3.0283 | 0.6318 |
| 0.0166 | 32.05 | 2500 | 3.5273 | 0.5921 |
| 0.0145 | 38.46 | 3000 | 3.5654 | 0.5884 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| 5e9a0963cd6802ca2d628c0b1a57dc97 |
marccgrau/whisper-small-allSNR-v6 | marccgrau | whisper | 13 | 1 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['de'] | ['marccgrau/sbbdata_allSNR'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sbb-asr', 'generated_from_trainer'] | true | true | true | 1,478 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small German SBB all SNR - v6
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the SBB Dataset 05.01.2023 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0426
- Wer: 0.0266
## Model description
More information needed
## Intended uses & limitations
More information needed
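A minimal transcription sketch with the ASR pipeline (the file name is a placeholder; the fine-tune targets German, and the generation settings saved with the checkpoint are assumed to handle language and task):

```python
from transformers import pipeline

# Long audio is handled by chunking; decoded files are resampled to 16 kHz automatically.
asr = pipeline(
    "automatic-speech-recognition",
    model="marccgrau/whisper-small-allSNR-v6",
    chunk_length_s=30,
)
print(asr("announcement_de.wav")["text"])
```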
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.7233 | 0.04 | 100 | 0.4161 | 0.2232 |
| 0.1932 | 0.09 | 200 | 0.0665 | 0.0361 |
| 0.0615 | 0.13 | 300 | 0.0666 | 0.0361 |
| 0.0677 | 0.18 | 400 | 0.0426 | 0.0266 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.12.1
| 6c931f968a65e149c8508523f8184652 |
cogitur/xlm-roberta-base-finetuned-panx-fr | cogitur | xlm-roberta | 10 | 3 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2748
- F1: 0.8406
## Model description
More information needed
## Intended uses & limitations
More information needed
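A minimal NER sketch with the token-classification pipeline (the French sentence is illustrative; PAN-X uses PER/ORG/LOC entity types):

```python
from transformers import pipeline

model_id = "cogitur/xlm-roberta-base-finetuned-panx-fr"
# aggregation_strategy groups word-piece predictions into whole entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")
for entity in ner("Emmanuel Macron a visité le siège de Renault à Boulogne-Billancourt."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```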
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5754 | 1.0 | 191 | 0.3221 | 0.7950 |
| 0.2607 | 2.0 | 382 | 0.2888 | 0.8225 |
| 0.1751 | 3.0 | 573 | 0.2748 | 0.8406 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| fa6eefc2e148f30aa02a7c64cdb0dd20 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-6 | SetFit | distilbert | 10 | 8 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,339 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0523
- Accuracy: 0.663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0957 | 1.0 | 19 | 1.0696 | 0.6 |
| 1.0107 | 2.0 | 38 | 1.0047 | 0.55 |
| 0.8257 | 3.0 | 57 | 0.8358 | 0.8 |
| 0.6006 | 4.0 | 76 | 0.7641 | 0.6 |
| 0.4172 | 5.0 | 95 | 0.5931 | 0.8 |
| 0.2639 | 6.0 | 114 | 0.5570 | 0.7 |
| 0.1314 | 7.0 | 133 | 0.5017 | 0.65 |
| 0.0503 | 8.0 | 152 | 0.3115 | 0.75 |
| 0.023 | 9.0 | 171 | 0.4353 | 0.85 |
| 0.0128 | 10.0 | 190 | 0.5461 | 0.75 |
| 0.0092 | 11.0 | 209 | 0.5045 | 0.8 |
| 0.007 | 12.0 | 228 | 0.5014 | 0.8 |
| 0.0064 | 13.0 | 247 | 0.5070 | 0.8 |
| 0.0049 | 14.0 | 266 | 0.4681 | 0.8 |
| 0.0044 | 15.0 | 285 | 0.4701 | 0.8 |
| 0.0039 | 16.0 | 304 | 0.4862 | 0.8 |
| 0.0036 | 17.0 | 323 | 0.4742 | 0.8 |
| 0.0035 | 18.0 | 342 | 0.4652 | 0.8 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| cf7c6a9a323bdc2f4fd5f8af62d78373 |
sd-concepts-library/s1m-naoto-ohshima | sd-concepts-library | null | 32 | 0 | null | 2 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,938 | false | ### s1m-naoto-ohshima on Stable Diffusion
This is the `<s1m-naoto-ohshima>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
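A minimal sketch of loading the concept with diffusers (this assumes a recent diffusers release with `load_textual_inversion` and an SD 1.x base checkpoint such as `runwayml/stable-diffusion-v1-5`; the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load an SD 1.x base model and attach the learned concept embedding.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/s1m-naoto-ohshima")

image = pipe("a city street in the style of <s1m-naoto-ohshima>").images[0]
image.save("s1m_street.png")
```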
Here is the new concept you will be able to use as a `style`:



























| 3cc843b21d8370b4205c2bc6d67f11c6 |
superb/wav2vec2-large-superb-sid | superb | wav2vec2 | 5 | 9 | transformers | 0 | audio-classification | true | false | false | apache-2.0 | ['en'] | ['superb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['speech', 'audio', 'wav2vec2', 'audio-classification'] | false | true | true | 3,060 | false |
# Wav2Vec2-Large for Speaker Identification
## Model description
This is a ported version of
[S3PRL's Wav2Vec2 for the SUPERB Speaker Identification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/voxceleb1).
The base model is [wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60), which is pretrained on 16kHz
sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class
classification, where speakers are in the same predefined set for both training and testing. The widely
used [VoxCeleb1](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) dataset is adopted.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification).
## Usage examples
You can use the model via the Audio Classification pipeline:
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
classifier = pipeline("audio-classification", model="superb/wav2vec2-large-superb-sid")
labels = classifier(dataset[0]["file"], top_k=5)
```
Or use the model directly:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2FeatureExtractor
def map_to_array(example):
speech, _ = librosa.load(example["file"], sr=16000, mono=True)
example["speech"] = speech
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
dataset = dataset.map(map_to_array)
model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-large-superb-sid")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/wav2vec2-large-superb-sid")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:2]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.8614` | `0.8613` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` | df28e17829df05ce6f2f2dd89b43857a |
itsGanni/IPod-clustered | itsGanni | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,846 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# itsGanni/IPod-clustered
This model is a fine-tuned version of [nandysoham/15-clustered](https://huggingface.co/nandysoham/15-clustered) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6524
- Train End Logits Accuracy: 0.7708
- Train Start Logits Accuracy: 0.8125
- Validation Loss: 0.2740
- Validation End Logits Accuracy: 0.8636
- Validation Start Logits Accuracy: 0.8636
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.6524 | 0.7708 | 0.8125 | 0.2740 | 0.8636 | 0.8636 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| ca671722ccfa8b377c454698c04fa2fa |
kbamponsem/distilbert-base-uncased-finetuned-cola | kbamponsem | distilbert | 13 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,572 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7110
- Matthews Correlation: -0.0126
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| -0.0155 | 1.0 | 535 | 0.7110 | -0.0126 |
| 0.0431 | 2.0 | 1070 | 0.7110 | -0.0126 |
| -0.0076 | 3.0 | 1605 | 0.7110 | -0.0126 |
| 0.0227 | 4.0 | 2140 | 0.7110 | -0.0126 |
| -0.0648 | 5.0 | 2675 | 0.7110 | -0.0126 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| 76bccb2ac7e87d175a159043eed163d1 |
Helsinki-NLP/opus-mt-ja-bg | Helsinki-NLP | marian | 11 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | ['ja', 'bg'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,141 | false |
### jpn-bul
* source group: Japanese
* target group: Bulgarian
* OPUS readme: [jpn-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-bul/README.md)
* model: transformer-align
* source language(s): jpn jpn_Hani jpn_Hira jpn_Kana
* target language(s): bul
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.eval.txt)
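A minimal translation sketch with the transformers Marian classes (the Japanese example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-ja-bg"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# "The cat is sleeping on the table."
batch = tokenizer(["猫はテーブルの上で寝ています。"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```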
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.bul | 20.2 | 0.422 |
### System Info:
- hf_name: jpn-bul
- source_languages: jpn
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'bg']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: bul
- short_pair: ja-bg
- chrF2_score: 0.42200000000000004
- bleu: 20.2
- brevity_penalty: 0.9570000000000001
- ref_len: 2346.0
- src_name: Japanese
- tgt_name: Bulgarian
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: bg
- prefer_old: False
- long_pair: jpn-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 5728227bf05d1f9f542f3179d5357f87 |
Helsinki-NLP/opus-mt-no-de | Helsinki-NLP | marian | 11 | 3,374 | transformers | 0 | translation | true | true | false | apache-2.0 | [False, 'de'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,994 | false |
### nor-deu
* source group: Norwegian
* target group: German
* OPUS readme: [nor-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-deu/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.deu | 29.6 | 0.541 |
### System Info:
- hf_name: nor-deu
- source_languages: nor
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'de']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: deu
- short_pair: no-de
- chrF2_score: 0.541
- bleu: 29.6
- brevity_penalty: 0.96
- ref_len: 34575.0
- src_name: Norwegian
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: de
- prefer_old: False
- long_pair: nor-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 1c2f3197a46217e65a46930d44e00237 |
sd-concepts-library/cow-uwu | sd-concepts-library | null | 8 | 0 | null | 1 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 896 | false | ### cow uwu on Stable Diffusion
This is the `<cow-uwu>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:



| 9039ecfce5c76354e7f36d9db017bfc3 |
Kuray107/RATS_clean | Kuray107 | wav2vec2 | 27 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,095 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RATS_clean
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2556
- Wer: 0.1206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0722 | 0.42 | 1000 | 0.4266 | 0.1723 |
| 0.7509 | 0.85 | 2000 | 0.3578 | 0.1629 |
| 0.6628 | 1.27 | 3000 | 0.3660 | 0.1594 |
| 0.6246 | 1.7 | 4000 | 0.3533 | 0.1537 |
| 0.5796 | 2.12 | 5000 | 0.2982 | 0.1450 |
| 0.5559 | 2.55 | 6000 | 0.3127 | 0.1473 |
| 0.5473 | 2.97 | 7000 | 0.2835 | 0.1453 |
| 0.5142 | 3.4 | 8000 | 0.2749 | 0.1419 |
| 0.4987 | 3.82 | 9000 | 0.2575 | 0.1378 |
| 0.4795 | 4.25 | 10000 | 0.2669 | 0.1367 |
| 0.475 | 4.67 | 11000 | 0.2594 | 0.1367 |
| 0.4597 | 5.1 | 12000 | 0.2773 | 0.1360 |
| 0.4393 | 5.52 | 13000 | 0.2618 | 0.1346 |
| 0.4387 | 5.95 | 14000 | 0.2548 | 0.1401 |
| 0.4216 | 6.37 | 15000 | 0.2516 | 0.1341 |
| 0.4271 | 6.8 | 16000 | 0.2530 | 0.1333 |
| 0.4079 | 7.22 | 17000 | 0.2757 | 0.1334 |
| 0.3992 | 7.65 | 18000 | 0.2724 | 0.1300 |
| 0.3947 | 8.07 | 19000 | 0.2675 | 0.1308 |
| 0.3769 | 8.5 | 20000 | 0.2543 | 0.1292 |
| 0.3764 | 8.92 | 21000 | 0.2464 | 0.1274 |
| 0.3708 | 9.35 | 22000 | 0.2616 | 0.1302 |
| 0.3581 | 9.77 | 23000 | 0.2532 | 0.1283 |
| 0.3513 | 10.2 | 24000 | 0.2707 | 0.1245 |
| 0.3443 | 10.62 | 25000 | 0.2594 | 0.1284 |
| 0.3502 | 11.05 | 26000 | 0.2768 | 0.1245 |
| 0.3384 | 11.47 | 27000 | 0.2537 | 0.1288 |
| 0.3291 | 11.89 | 28000 | 0.2582 | 0.1272 |
| 0.3291 | 12.32 | 29000 | 0.2621 | 0.1271 |
| 0.3217 | 12.74 | 30000 | 0.2522 | 0.1297 |
| 0.3151 | 13.17 | 31000 | 0.2544 | 0.1286 |
| 0.3081 | 13.59 | 32000 | 0.2663 | 0.1272 |
| 0.3125 | 14.02 | 33000 | 0.2519 | 0.1275 |
| 0.293 | 14.44 | 34000 | 0.2407 | 0.1279 |
| 0.3032 | 14.87 | 35000 | 0.2515 | 0.1231 |
| 0.296 | 15.29 | 36000 | 0.2597 | 0.1218 |
| 0.2969 | 15.72 | 37000 | 0.2625 | 0.1257 |
| 0.2837 | 16.14 | 38000 | 0.2674 | 0.1272 |
| 0.2902 | 16.57 | 39000 | 0.2619 | 0.1225 |
| 0.2804 | 16.99 | 40000 | 0.2606 | 0.1238 |
| 0.2787 | 17.42 | 41000 | 0.2598 | 0.1229 |
| 0.2811 | 17.84 | 42000 | 0.2569 | 0.1221 |
| 0.2766 | 18.27 | 43000 | 0.2547 | 0.1214 |
| 0.2728 | 18.69 | 44000 | 0.2548 | 0.1213 |
| 0.2759 | 19.12 | 45000 | 0.2572 | 0.1215 |
| 0.268 | 19.54 | 46000 | 0.2559 | 0.1213 |
| 0.2721 | 19.97 | 47000 | 0.2556 | 0.1206 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.13.2
| 714c31662ac52414529eab0d18e1909e |
Intel/bert-mini-sst2-distilled-sparse-90-1X4-block | Intel | bert | 11 | 57 | transformers | 1 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 600 | false |
# Sparse BERT mini model (uncased)
Finetuned model pruned to 1:4 structured sparsity.
The model is a pruned version of the [BERT mini model](https://huggingface.co/prajjwal1/bert-mini).
## Intended Use
The model can be used for inference with sparsity optimization.
Further details on the model and its usage will be available soon.
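A minimal SST-2 inference sketch (the review snippet is illustrative; note that the pruned weights load as dense tensors containing zeros, so actual speedups generally require a sparsity-aware runtime):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Intel/bert-mini-sst2-distilled-sparse-90-1X4-block"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A charming and often affecting journey.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
# SST-2 is binary sentiment; label names come from the saved config.
for idx, prob in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(prob, 3))
```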
## Evaluation Results
We get the following results on the SST-2 task's development set:
| Task | SST-2 (Acc) |
|------|-------------|
| | 87.2 |
This is better than the dense [bert mini](https://huggingface.co/M-FAC/bert-mini-finetuned-sst2), which reaches 84.74%.
| c1efe792105fb7beebf3cca044f48fe3 |
aretw0/t5-small-finetuned-en-to-ro-dataset_20-input_64 | aretw0 | t5 | 12 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['wmt16'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,286 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-dataset_20-input_64
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4335
- Bleu: 8.6652
- Gen Len: 18.2596
## Model description
More information needed
## Intended uses & limitations
More information needed
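A minimal translation sketch; the `translate English to Romanian:` task prefix is an assumption borrowed from the standard T5/WMT16 fine-tuning recipe, and the sentence is illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "aretw0/t5-small-finetuned-en-to-ro-dataset_20-input_64"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed T5 task prefix; the repo name suggests inputs were capped at 64 tokens.
text = "translate English to Romanian: The weather is beautiful today."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=64)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```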
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6351 | 1.0 | 7629 | 1.4335 | 8.6652 | 18.2596 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| c78d284b13edda19d1d24aa704ae74fe |
daspartho/subreddit-predictor | daspartho | distilbert | 9 | 8 | transformers | 1 | text-classification | true | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,139 | false |
An NLP model that predicts subreddit based on the title of a post.
### Training
DistilBERT is fine-tuned on [subreddit-posts](https://huggingface.co/datasets/daspartho/subreddit-posts), a dataset of titles of the top 1000 posts from the top 250 subreddits.
For the steps used to build the model, check out the [model](https://github.com/daspartho/predict-subreddit/blob/main/model.ipynb) notebook in the GitHub repo or open it in [Colab](https://colab.research.google.com/github/daspartho/predict-subreddit/blob/main/model.ipynb).
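### Usage
A minimal inference sketch (the label names follow whatever `id2label` mapping ships with the checkpoint):
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="daspartho/subreddit-predictor")
print(classifier("TIL that honey never spoils"))
```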
### Limitations and bias
- Since the model is trained on the top 250 subreddits ([for reference](http://redditlist.com/)), it can only categorise posts within those subreddits.
- Some subreddits have a specific format for post titles, like [r/todayilearned](https://www.reddit.com/r/todayilearned) where titles start with "TIL", so the model becomes biased towards "TIL" --> r/todayilearned. This could be mitigated by cleaning the dataset of these specific terms.
- In some subreddits, like [r/gifs](https://www.reddit.com/r/gifs/), the title of the post doesn't matter much, so the model performs poorly on them. | c279f7606b556f0cbdc6cd36bb19cd73 |
RichVip/Cute_RichStyle_2 | RichVip | null | 5 | 0 | null | 5 | null | false | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['cartoon', 'CHARACTER', 'BABY', 'BABIES', 'LITTLE', 'SD1.5', 'DIGITAL ART', 'CUTE', 'MIDJOURNEY', 'DOLLS'] | false | true | true | 3,698 | false |
# Cute RichStyle - 768x768
Model trained on SD 2.1 with photos generated with Midjourney, created to generate people, animals/creatures...
You can also make objects, landscapes, etc., but you may need more tries:
- 30 steps - 7 cfg
- euler a, ddim, dpm++ sde...
- you can use different resolutions to generate interesting things
Characters rendered with the model:
.jpg)
.jpg)
**TOKEN**: cbzbb, cbzbb style, cbzbb style of _____ . The token is not required, but it is better to include it; it often works better between parentheses ( ).
Possible positives: cute, little, baby, beautiful, fantasy art, deviantart, trending on artstation, digital art, detailed, realistic, humanoid, character, tiny, film still of "____", cinematic shot, "__" environment, beautiful landscape of _____, cinematic portrait of ______, cute character as a "_"....
If you want to make it less realistic, add the word "character" to the positive prompt.
Most important negatives (not mandatory, but they help a lot): pencil draw, bad photo, bad draw
Other possible negatives: cartoon, woman, man, person, people, character, super hero, iron man, baby, anime...
((When you generate a photo, the model sometimes tries to create a person/character anyway; that is why the character-related negative prompts help.))
- Landscape prompts work better between ( ) or more parentheses, although it is not always necessary
- You can use other styles by removing the "cbzbb" token and adding pencil draw, lego style, watercolor, etc. It won't reproduce the exact photo style I trained it on, but the results look great too!!
- Most of the training photos are daytime; to create night scenes, this has worked:
- positive: (dark), (black sky) (dark sky) etc etc
- negative: (blue day), (day light), (day) (sun) etc etc
- To increase quality: send the photo you like the most to img2img (30 steps) with denoising 0.60-0.80 and generate 4 photos; choose one or repeat (less denoising to stay closer to the original, more to let it change). Send it through img2img again (you can raise the image ratio/aspect a bit), lower the denoising to 0.40-0.50, generate 2-4 images, and pick the one with the most detail. Finally, send it to img2img at a higher scale (same ratio/aspect) with denoising 0.15-0.30 and 50 steps, and generate 1 photo; if you want, keep rescaling it for more detail and resolution.
- Change the person/character in the image: if you like the photo but want to change the character, send the photo to img2img, change the name of the character, person or animal in the prompt, and use denoising between 0.7-1.
**Prompt examples:**
cbzbb style of a pennywise
michael jackson, cbzbb, detailed, fantasy,super cute, trending on artstation
cbzbb style of angry baby groot
cute panda reading a book, cbzbb style
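As a quick reference, here is a minimal `diffusers` sketch with the settings above (30 steps, 7 cfg, 768x768). It assumes the weights can be loaded as a diffusers pipeline from this repo; if only a single checkpoint file is provided, convert it first or load it with `StableDiffusionPipeline.from_single_file`.
```python
import torch
from diffusers import StableDiffusionPipeline
# hypothetical loading path; adjust to however the checkpoint is distributed in this repo
pipe = StableDiffusionPipeline.from_pretrained("RichVip/Cute_RichStyle_2", torch_dtype=torch.float16).to("cuda")
image = pipe(
    "cbzbb style of angry baby groot, cute, detailed, trending on artstation",
    negative_prompt="pencil draw, bad photo, bad draw",
    num_inference_steps=30,
    guidance_scale=7,
    width=768,
    height=768,
).images[0]
image.save("cute_richstyle.png")
```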
## ENJOY !!!!
The images you create are absolutely yours! But if you can share them with me on Twitter, Instagram or Reddit, anywhere, I'd LOVE to SEE what you can do with the model!
- **Twitter:** @RichViip
- **Instagram**: richviip
- **Reddit:** Richviip
Thank you for the support and great help of ALL the people on Patricio's Discord, who were there at every moment of the model's creation, giving their opinions on more than 15 different types of models and making my head hurt less!
Patricio's social media, follow him!!
- **Youtube:** patricio-fernandez
- **Twitter:** patriciofernanf | 9ac6ba7b769f821a53f9b0981ab15141 |
DylanonWic/wav2vec2-large-asr-th | DylanonWic | wav2vec2 | 13 | 17 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,917 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-asr-th
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7110
- Wer: 0.418 ±5%
- Cer: 0.15 ±5%
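A minimal transcription sketch, assuming the repo ships the matching processor/tokenizer files and the audio is 16 kHz mono:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
model_id = "DylanonWic/wav2vec2-large-asr-th"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
speech, sr = torchaudio.load("example_th.wav")  # hypothetical input file
if sr != 16_000:
    speech = torchaudio.transforms.Resample(sr, 16_000)(speech)
inputs = processor(speech.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```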
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 3300
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.5509 | 2.35 | 200 | 3.5608 | 1.0 | 1.0000 |
| 3.507 | 4.71 | 400 | 3.4854 | 1.0 | 1.0000 |
| 3.3614 | 7.06 | 600 | 3.2711 | 1.0 | 1.0000 |
| 1.5151 | 9.41 | 800 | 1.1078 | 0.7485 | 0.8674 |
| 0.9279 | 11.76 | 1000 | 0.7934 | 0.6052 | 0.8534 |
| 0.7193 | 14.12 | 1200 | 0.7220 | 0.5466 | 0.8491 |
| 0.5668 | 16.47 | 1400 | 0.6828 | 0.5127 | 0.8459 |
| 0.4963 | 18.82 | 1600 | 0.6487 | 0.5071 | 0.8451 |
| 0.4301 | 21.18 | 1800 | 0.6668 | 0.4946 | 0.8442 |
| 0.3881 | 23.53 | 2000 | 0.6685 | 0.4806 | 0.8434 |
| 0.3628 | 25.88 | 2200 | 0.6911 | 0.4836 | 0.8433 |
| 0.3711 | 28.24 | 2400 | 0.7008 | 0.4795 | 0.8430 |
| 0.351 | 30.59 | 2600 | 0.6974 | 0.4697 | 0.8424 |
| 0.2799 | 32.94 | 2800 | 0.7090 | 0.4705 | 0.8421 |
| 0.2814 | 35.29 | 3000 | 0.7110 | 0.4690 | 0.8418 |
| 0.2707 | 37.65 | 3200 | 0.7090 | 0.4681 | 0.8418 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2 | 8e41871cd3732d44e7ab5bd0a31848cf |
anas-awadalla/bart-large-few-shot-k-256-finetuned-squad-infilling-seed-4 | anas-awadalla | bart | 18 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 968 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-few-shot-k-256-finetuned-squad-infilling-seed-4
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the squad dataset.
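The exact input format used for the infilling objective is not documented here; purely as an illustration, the sketch below masks the answer span of a SQuAD-style passage and lets BART regenerate it (the real fine-tuning format may differ).
```python
from transformers import BartForConditionalGeneration, BartTokenizer
model_id = "anas-awadalla/bart-large-few-shot-k-256-finetuned-squad-infilling-seed-4"
tokenizer = BartTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)
# hypothetical prompt layout: question plus context with the answer replaced by <mask>
text = "Question: Where is the Eiffel Tower located? Context: The Eiffel Tower is located in <mask>."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```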
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
| 385a57195bc90d558e14135f495ce639 |
aisingapore/coherence-momentum | aisingapore | null | 4 | 0 | transformers | 0 | feature-extraction | true | false | false | mit | ['en'] | ['wall-street-journal'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['coherence', 'feature-extraction'] | true | true | true | 5,280 | false |
# Coherence Modelling
You can **test the model** at [coherence modelling](https://huggingface.co/spaces/aisingapore/coherence-modelling).<br />
If you want to find out more information, please contact us at [email protected].
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Model Parameters](#parameters)
- [Other Information](#other-information)
## Model Details
**Model Name:** Coherence-Momentum
- **Description:** This is a neural network model that makes use of a momentum encoder and hard negative mining during training. This model is able to take in a piece of text and output a coherence score. The coherence score is only meant for comparison, i.e. it is only meaningful when used to compare between two texts, and the text with the higher coherence score is deemed to be more coherent by the model.
- **Paper:** Rethinking Self-Supervision Objectives for Generalizable Coherence Modeling. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), May 2022 (pp. 6044-6059).
- **Author(s):** Jwalapuram, P., Joty, S., & Lin, X. (2022).
- **URL:** https://aclanthology.org/2022.acl-long.418/
# How to Get Started With the Model
## Install Python package
SGnlp is an initiative by AI Singapore's NLP Hub. They aim to bridge the gap between research and industry, promote translational research, and encourage adoption of NLP techniques in the industry. <br><br> Various NLP models, other than coherence modelling, are available in the python package. You can try them out at [SGNLP-Demo](https://sgnlp.aisingapore.net/) | [SGNLP-Github](https://github.com/aisingapore/sgnlp).
```bash
pip install sgnlp
```
## Examples
For the full code (such as Coherence-Momentum), please refer to this [GitHub repo](https://github.com/aisingapore/sgnlp). <br> Alternatively, you can also try out the [demo](https://huggingface.co/spaces/aisingapore/coherence-modelling) for Coherence-Momentum.
Example of Coherence Momentum modelling:
```python
from sgnlp.models.coherence_momentum import CoherenceMomentumModel, CoherenceMomentumConfig, \
CoherenceMomentumPreprocessor
# Load Model
config = CoherenceMomentumConfig.from_pretrained(
"https://storage.googleapis.com/sgnlp/models/coherence_momentum/config.json"
)
model = CoherenceMomentumModel.from_pretrained(
"https://storage.googleapis.com/sgnlp/models/coherence_momentum/pytorch_model.bin",
config=config
)
preprocessor = CoherenceMomentumPreprocessor(config.model_size, config.max_len)
# Example text inputs
text1 = "Companies listed below reported quarterly profit substantially different from the average of analysts ' " \
"estimates . The companies are followed by at least three analysts , and had a minimum five-cent change in " \
"actual earnings per share . Estimated and actual results involving losses are omitted . The percent " \
"difference compares actual profit with the 30-day estimate where at least three analysts have issues " \
"forecasts in the past 30 days . Otherwise , actual profit is compared with the 300-day estimate . " \
"Source : Zacks Investment Research"
text2 = "The companies are followed by at least three analysts , and had a minimum five-cent change in actual " \
"earnings per share . The percent difference compares actual profit with the 30-day estimate where at least " \
"three analysts have issues forecasts in the past 30 days . Otherwise , actual profit is compared with the " \
"300-day estimate . Source : Zacks Investment Research. Companies listed below reported quarterly profit " \
"substantially different from the average of analysts ' estimates . Estimated and actual results involving " \
"losses are omitted ."
text1_tensor = preprocessor([text1])
text2_tensor = preprocessor([text2])
text1_score = model.get_main_score(text1_tensor["tokenized_texts"]).item()
text2_score = model.get_main_score(text2_tensor["tokenized_texts"]).item()
print(text1_score, text2_score)
```
# Training
The training data is a permuted dataset derived from the Linguistic Data Consortium's (LDC) Wall Street Journal (WSJ) dataset.
Please contact the authors to get the dataset if you have a valid LDC license.
#### Training Results
- **Training Time:** ~24 hours for ~46000 steps (batch size of 1) on a single A100 GPU
- **Datasets:** Permuted dataset derived from Linguistic Data Consortium's (LDC) Wall Street Journal (WSJ) dataset.
- **Training Config:** [link](https://storage.googleapis.com/sgnlp/models/coherence_momentum/config.json)
# Model Parameters
- **Model Weights:** [link](https://storage.googleapis.com/sgnlp/models/coherence_momentum/pytorch_model.bin)
- **Model Inputs:** A paragraph of text. During training, each positive example can be paired with one or more negative examples.
- **Model Outputs:** Coherence score for the input text.
- **Model Size:** ~930MB
- **Model Inference Info:** Not available.
- **Usage Scenarios:** Essay scoring, summarization, language generation.
# Other Information
- **Original Code:** [link](https://github.com/ntunlp/coherence-paradigm)
| 9e08aa402a9ef623db50993659eb7466 |
m3hrdadfi/albert-fa-base-v2 | m3hrdadfi | albert | 6 | 56 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | ['fa'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['albert-persian', 'persian-lm'] | false | true | true | 7,052 | false |
# ALBERT-Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> میتونی بهش بگی برت_کوچولو
## Introduction
ALBERT-Persian trained on a massive amount of public corpora ([Persian Wikidumps](https://dumps.wikimedia.org/fawiki/), [MirasText](https://github.com/miras-tech/MirasText)) and six other manually crawled text data from a various type of websites ([BigBang Page](https://bigbangpage.com/) `scientific`, [Chetor](https://www.chetor.com/) `lifestyle`, [Eligasht](https://www.eligasht.com/Blog/) `itinerary`, [Digikala](https://www.digikala.com/mag/) `digital magazine`, [Ted Talks](https://www.ted.com/talks) `general conversational`, Books `novels, storybooks, short stories from old to the contemporary era`).
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=albert-fa) to look for
fine-tuned versions on a task that interests you.
### How to use
- To use any type of ALBERT you have to install sentencepiece.
- Run this in your notebook: ``` !pip install -q sentencepiece ```
#### TensorFlow 2.0
```python
from transformers import AutoConfig, AutoTokenizer, TFAutoModel
config = AutoConfig.from_pretrained("m3hrdadfi/albert-fa-base-v2")
tokenizer = AutoTokenizer.from_pretrained("m3hrdadfi/albert-fa-base-v2")
model = TFAutoModel.from_pretrained("m3hrdadfi/albert-fa-base-v2")
text = "ما در هوشواره معتقدیم با انتقال صحیح دانش و آگاهی، همه افراد میتوانند از ابزارهای هوشمند استفاده کنند. شعار ما هوش مصنوعی برای همه است."
tokenizer.tokenize(text)
>>> ['▁ما', '▁در', '▁هوش', 'واره', '▁معتقد', 'یم', '▁با', '▁انتقال', '▁صحیح', '▁دانش', '▁و', '▁اگاه', 'ی', '،', '▁همه', '▁افراد', '▁می', '▁توانند', '▁از', '▁ابزارهای', '▁هوشمند', '▁استفاده', '▁کنند', '.', '▁شعار', '▁ما', '▁هوش', '▁مصنوعی', '▁برای', '▁همه', '▁است', '.']
```
#### Pytorch
```python
from transformers import AutoConfig, AutoTokenizer, AutoModel
config = AutoConfig.from_pretrained("m3hrdadfi/albert-fa-base-v2")
tokenizer = AutoTokenizer.from_pretrained("m3hrdadfi/albert-fa-base-v2")
model = AutoModel.from_pretrained("m3hrdadfi/albert-fa-base-v2")
```
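A quick fill-mask sanity check (a minimal sketch; the mask token is read from the tokenizer itself):
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="m3hrdadfi/albert-fa-base-v2")
text = "ما در هوشواره معتقدیم هوش مصنوعی برای {} است.".format(fill_mask.tokenizer.mask_token)
print(fill_mask(text))
```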
## Training
ALBERT-Persian is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than `3.9M` documents, `73M` sentences, and `1.3B` words, like the way we did for [ParsBERT](https://github.com/hooshvare/parsbert).
## Goals
Objective goals during training are as below (after 140K steps).
``` bash
***** Eval results *****
global_step = 140000
loss = 2.0080082
masked_lm_accuracy = 0.6141017
masked_lm_loss = 1.9963315
sentence_order_accuracy = 0.985
sentence_order_loss = 0.06908702
```
## Derivative models
### Base Config
#### Albert Model
- [m3hrdadfi/albert-fa-base-v2](https://huggingface.co/m3hrdadfi/albert-fa-base-v2)
#### Albert Sentiment Analysis
- [m3hrdadfi/albert-fa-base-v2-sentiment-digikala](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-digikala)
- [m3hrdadfi/albert-fa-base-v2-sentiment-snappfood](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-snappfood)
- [m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-binary](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-binary)
- [m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi)
- [m3hrdadfi/albert-fa-base-v2-sentiment-binary](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-binary)
- [m3hrdadfi/albert-fa-base-v2-sentiment-multi](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-multi)
#### Albert Text Classification
- [m3hrdadfi/albert-fa-base-v2-clf-digimag](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-clf-digimag)
- [m3hrdadfi/albert-fa-base-v2-clf-persiannews](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-clf-persiannews)
#### Albert NER
- [m3hrdadfi/albert-fa-base-v2-ner](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-ner)
- [m3hrdadfi/albert-fa-base-v2-ner-arman](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-ner-arman)
## Eval results
The following tables summarize the F1 scores obtained by ALBERT-Persian as compared to other models and architectures.
### Sentiment Analysis (SA) Task
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:|
| Digikala User Comments | 81.12 | 81.74 | 80.74 | - |
| SnappFood User Comments | 85.79 | 88.12 | 87.87 | - |
| SentiPers (Multi Class) | 66.12 | 71.11 | - | 69.33 |
| SentiPers (Binary Class) | 91.09 | 92.13 | - | 91.98 |
### Text Classification (TC) Task
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT |
|:-----------------:|:-----------------:|:-----------:|:-----:|
| Digikala Magazine | 92.33 | 93.59 | 90.72 |
| Persian News | 97.01 | 97.19 | 95.79 |
### Named Entity Recognition (NER) Task
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|:-------:|:-----------------:|:-----------:|:-----:|:----------:|:------------:|:--------:|:--------------:|:----------:|
| PEYMA | 88.99 | 93.10 | 86.64 | - | 90.59 | - | 84.00 | - |
| ARMAN | 97.43 | 98.79 | 95.89 | 89.9 | 84.03 | 86.55 | - | 77.45 |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERT-Persian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
| 11788dab97f72d3545a29f3b0219933b |
yip-i/wav2vec2-base-pre-finetune | yip-i | wav2vec2 | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 986 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-pre-finetune
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
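A minimal inference sketch, assuming the repo ships a matching processor/tokenizer alongside the fine-tuned weights:
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="yip-i/wav2vec2-base-pre-finetune")
print(asr("example.wav"))  # hypothetical 16 kHz audio file
```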
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.1
| 73cbd8956411799f0f9f1a5d9fa96b88 |
csmartins8/xlm-roberta-base-finetuned-panx-de | csmartins8 | xlm-roberta | 15 | 4 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1374
- F1: 0.8632
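For quick inference, a minimal sketch (entity label names follow the `id2label` mapping stored in the checkpoint):
```python
from transformers import pipeline
ner = pipeline("token-classification", model="csmartins8/xlm-roberta-base-finetuned-panx-de", aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```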
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2583 | 1.0 | 525 | 0.1594 | 0.8198 |
| 0.125 | 2.0 | 1050 | 0.1390 | 0.8483 |
| 0.08 | 3.0 | 1575 | 0.1374 | 0.8632 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| a6187a462c10d88400e2f052f6dd85af |
IDEA-CCNL/Erlangshen-MegatronBert-1.3B-NLI | IDEA-CCNL | megatron-bert | 5 | 78 | transformers | 2 | text-classification | true | false | false | apache-2.0 | ['zh'] | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | ['bert', 'NLU', 'NLI'] | false | true | true | 2,977 | false | # Erlangshen-MegatronBert-1.3B-NLI
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
2021年登顶FewCLUE和ZeroCLUE的中文BERT,在数个推理任务微调后的版本
This is a version of the Chinese BERT model that topped the FewCLUE and ZeroCLUE benchmarks in 2021, fine-tuned on several NLI datasets.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | MegatronBert | 1.3B | 自然语言推断 NLI |
## 模型信息 Model Information
基于[Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B),我们在收集的4个中文领域的NLI(自然语言推理)数据集,总计1014787个样本上微调了一个NLI版本。
Based on [Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B), we fine-tuned an NLI version on 4 Chinese Natural Language Inference (NLI) datasets, totaling 1,014,787 samples.
### 下游效果 Performance
| 模型 Model | cmnli | ocnli | snli |
| :--------: | :-----: | :----: | :-----: |
| Erlangshen-Roberta-110M-NLI | 80.83 | 78.56 | 88.01 |
| Erlangshen-Roberta-330M-NLI | 82.25 | 79.82 | 88.00 |
| Erlangshen-MegatronBert-1.3B-NLI | 84.52 | 84.17 | 88.67 |
## 使用 Usage
``` python
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizer
import torch
tokenizer=BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-NLI')
model=AutoModelForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-NLI')
texta='今天的饭不好吃'
textb='今天心情不好'
output=model(torch.tensor([tokenizer.encode(texta,textb)]))
print(torch.nn.functional.softmax(output.logits,dim=-1))
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | b5db2f97840e275127f8f1bb252bfb22 |
Helsinki-NLP/opus-mt-eo-ro | Helsinki-NLP | marian | 11 | 15 | transformers | 0 | translation | true | true | false | apache-2.0 | ['eo', 'ro'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,000 | false |
### epo-ron
* source group: Esperanto
* target group: Romanian
* OPUS readme: [epo-ron](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ron/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): ron
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.eval.txt)
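A minimal usage sketch with the standard Marian API:
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-eo-ro"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
batch = tokenizer(["Saluton, kiel vi fartas?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```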
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.ron | 19.4 | 0.420 |
### System Info:
- hf_name: epo-ron
- source_languages: epo
- target_languages: ron
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ron/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'ro']
- src_constituents: {'epo'}
- tgt_constituents: {'ron'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: ron
- short_pair: eo-ro
- chrF2_score: 0.42
- bleu: 19.4
- brevity_penalty: 0.9179999999999999
- ref_len: 25619.0
- src_name: Esperanto
- tgt_name: Romanian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: ro
- prefer_old: False
- long_pair: epo-ron
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | aac776bcab77528e26e9f96ec336ee22 |
aychang/roberta-base-imdb | aychang | roberta | 11 | 580 | transformers | 2 | text-classification | true | false | true | mit | ['en'] | ['imdb'] | null | 0 | 0 | 0 | 0 | 1 | 0 | 1 | ['text-classification'] | false | true | true | 2,044 | false |
# IMDB Sentiment Task: roberta-base
## Model description
A simple base roBERTa model trained on the "imdb" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "aychang/roberta-base-imdb"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Use pipeline
from transformers import pipeline
nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)
results = nlp(["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier
model_name = "aychang/roberta-base-imdb"
texts = ["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."]
classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
IMDB https://huggingface.co/datasets/imdb
## Training procedure
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
overwrite_output_dir=False,
num_train_epochs=2,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
fp16=False,
eval_steps=800,
save_steps=300000
)
```
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.94668,
'eval_f1': array([0.94603457, 0.94731017]),
'eval_loss': 0.2578844428062439,
'eval_precision': array([0.95762642, 0.93624502]),
'eval_recall': array([0.93472, 0.95864]),
'eval_runtime': 244.7522,
'eval_samples_per_second': 102.144}
```
| e7566bd5289a75ab52acae41581afa31 |
facebook/wav2vec2-large-xlsr-53-italian | facebook | wav2vec2 | 9 | 232 | transformers | 2 | automatic-speech-recognition | true | false | true | apache-2.0 | ['it'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['speech', 'audio', 'automatic-speech-recognition'] | false | true | true | 1,730 | false |
## Evaluation on Common Voice IT Test
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "facebook/wav2vec2-large-xlsr-53-italian"
device = "cuda"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "it", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Result**: 22.1 % | eb3497870f7218221102bef13abe4690 |
anas-awadalla/bart-large-few-shot-k-64-finetuned-squad-infilling-seed-0 | anas-awadalla | bart | 16 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 971 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-few-shot-k-64-finetuned-squad-infilling-seed-0
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
| d09370dd0cdcd7b127ccaeb313f5aa04 |