modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
SetFit/deberta-v3-large__sst2__train-8-5 | SetFit | 2022-02-10T09:23:56Z | 4 | 0 | transformers | ["transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-5
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3078
- Accuracy: 0.6930
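This checkpoint is tagged `text-classification` under the `transformers` library, so it can be loaded directly with the `pipeline` API. A minimal inference sketch (the example sentence is illustrative, and the card does not document a label mapping, so expect the generic labels from the model config):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
classifier = pipeline(
    "text-classification",
    model="SetFit/deberta-v3-large__sst2__train-8-5",
)

# Illustrative input; labels are likely the generic LABEL_0 / LABEL_1
# since the card does not document a mapping.
print(classifier("a gripping, beautifully shot film"))
```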
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
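These bullets correspond to standard `transformers` `TrainingArguments` fields used by the `Trainer` that generated this card. A hedged reconstruction of that configuration (the `output_dir` value and anything not listed above are assumptions):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="deberta-v3-large__sst2__train-8-5",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,        # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,     # ... and epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,             # mixed_precision_training: Native AMP
)
```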
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6813 | 1.0 | 3 | 0.7842 | 0.25 |
| 0.6617 | 2.0 | 6 | 0.7968 | 0.25 |
| 0.6945 | 3.0 | 9 | 0.7746 | 0.25 |
| 0.5967 | 4.0 | 12 | 0.7557 | 0.25 |
| 0.4824 | 5.0 | 15 | 0.6920 | 0.25 |
| 0.3037 | 6.0 | 18 | 0.6958 | 0.5 |
| 0.2329 | 7.0 | 21 | 0.6736 | 0.5 |
| 0.1441 | 8.0 | 24 | 0.3749 | 1.0 |
| 0.0875 | 9.0 | 27 | 0.3263 | 0.75 |
| 0.0655 | 10.0 | 30 | 0.3525 | 0.75 |
| 0.0373 | 11.0 | 33 | 0.1993 | 1.0 |
| 0.0173 | 12.0 | 36 | 0.1396 | 1.0 |
| 0.0147 | 13.0 | 39 | 0.0655 | 1.0 |
| 0.0084 | 14.0 | 42 | 0.0343 | 1.0 |
| 0.0049 | 15.0 | 45 | 0.0225 | 1.0 |
| 0.004 | 16.0 | 48 | 0.0167 | 1.0 |
| 0.003 | 17.0 | 51 | 0.0134 | 1.0 |
| 0.0027 | 18.0 | 54 | 0.0114 | 1.0 |
| 0.002 | 19.0 | 57 | 0.0104 | 1.0 |
| 0.0015 | 20.0 | 60 | 0.0099 | 1.0 |
| 0.0014 | 21.0 | 63 | 0.0095 | 1.0 |
| 0.0013 | 22.0 | 66 | 0.0095 | 1.0 |
| 0.0012 | 23.0 | 69 | 0.0091 | 1.0 |
| 0.0011 | 24.0 | 72 | 0.0085 | 1.0 |
| 0.0009 | 25.0 | 75 | 0.0081 | 1.0 |
| 0.001 | 26.0 | 78 | 0.0077 | 1.0 |
| 0.0008 | 27.0 | 81 | 0.0074 | 1.0 |
| 0.0009 | 28.0 | 84 | 0.0071 | 1.0 |
| 0.0007 | 29.0 | 87 | 0.0068 | 1.0 |
| 0.0008 | 30.0 | 90 | 0.0064 | 1.0 |
| 0.0007 | 31.0 | 93 | 0.0062 | 1.0 |
| 0.0007 | 32.0 | 96 | 0.0059 | 1.0 |
| 0.0007 | 33.0 | 99 | 0.0056 | 1.0 |
| 0.0005 | 34.0 | 102 | 0.0054 | 1.0 |
| 0.0006 | 35.0 | 105 | 0.0053 | 1.0 |
| 0.0008 | 36.0 | 108 | 0.0051 | 1.0 |
| 0.0007 | 37.0 | 111 | 0.0050 | 1.0 |
| 0.0007 | 38.0 | 114 | 0.0049 | 1.0 |
| 0.0006 | 39.0 | 117 | 0.0048 | 1.0 |
| 0.0005 | 40.0 | 120 | 0.0048 | 1.0 |
| 0.0005 | 41.0 | 123 | 0.0048 | 1.0 |
| 0.0005 | 42.0 | 126 | 0.0047 | 1.0 |
| 0.0005 | 43.0 | 129 | 0.0047 | 1.0 |
| 0.0005 | 44.0 | 132 | 0.0047 | 1.0 |
| 0.0006 | 45.0 | 135 | 0.0047 | 1.0 |
| 0.0005 | 46.0 | 138 | 0.0047 | 1.0 |
| 0.0005 | 47.0 | 141 | 0.0047 | 1.0 |
| 0.0006 | 48.0 | 144 | 0.0047 | 1.0 |
| 0.0005 | 49.0 | 147 | 0.0047 | 1.0 |
| 0.0005 | 50.0 | 150 | 0.0047 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-2 | SetFit | 2022-02-10T08:35:53Z | 3 | 0 | transformers | ["transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-2
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6794
- Accuracy: 0.6063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6942 | 1.0 | 3 | 0.7940 | 0.25 |
| 0.6068 | 2.0 | 6 | 0.9326 | 0.25 |
| 0.6553 | 3.0 | 9 | 0.7979 | 0.25 |
| 0.475 | 4.0 | 12 | 0.7775 | 0.25 |
| 0.377 | 5.0 | 15 | 0.7477 | 0.25 |
| 0.3176 | 6.0 | 18 | 0.6856 | 0.75 |
| 0.2708 | 7.0 | 21 | 0.6554 | 0.75 |
| 0.2855 | 8.0 | 24 | 0.8129 | 0.5 |
| 0.148 | 9.0 | 27 | 0.7074 | 0.75 |
| 0.0947 | 10.0 | 30 | 0.7090 | 0.75 |
| 0.049 | 11.0 | 33 | 0.7885 | 0.75 |
| 0.0252 | 12.0 | 36 | 0.9203 | 0.75 |
| 0.0165 | 13.0 | 39 | 1.0937 | 0.75 |
| 0.0084 | 14.0 | 42 | 1.2502 | 0.75 |
| 0.0059 | 15.0 | 45 | 1.3726 | 0.75 |
| 0.0037 | 16.0 | 48 | 1.4784 | 0.75 |
| 0.003 | 17.0 | 51 | 1.5615 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-9 | SetFit | 2022-02-10T08:11:34Z | 5 | 1 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7075
- Accuracy: 0.692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1054 | 1.0 | 19 | 1.0938 | 0.35 |
| 1.0338 | 2.0 | 38 | 1.0563 | 0.65 |
| 0.8622 | 3.0 | 57 | 0.9372 | 0.6 |
| 0.5919 | 4.0 | 76 | 0.8461 | 0.6 |
| 0.3357 | 5.0 | 95 | 1.0206 | 0.45 |
| 0.1621 | 6.0 | 114 | 0.9802 | 0.7 |
| 0.0637 | 7.0 | 133 | 1.2434 | 0.65 |
| 0.0261 | 8.0 | 152 | 1.3865 | 0.65 |
| 0.0156 | 9.0 | 171 | 1.4414 | 0.7 |
| 0.01 | 10.0 | 190 | 1.5502 | 0.7 |
| 0.0079 | 11.0 | 209 | 1.6102 | 0.7 |
| 0.0062 | 12.0 | 228 | 1.6525 | 0.7 |
| 0.0058 | 13.0 | 247 | 1.6884 | 0.7 |
| 0.0046 | 14.0 | 266 | 1.7479 | 0.7 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-8 | SetFit | 2022-02-10T08:10:22Z | 4 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9191
- Accuracy: 0.632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1008 | 1.0 | 19 | 1.0877 | 0.4 |
| 1.0354 | 2.0 | 38 | 1.0593 | 0.35 |
| 0.8765 | 3.0 | 57 | 0.9722 | 0.5 |
| 0.6365 | 4.0 | 76 | 0.9271 | 0.55 |
| 0.3944 | 5.0 | 95 | 0.7852 | 0.5 |
| 0.2219 | 6.0 | 114 | 0.9360 | 0.55 |
| 0.126 | 7.0 | 133 | 1.0610 | 0.55 |
| 0.0389 | 8.0 | 152 | 1.0884 | 0.6 |
| 0.0191 | 9.0 | 171 | 1.3483 | 0.55 |
| 0.0108 | 10.0 | 190 | 1.4226 | 0.55 |
| 0.0082 | 11.0 | 209 | 1.4270 | 0.55 |
| 0.0065 | 12.0 | 228 | 1.5074 | 0.55 |
| 0.0059 | 13.0 | 247 | 1.5577 | 0.55 |
| 0.0044 | 14.0 | 266 | 1.5798 | 0.55 |
| 0.0042 | 15.0 | 285 | 1.6196 | 0.55 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-3 | SetFit | 2022-02-10T08:04:08Z | 4 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8286
- Accuracy: 0.661
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1041 | 1.0 | 19 | 1.0658 | 0.5 |
| 1.009 | 2.0 | 38 | 0.9892 | 0.7 |
| 0.7925 | 3.0 | 57 | 0.8516 | 0.7 |
| 0.5279 | 4.0 | 76 | 0.7877 | 0.65 |
| 0.2932 | 5.0 | 95 | 0.7592 | 0.65 |
| 0.1166 | 6.0 | 114 | 0.9437 | 0.65 |
| 0.044 | 7.0 | 133 | 1.0315 | 0.75 |
| 0.0197 | 8.0 | 152 | 1.3513 | 0.55 |
| 0.0126 | 9.0 | 171 | 1.1702 | 0.7 |
| 0.0083 | 10.0 | 190 | 1.2272 | 0.7 |
| 0.0068 | 11.0 | 209 | 1.2889 | 0.7 |
| 0.0059 | 12.0 | 228 | 1.3073 | 0.7 |
| 0.0052 | 13.0 | 247 | 1.3595 | 0.7 |
| 0.0041 | 14.0 | 266 | 1.4443 | 0.7 |
| 0.0038 | 15.0 | 285 | 1.4709 | 0.7 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-0 | SetFit | 2022-02-10T08:00:38Z | 5 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7714
- Accuracy: 0.705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0871 | 1.0 | 19 | 1.0704 | 0.45 |
| 1.0019 | 2.0 | 38 | 1.0167 | 0.55 |
| 0.8412 | 3.0 | 57 | 0.9134 | 0.55 |
| 0.6047 | 4.0 | 76 | 0.8430 | 0.6 |
| 0.3746 | 5.0 | 95 | 0.8315 | 0.6 |
| 0.1885 | 6.0 | 114 | 0.8585 | 0.6 |
| 0.0772 | 7.0 | 133 | 0.9443 | 0.65 |
| 0.0312 | 8.0 | 152 | 1.1019 | 0.65 |
| 0.0161 | 9.0 | 171 | 1.1420 | 0.65 |
| 0.0102 | 10.0 | 190 | 1.2773 | 0.65 |
| 0.0077 | 11.0 | 209 | 1.2454 | 0.65 |
| 0.0064 | 12.0 | 228 | 1.2785 | 0.65 |
| 0.006 | 13.0 | 247 | 1.3834 | 0.65 |
| 0.0045 | 14.0 | 266 | 1.4139 | 0.65 |
| 0.0043 | 15.0 | 285 | 1.4056 | 0.65 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-8 | SetFit | 2022-02-10T07:58:12Z | 3 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0704
- Accuracy: 0.394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1031 | 1.0 | 10 | 1.1286 | 0.1 |
| 1.0648 | 2.0 | 20 | 1.1157 | 0.3 |
| 0.9982 | 3.0 | 30 | 1.1412 | 0.2 |
| 0.9283 | 4.0 | 40 | 1.2053 | 0.2 |
| 0.7958 | 5.0 | 50 | 1.1466 | 0.2 |
| 0.6668 | 6.0 | 60 | 1.1783 | 0.3 |
| 0.5068 | 7.0 | 70 | 1.2992 | 0.3 |
| 0.3741 | 8.0 | 80 | 1.3483 | 0.3 |
| 0.1653 | 9.0 | 90 | 1.4533 | 0.2 |
| 0.0946 | 10.0 | 100 | 1.6292 | 0.2 |
| 0.0569 | 11.0 | 110 | 1.8381 | 0.2 |
| 0.0346 | 12.0 | 120 | 2.0781 | 0.2 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-5 | SetFit | 2022-02-10T07:54:46Z | 5 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9907
- Accuracy: 0.49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0941 | 1.0 | 10 | 1.1287 | 0.2 |
| 1.0481 | 2.0 | 20 | 1.1136 | 0.2 |
| 0.9498 | 3.0 | 30 | 1.1200 | 0.2 |
| 0.8157 | 4.0 | 40 | 1.0771 | 0.2 |
| 0.65 | 5.0 | 50 | 0.9733 | 0.4 |
| 0.5021 | 6.0 | 60 | 1.0626 | 0.4 |
| 0.3358 | 7.0 | 70 | 1.0787 | 0.4 |
| 0.2017 | 8.0 | 80 | 1.3183 | 0.4 |
| 0.088 | 9.0 | 90 | 1.2204 | 0.5 |
| 0.0527 | 10.0 | 100 | 1.6892 | 0.4 |
| 0.0337 | 11.0 | 110 | 1.6967 | 0.5 |
| 0.0238 | 12.0 | 120 | 1.5436 | 0.5 |
| 0.0183 | 13.0 | 130 | 1.7447 | 0.4 |
| 0.0159 | 14.0 | 140 | 1.8999 | 0.4 |
| 0.014 | 15.0 | 150 | 1.9004 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-3 | SetFit | 2022-02-10T07:52:27Z | 4 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0675
- Accuracy: 0.44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0951 | 1.0 | 10 | 1.1346 | 0.1 |
| 1.0424 | 2.0 | 20 | 1.1120 | 0.2 |
| 0.957 | 3.0 | 30 | 1.1002 | 0.3 |
| 0.7889 | 4.0 | 40 | 1.0838 | 0.4 |
| 0.6162 | 5.0 | 50 | 1.0935 | 0.5 |
| 0.4849 | 6.0 | 60 | 1.0867 | 0.5 |
| 0.3089 | 7.0 | 70 | 1.1145 | 0.5 |
| 0.2145 | 8.0 | 80 | 1.1278 | 0.6 |
| 0.0805 | 9.0 | 90 | 1.2801 | 0.6 |
| 0.0497 | 10.0 | 100 | 1.3296 | 0.6 |
| 0.0328 | 11.0 | 110 | 1.2913 | 0.6 |
| 0.0229 | 12.0 | 120 | 1.3692 | 0.6 |
| 0.0186 | 13.0 | 130 | 1.4642 | 0.6 |
| 0.0161 | 14.0 | 140 | 1.5568 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-2 | SetFit | 2022-02-10T07:51:21Z | 4 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9210
- Accuracy: 0.5635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0915 | 1.0 | 10 | 1.1051 | 0.4 |
| 1.0663 | 2.0 | 20 | 1.0794 | 0.3 |
| 1.0307 | 3.0 | 30 | 1.0664 | 0.5 |
| 0.9443 | 4.0 | 40 | 1.0729 | 0.5 |
| 0.8373 | 5.0 | 50 | 1.0175 | 0.4 |
| 0.6892 | 6.0 | 60 | 0.9624 | 0.5 |
| 0.538 | 7.0 | 70 | 0.9924 | 0.5 |
| 0.4173 | 8.0 | 80 | 1.0136 | 0.6 |
| 0.1846 | 9.0 | 90 | 1.0683 | 0.6 |
| 0.1125 | 10.0 | 100 | 1.2376 | 0.6 |
| 0.0754 | 11.0 | 110 | 1.2537 | 0.6 |
| 0.0401 | 12.0 | 120 | 1.4387 | 0.6 |
| 0.0285 | 13.0 | 130 | 1.5702 | 0.6 |
| 0.0241 | 14.0 | 140 | 1.6795 | 0.6 |
| 0.0175 | 15.0 | 150 | 1.7228 | 0.6 |
| 0.0147 | 16.0 | 160 | 1.7892 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
TeamFnord/manga-ocr | TeamFnord | 2022-02-10T07:50:15Z | 20 | 11 | transformers | ["transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "image-to-text", "ja", "dataset:manga109s", "license:apache-2.0", "endpoints_compatible", "region:us"] | image-to-text | 2022-03-25T04:35:09Z |
---
language: ja
tags:
- image-to-text
license: apache-2.0
datasets:
- manga109s
---
# Manga OCR
Optical character recognition for Japanese text, with a primary focus on Japanese manga.
It uses the [Vision Encoder Decoder](https://huggingface.co/docs/transformers/model_doc/visionencoderdecoder) framework.
Manga OCR can be used as a general-purpose printed-Japanese OCR, but its main goal is to provide high-quality
text recognition that is robust against scenarios specific to manga:
- both vertical and horizontal text
- text with furigana
- text overlaid on images
- wide variety of fonts and font styles
- low quality images
Code is available [here](https://github.com/kha-white/manga_ocr).
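The linked repository also publishes a small wrapper package around this checkpoint. A minimal usage sketch (assuming `pip install manga-ocr`; passing this Hub id to the constructor is an assumption, as by default the wrapper loads its own published weights):

```python
from manga_ocr import MangaOcr

# Load the OCR model; the explicit Hub id is an assumption (see note above).
mocr = MangaOcr("TeamFnord/manga-ocr")

# Run recognition on a cropped speech-bubble image (path is illustrative).
text = mocr("example_bubble.png")
print(text)
```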
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-0 | SetFit | 2022-02-10T07:49:02Z | 3 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2707
- Accuracy: 0.517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0943 | 1.0 | 10 | 1.1095 | 0.3 |
| 1.0602 | 2.0 | 20 | 1.1086 | 0.4 |
| 1.0159 | 3.0 | 30 | 1.1165 | 0.4 |
| 0.9027 | 4.0 | 40 | 1.1377 | 0.4 |
| 0.8364 | 5.0 | 50 | 1.0126 | 0.5 |
| 0.6653 | 6.0 | 60 | 0.9298 | 0.5 |
| 0.535 | 7.0 | 70 | 0.9555 | 0.5 |
| 0.3713 | 8.0 | 80 | 0.8543 | 0.4 |
| 0.1633 | 9.0 | 90 | 0.9876 | 0.4 |
| 0.1069 | 10.0 | 100 | 0.8383 | 0.6 |
| 0.0591 | 11.0 | 110 | 0.8056 | 0.6 |
| 0.0344 | 12.0 | 120 | 0.8915 | 0.6 |
| 0.0265 | 13.0 | 130 | 0.8722 | 0.6 |
| 0.0196 | 14.0 | 140 | 1.0064 | 0.6 |
| 0.0158 | 15.0 | 150 | 1.0479 | 0.6 |
| 0.0128 | 16.0 | 160 | 1.0723 | 0.6 |
| 0.0121 | 17.0 | 170 | 1.0758 | 0.6 |
| 0.0093 | 18.0 | 180 | 1.1236 | 0.6 |
| 0.0085 | 19.0 | 190 | 1.1480 | 0.6 |
| 0.0084 | 20.0 | 200 | 1.1651 | 0.6 |
| 0.0077 | 21.0 | 210 | 1.1832 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-9 | SetFit | 2022-02-10T07:47:46Z | 4 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0959
- Accuracy: 0.093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1068 | 1.0 | 5 | 1.1545 | 0.0 |
| 1.0494 | 2.0 | 10 | 1.1971 | 0.0 |
| 1.0612 | 3.0 | 15 | 1.2164 | 0.0 |
| 0.9517 | 4.0 | 20 | 1.2545 | 0.0 |
| 0.8874 | 5.0 | 25 | 1.2699 | 0.0 |
| 0.8598 | 6.0 | 30 | 1.2835 | 0.0 |
| 0.7006 | 7.0 | 35 | 1.3139 | 0.0 |
| 0.5969 | 8.0 | 40 | 1.3116 | 0.2 |
| 0.4769 | 9.0 | 45 | 1.3124 | 0.4 |
| 0.4352 | 10.0 | 50 | 1.3541 | 0.4 |
| 0.3231 | 11.0 | 55 | 1.3919 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-8 | SetFit | 2022-02-10T07:46:54Z | 5 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0005
- Accuracy: 0.518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1029 | 1.0 | 5 | 1.1295 | 0.0 |
| 1.0472 | 2.0 | 10 | 1.1531 | 0.0 |
| 1.054 | 3.0 | 15 | 1.1475 | 0.0 |
| 0.9366 | 4.0 | 20 | 1.1515 | 0.0 |
| 0.8698 | 5.0 | 25 | 1.1236 | 0.4 |
| 0.8148 | 6.0 | 30 | 1.0716 | 0.6 |
| 0.6884 | 7.0 | 35 | 1.0662 | 0.6 |
| 0.5641 | 8.0 | 40 | 1.0671 | 0.6 |
| 0.5 | 9.0 | 45 | 1.0282 | 0.6 |
| 0.3882 | 10.0 | 50 | 1.0500 | 0.6 |
| 0.3522 | 11.0 | 55 | 1.1381 | 0.6 |
| 0.2492 | 12.0 | 60 | 1.1278 | 0.6 |
| 0.2063 | 13.0 | 65 | 1.0731 | 0.6 |
| 0.1608 | 14.0 | 70 | 1.1339 | 0.6 |
| 0.1448 | 15.0 | 75 | 1.1892 | 0.6 |
| 0.0925 | 16.0 | 80 | 1.1840 | 0.6 |
| 0.0768 | 17.0 | 85 | 1.0608 | 0.6 |
| 0.0585 | 18.0 | 90 | 1.1073 | 0.6 |
| 0.0592 | 19.0 | 95 | 1.3134 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-7 | SetFit | 2022-02-10T07:45:58Z | 5 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1206
- Accuracy: 0.0555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1186 | 1.0 | 5 | 1.1631 | 0.0 |
| 1.058 | 2.0 | 10 | 1.1986 | 0.0 |
| 1.081 | 3.0 | 15 | 1.2111 | 0.0 |
| 1.0118 | 4.0 | 20 | 1.2373 | 0.0 |
| 0.9404 | 5.0 | 25 | 1.2645 | 0.0 |
| 0.9146 | 6.0 | 30 | 1.3258 | 0.0 |
| 0.8285 | 7.0 | 35 | 1.3789 | 0.0 |
| 0.6422 | 8.0 | 40 | 1.3783 | 0.0 |
| 0.6156 | 9.0 | 45 | 1.3691 | 0.0 |
| 0.5321 | 10.0 | 50 | 1.3693 | 0.0 |
| 0.4504 | 11.0 | 55 | 1.4000 | 0.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-6 | SetFit | 2022-02-10T07:45:05Z | 4 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1275
- Accuracy: 0.3795
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.11 | 1.0 | 5 | 1.1184 | 0.0 |
| 1.0608 | 2.0 | 10 | 1.1227 | 0.0 |
| 1.0484 | 3.0 | 15 | 1.1009 | 0.2 |
| 0.9614 | 4.0 | 20 | 1.1009 | 0.2 |
| 0.8545 | 5.0 | 25 | 1.0772 | 0.2 |
| 0.8241 | 6.0 | 30 | 1.0457 | 0.2 |
| 0.708 | 7.0 | 35 | 1.0301 | 0.4 |
| 0.5045 | 8.0 | 40 | 1.0325 | 0.4 |
| 0.4175 | 9.0 | 45 | 1.0051 | 0.4 |
| 0.3446 | 10.0 | 50 | 0.9610 | 0.4 |
| 0.2851 | 11.0 | 55 | 0.9954 | 0.4 |
| 0.1808 | 12.0 | 60 | 1.0561 | 0.4 |
| 0.1435 | 13.0 | 65 | 1.0218 | 0.4 |
| 0.1019 | 14.0 | 70 | 1.0254 | 0.4 |
| 0.0908 | 15.0 | 75 | 0.9935 | 0.4 |
| 0.0591 | 16.0 | 80 | 1.0090 | 0.4 |
| 0.0512 | 17.0 | 85 | 1.0884 | 0.4 |
| 0.0397 | 18.0 | 90 | 1.2732 | 0.4 |
| 0.039 | 19.0 | 95 | 1.2979 | 0.6 |
| 0.0325 | 20.0 | 100 | 1.2705 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-4 | SetFit | 2022-02-10T07:42:59Z | 5 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1045
- Accuracy: 0.128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1115 | 1.0 | 5 | 1.1174 | 0.0 |
| 1.0518 | 2.0 | 10 | 1.1379 | 0.0 |
| 1.0445 | 3.0 | 15 | 1.1287 | 0.0 |
| 0.9306 | 4.0 | 20 | 1.1324 | 0.2 |
| 0.8242 | 5.0 | 25 | 1.1219 | 0.2 |
| 0.7986 | 6.0 | 30 | 1.1369 | 0.4 |
| 0.7369 | 7.0 | 35 | 1.1732 | 0.2 |
| 0.534 | 8.0 | 40 | 1.1828 | 0.6 |
| 0.4285 | 9.0 | 45 | 1.1482 | 0.6 |
| 0.3691 | 10.0 | 50 | 1.1401 | 0.6 |
| 0.3215 | 11.0 | 55 | 1.1286 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-3 | SetFit | 2022-02-10T07:42:05Z | 5 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9681
- Accuracy: 0.549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1073 | 1.0 | 5 | 1.1393 | 0.0 |
| 1.0392 | 2.0 | 10 | 1.1729 | 0.0 |
| 1.0302 | 3.0 | 15 | 1.1694 | 0.2 |
| 0.9176 | 4.0 | 20 | 1.1846 | 0.2 |
| 0.8339 | 5.0 | 25 | 1.1663 | 0.2 |
| 0.7533 | 6.0 | 30 | 1.1513 | 0.4 |
| 0.6327 | 7.0 | 35 | 1.1474 | 0.4 |
| 0.4402 | 8.0 | 40 | 1.1385 | 0.4 |
| 0.3752 | 9.0 | 45 | 1.0965 | 0.2 |
| 0.3448 | 10.0 | 50 | 1.0357 | 0.2 |
| 0.2582 | 11.0 | 55 | 1.0438 | 0.2 |
| 0.1903 | 12.0 | 60 | 1.0561 | 0.2 |
| 0.1479 | 13.0 | 65 | 1.0569 | 0.2 |
| 0.1129 | 14.0 | 70 | 1.0455 | 0.2 |
| 0.1071 | 15.0 | 75 | 1.0416 | 0.4 |
| 0.0672 | 16.0 | 80 | 1.1164 | 0.4 |
| 0.0561 | 17.0 | 85 | 1.1846 | 0.6 |
| 0.0463 | 18.0 | 90 | 1.2040 | 0.6 |
| 0.0431 | 19.0 | 95 | 1.2078 | 0.6 |
| 0.0314 | 20.0 | 100 | 1.2368 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-1 | SetFit | 2022-02-10T07:40:19Z | 4 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1013
- Accuracy: 0.0915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0866 | 1.0 | 5 | 1.1363 | 0.0 |
| 1.0439 | 2.0 | 10 | 1.1803 | 0.0 |
| 1.0227 | 3.0 | 15 | 1.2162 | 0.2 |
| 0.9111 | 4.0 | 20 | 1.2619 | 0.0 |
| 0.8243 | 5.0 | 25 | 1.2929 | 0.2 |
| 0.7488 | 6.0 | 30 | 1.3010 | 0.2 |
| 0.62 | 7.0 | 35 | 1.3011 | 0.2 |
| 0.5054 | 8.0 | 40 | 1.2931 | 0.4 |
| 0.4191 | 9.0 | 45 | 1.3274 | 0.4 |
| 0.4107 | 10.0 | 50 | 1.3259 | 0.4 |
| 0.3376 | 11.0 | 55 | 1.2800 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-0 | SetFit | 2022-02-10T07:39:26Z | 3 | 1 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1097
- Accuracy: 0.132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1065 | 1.0 | 5 | 1.1287 | 0.0 |
| 1.0592 | 2.0 | 10 | 1.1729 | 0.0 |
| 1.0059 | 3.0 | 15 | 1.1959 | 0.0 |
| 0.9129 | 4.0 | 20 | 1.2410 | 0.0 |
| 0.8231 | 5.0 | 25 | 1.2820 | 0.0 |
| 0.7192 | 6.0 | 30 | 1.3361 | 0.0 |
| 0.6121 | 7.0 | 35 | 1.4176 | 0.0 |
| 0.5055 | 8.0 | 40 | 1.5111 | 0.0 |
| 0.4002 | 9.0 | 45 | 1.5572 | 0.0 |
| 0.3788 | 10.0 | 50 | 1.6733 | 0.0 |
| 0.2755 | 11.0 | 55 | 1.7381 | 0.2 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-32-9 | SetFit | 2022-02-10T07:36:28Z | 4 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5625
- Accuracy: 0.7353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7057 | 1.0 | 13 | 0.6805 | 0.5385 |
| 0.6642 | 2.0 | 26 | 0.6526 | 0.7692 |
| 0.5869 | 3.0 | 39 | 0.5773 | 0.8462 |
| 0.4085 | 4.0 | 52 | 0.4959 | 0.8462 |
| 0.2181 | 5.0 | 65 | 0.4902 | 0.6923 |
| 0.069 | 6.0 | 78 | 0.5065 | 0.8462 |
| 0.0522 | 7.0 | 91 | 0.6082 | 0.7692 |
| 0.0135 | 8.0 | 104 | 0.6924 | 0.7692 |
| 0.0084 | 9.0 | 117 | 0.5921 | 0.7692 |
| 0.0061 | 10.0 | 130 | 0.6477 | 0.7692 |
| 0.0047 | 11.0 | 143 | 0.6648 | 0.7692 |
| 0.0035 | 12.0 | 156 | 0.6640 | 0.7692 |
| 0.0031 | 13.0 | 169 | 0.6615 | 0.7692 |
| 0.0029 | 14.0 | 182 | 0.6605 | 0.7692 |
| 0.0026 | 15.0 | 195 | 0.6538 | 0.8462 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-32-6 | SetFit | 2022-02-10T07:33:45Z | 4 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5072
- Accuracy: 0.7650
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7057 | 1.0 | 13 | 0.6704 | 0.6923 |
| 0.6489 | 2.0 | 26 | 0.6228 | 0.8462 |
| 0.5475 | 3.0 | 39 | 0.5079 | 0.8462 |
| 0.4014 | 4.0 | 52 | 0.4203 | 0.8462 |
| 0.1923 | 5.0 | 65 | 0.3872 | 0.8462 |
| 0.1014 | 6.0 | 78 | 0.4909 | 0.8462 |
| 0.0349 | 7.0 | 91 | 0.5460 | 0.8462 |
| 0.0173 | 8.0 | 104 | 0.4867 | 0.8462 |
| 0.0098 | 9.0 | 117 | 0.5274 | 0.8462 |
| 0.0075 | 10.0 | 130 | 0.6086 | 0.8462 |
| 0.0057 | 11.0 | 143 | 0.6604 | 0.8462 |
| 0.0041 | 12.0 | 156 | 0.6904 | 0.8462 |
| 0.0037 | 13.0 | 169 | 0.7164 | 0.8462 |
| 0.0034 | 14.0 | 182 | 0.7368 | 0.8462 |
| 0.0031 | 15.0 | 195 | 0.7565 | 0.8462 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-32-2 | SetFit | 2022-02-10T07:30:14Z | 4 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4805
- Accuracy: 0.7699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7124 | 1.0 | 13 | 0.6882 | 0.5385 |
| 0.6502 | 2.0 | 26 | 0.6715 | 0.5385 |
| 0.6001 | 3.0 | 39 | 0.6342 | 0.6154 |
| 0.455 | 4.0 | 52 | 0.5713 | 0.7692 |
| 0.2605 | 5.0 | 65 | 0.5562 | 0.7692 |
| 0.1258 | 6.0 | 78 | 0.6799 | 0.7692 |
| 0.0444 | 7.0 | 91 | 0.8096 | 0.7692 |
| 0.0175 | 8.0 | 104 | 0.9281 | 0.6923 |
| 0.0106 | 9.0 | 117 | 0.9826 | 0.6923 |
| 0.0077 | 10.0 | 130 | 1.0254 | 0.7692 |
| 0.0056 | 11.0 | 143 | 1.0667 | 0.7692 |
| 0.0042 | 12.0 | 156 | 1.1003 | 0.7692 |
| 0.0036 | 13.0 | 169 | 1.1299 | 0.7692 |
| 0.0034 | 14.0 | 182 | 1.1623 | 0.6923 |
| 0.003 | 15.0 | 195 | 1.1938 | 0.6923 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-16-8 | SetFit | 2022-02-10T07:26:26Z | 5 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6895
- Accuracy: 0.5222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6899 | 1.0 | 7 | 0.7055 | 0.2857 |
| 0.6793 | 2.0 | 14 | 0.7205 | 0.2857 |
| 0.6291 | 3.0 | 21 | 0.7460 | 0.2857 |
| 0.5659 | 4.0 | 28 | 0.8041 | 0.2857 |
| 0.5607 | 5.0 | 35 | 0.7785 | 0.4286 |
| 0.3349 | 6.0 | 42 | 0.8163 | 0.4286 |
| 0.2436 | 7.0 | 49 | 0.9101 | 0.2857 |
| 0.1734 | 8.0 | 56 | 0.8632 | 0.5714 |
| 0.1122 | 9.0 | 63 | 0.9851 | 0.5714 |
| 0.0661 | 10.0 | 70 | 1.0835 | 0.5714 |
| 0.0407 | 11.0 | 77 | 1.1656 | 0.5714 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-16-7 | SetFit | 2022-02-10T07:25:33Z | 4 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6952
- Accuracy: 0.5025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6949 | 1.0 | 7 | 0.7252 | 0.2857 |
| 0.6678 | 2.0 | 14 | 0.7550 | 0.2857 |
| 0.6299 | 3.0 | 21 | 0.8004 | 0.2857 |
| 0.5596 | 4.0 | 28 | 0.8508 | 0.2857 |
| 0.5667 | 5.0 | 35 | 0.8464 | 0.2857 |
| 0.367 | 6.0 | 42 | 0.8515 | 0.2857 |
| 0.2706 | 7.0 | 49 | 0.9574 | 0.2857 |
| 0.2163 | 8.0 | 56 | 0.9710 | 0.4286 |
| 0.1024 | 9.0 | 63 | 1.1607 | 0.1429 |
| 0.1046 | 10.0 | 70 | 1.3779 | 0.1429 |
| 0.0483 | 11.0 | 77 | 1.4876 | 0.1429 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-16-6
|
SetFit
| 2022-02-10T07:24:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8356
- Accuracy: 0.6480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6978 | 1.0 | 7 | 0.6807 | 0.4286 |
| 0.6482 | 2.0 | 14 | 0.6775 | 0.4286 |
| 0.6051 | 3.0 | 21 | 0.6623 | 0.5714 |
| 0.486 | 4.0 | 28 | 0.6710 | 0.5714 |
| 0.4612 | 5.0 | 35 | 0.5325 | 0.7143 |
| 0.2233 | 6.0 | 42 | 0.4992 | 0.7143 |
| 0.1328 | 7.0 | 49 | 0.4753 | 0.7143 |
| 0.0905 | 8.0 | 56 | 0.2416 | 1.0 |
| 0.0413 | 9.0 | 63 | 0.2079 | 1.0 |
| 0.0356 | 10.0 | 70 | 0.2234 | 0.8571 |
| 0.0217 | 11.0 | 77 | 0.2639 | 0.8571 |
| 0.0121 | 12.0 | 84 | 0.2977 | 0.8571 |
| 0.0105 | 13.0 | 91 | 0.3468 | 0.8571 |
| 0.0085 | 14.0 | 98 | 0.3912 | 0.8571 |
| 0.0077 | 15.0 | 105 | 0.4000 | 0.8571 |
| 0.0071 | 16.0 | 112 | 0.4015 | 0.8571 |
| 0.0078 | 17.0 | 119 | 0.3865 | 0.8571 |
| 0.0059 | 18.0 | 126 | 0.3603 | 0.8571 |
| 0.0051 | 19.0 | 133 | 0.3231 | 0.8571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-16-5
|
SetFit
| 2022-02-10T07:23:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6537
- Accuracy: 0.6332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6925 | 1.0 | 7 | 0.6966 | 0.2857 |
| 0.6703 | 2.0 | 14 | 0.7045 | 0.2857 |
| 0.6404 | 3.0 | 21 | 0.7205 | 0.2857 |
| 0.555 | 4.0 | 28 | 0.7548 | 0.2857 |
| 0.5179 | 5.0 | 35 | 0.6745 | 0.5714 |
| 0.3038 | 6.0 | 42 | 0.7260 | 0.5714 |
| 0.2089 | 7.0 | 49 | 0.8016 | 0.5714 |
| 0.1303 | 8.0 | 56 | 0.8202 | 0.5714 |
| 0.0899 | 9.0 | 63 | 0.9966 | 0.5714 |
| 0.0552 | 10.0 | 70 | 1.1887 | 0.5714 |
| 0.0333 | 11.0 | 77 | 1.2163 | 0.5714 |
| 0.0169 | 12.0 | 84 | 1.2874 | 0.5714 |
| 0.0136 | 13.0 | 91 | 1.3598 | 0.5714 |
| 0.0103 | 14.0 | 98 | 1.4237 | 0.5714 |
| 0.0089 | 15.0 | 105 | 1.4758 | 0.5714 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-16-4
|
SetFit
| 2022-02-10T07:22:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1501
- Accuracy: 0.6387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7043 | 1.0 | 7 | 0.7139 | 0.2857 |
| 0.68 | 2.0 | 14 | 0.7398 | 0.2857 |
| 0.641 | 3.0 | 21 | 0.7723 | 0.2857 |
| 0.5424 | 4.0 | 28 | 0.8391 | 0.2857 |
| 0.5988 | 5.0 | 35 | 0.7761 | 0.2857 |
| 0.3698 | 6.0 | 42 | 0.7707 | 0.4286 |
| 0.3204 | 7.0 | 49 | 0.8290 | 0.4286 |
| 0.2882 | 8.0 | 56 | 0.6551 | 0.5714 |
| 0.1512 | 9.0 | 63 | 0.5652 | 0.5714 |
| 0.1302 | 10.0 | 70 | 0.5278 | 0.5714 |
| 0.1043 | 11.0 | 77 | 0.4987 | 0.7143 |
| 0.0272 | 12.0 | 84 | 0.5278 | 0.5714 |
| 0.0201 | 13.0 | 91 | 0.5307 | 0.5714 |
| 0.0129 | 14.0 | 98 | 0.5382 | 0.5714 |
| 0.0117 | 15.0 | 105 | 0.5227 | 0.5714 |
| 0.0094 | 16.0 | 112 | 0.5066 | 0.7143 |
| 0.0104 | 17.0 | 119 | 0.4869 | 0.7143 |
| 0.0069 | 18.0 | 126 | 0.4786 | 0.7143 |
| 0.0062 | 19.0 | 133 | 0.4707 | 0.7143 |
| 0.0065 | 20.0 | 140 | 0.4669 | 0.7143 |
| 0.0051 | 21.0 | 147 | 0.4686 | 0.7143 |
| 0.0049 | 22.0 | 154 | 0.4784 | 0.7143 |
| 0.0046 | 23.0 | 161 | 0.4839 | 0.7143 |
| 0.0039 | 24.0 | 168 | 0.4823 | 0.7143 |
| 0.0044 | 25.0 | 175 | 0.4791 | 0.7143 |
| 0.0037 | 26.0 | 182 | 0.4778 | 0.7143 |
| 0.0038 | 27.0 | 189 | 0.4770 | 0.7143 |
| 0.0036 | 28.0 | 196 | 0.4750 | 0.7143 |
| 0.0031 | 29.0 | 203 | 0.4766 | 0.7143 |
| 0.0031 | 30.0 | 210 | 0.4754 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-16-2
|
SetFit
| 2022-02-10T07:20:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6748
- Accuracy: 0.6315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7043 | 1.0 | 7 | 0.7054 | 0.2857 |
| 0.6711 | 2.0 | 14 | 0.7208 | 0.2857 |
| 0.6311 | 3.0 | 21 | 0.7365 | 0.2857 |
| 0.551 | 4.0 | 28 | 0.7657 | 0.5714 |
| 0.5599 | 5.0 | 35 | 0.6915 | 0.5714 |
| 0.3167 | 6.0 | 42 | 0.7134 | 0.5714 |
| 0.2489 | 7.0 | 49 | 0.7892 | 0.5714 |
| 0.1985 | 8.0 | 56 | 0.6756 | 0.7143 |
| 0.0864 | 9.0 | 63 | 0.8059 | 0.5714 |
| 0.0903 | 10.0 | 70 | 0.8165 | 0.7143 |
| 0.0429 | 11.0 | 77 | 0.7947 | 0.7143 |
| 0.0186 | 12.0 | 84 | 0.8570 | 0.7143 |
| 0.0146 | 13.0 | 91 | 0.9346 | 0.7143 |
| 0.011 | 14.0 | 98 | 0.9804 | 0.7143 |
| 0.0098 | 15.0 | 105 | 1.0136 | 0.7143 |
| 0.0086 | 16.0 | 112 | 1.0424 | 0.7143 |
| 0.0089 | 17.0 | 119 | 1.0736 | 0.7143 |
| 0.0068 | 18.0 | 126 | 1.0808 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-16-0
|
SetFit
| 2022-02-10T07:18:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6903
- Accuracy: 0.5091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6934 | 1.0 | 7 | 0.7142 | 0.2857 |
| 0.6703 | 2.0 | 14 | 0.7379 | 0.2857 |
| 0.6282 | 3.0 | 21 | 0.7769 | 0.2857 |
| 0.5193 | 4.0 | 28 | 0.8799 | 0.2857 |
| 0.5104 | 5.0 | 35 | 0.8380 | 0.4286 |
| 0.2504 | 6.0 | 42 | 0.8622 | 0.4286 |
| 0.1794 | 7.0 | 49 | 0.9227 | 0.4286 |
| 0.1156 | 8.0 | 56 | 0.8479 | 0.4286 |
| 0.0709 | 9.0 | 63 | 1.0929 | 0.2857 |
| 0.0471 | 10.0 | 70 | 1.2189 | 0.2857 |
| 0.0288 | 11.0 | 77 | 1.2026 | 0.4286 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-8-8
|
SetFit
| 2022-02-10T07:16:33Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6925
- Accuracy: 0.5200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7061 | 1.0 | 3 | 0.6899 | 0.75 |
| 0.6627 | 2.0 | 6 | 0.7026 | 0.25 |
| 0.644 | 3.0 | 9 | 0.7158 | 0.25 |
| 0.6087 | 4.0 | 12 | 0.7325 | 0.25 |
| 0.5602 | 5.0 | 15 | 0.7555 | 0.25 |
| 0.5034 | 6.0 | 18 | 0.7725 | 0.25 |
| 0.4672 | 7.0 | 21 | 0.7983 | 0.25 |
| 0.403 | 8.0 | 24 | 0.8314 | 0.25 |
| 0.3571 | 9.0 | 27 | 0.8555 | 0.25 |
| 0.2792 | 10.0 | 30 | 0.9065 | 0.25 |
| 0.2373 | 11.0 | 33 | 0.9286 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-8-7
|
SetFit
| 2022-02-10T07:15:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6950
- Accuracy: 0.4618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7156 | 1.0 | 3 | 0.6965 | 0.25 |
| 0.6645 | 2.0 | 6 | 0.7059 | 0.25 |
| 0.6368 | 3.0 | 9 | 0.7179 | 0.25 |
| 0.5944 | 4.0 | 12 | 0.7408 | 0.25 |
| 0.5369 | 5.0 | 15 | 0.7758 | 0.25 |
| 0.449 | 6.0 | 18 | 0.8009 | 0.25 |
| 0.4352 | 7.0 | 21 | 0.8209 | 0.5 |
| 0.3462 | 8.0 | 24 | 0.8470 | 0.5 |
| 0.3028 | 9.0 | 27 | 0.8579 | 0.5 |
| 0.2365 | 10.0 | 30 | 0.8704 | 0.5 |
| 0.2023 | 11.0 | 33 | 0.8770 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-8-6
|
SetFit
| 2022-02-10T07:14:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5336
- Accuracy: 0.7523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7161 | 1.0 | 3 | 0.6941 | 0.5 |
| 0.6786 | 2.0 | 6 | 0.7039 | 0.25 |
| 0.6586 | 3.0 | 9 | 0.7090 | 0.25 |
| 0.6121 | 4.0 | 12 | 0.7183 | 0.25 |
| 0.5696 | 5.0 | 15 | 0.7266 | 0.25 |
| 0.522 | 6.0 | 18 | 0.7305 | 0.25 |
| 0.4899 | 7.0 | 21 | 0.7339 | 0.25 |
| 0.3985 | 8.0 | 24 | 0.7429 | 0.25 |
| 0.3758 | 9.0 | 27 | 0.7224 | 0.25 |
| 0.2876 | 10.0 | 30 | 0.7068 | 0.5 |
| 0.2498 | 11.0 | 33 | 0.6751 | 0.75 |
| 0.1921 | 12.0 | 36 | 0.6487 | 0.75 |
| 0.1491 | 13.0 | 39 | 0.6261 | 0.75 |
| 0.1276 | 14.0 | 42 | 0.6102 | 0.75 |
| 0.0996 | 15.0 | 45 | 0.5964 | 0.75 |
| 0.073 | 16.0 | 48 | 0.6019 | 0.75 |
| 0.0627 | 17.0 | 51 | 0.5933 | 0.75 |
| 0.053 | 18.0 | 54 | 0.5768 | 0.75 |
| 0.0403 | 19.0 | 57 | 0.5698 | 0.75 |
| 0.0328 | 20.0 | 60 | 0.5656 | 0.75 |
| 0.03 | 21.0 | 63 | 0.5634 | 0.75 |
| 0.025 | 22.0 | 66 | 0.5620 | 0.75 |
| 0.0209 | 23.0 | 69 | 0.5623 | 0.75 |
| 0.0214 | 24.0 | 72 | 0.5606 | 0.75 |
| 0.0191 | 25.0 | 75 | 0.5565 | 0.75 |
| 0.0173 | 26.0 | 78 | 0.5485 | 0.75 |
| 0.0175 | 27.0 | 81 | 0.5397 | 0.75 |
| 0.0132 | 28.0 | 84 | 0.5322 | 0.75 |
| 0.0138 | 29.0 | 87 | 0.5241 | 0.75 |
| 0.0128 | 30.0 | 90 | 0.5235 | 0.75 |
| 0.0126 | 31.0 | 93 | 0.5253 | 0.75 |
| 0.012 | 32.0 | 96 | 0.5317 | 0.75 |
| 0.0118 | 33.0 | 99 | 0.5342 | 0.75 |
| 0.0092 | 34.0 | 102 | 0.5388 | 0.75 |
| 0.0117 | 35.0 | 105 | 0.5414 | 0.75 |
| 0.0124 | 36.0 | 108 | 0.5453 | 0.75 |
| 0.0109 | 37.0 | 111 | 0.5506 | 0.75 |
| 0.0112 | 38.0 | 114 | 0.5555 | 0.75 |
| 0.0087 | 39.0 | 117 | 0.5597 | 0.75 |
| 0.01 | 40.0 | 120 | 0.5640 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-8-5
|
SetFit
| 2022-02-10T07:13:25Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8419
- Accuracy: 0.6172
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7057 | 1.0 | 3 | 0.6848 | 0.75 |
| 0.6681 | 2.0 | 6 | 0.6875 | 0.5 |
| 0.6591 | 3.0 | 9 | 0.6868 | 0.25 |
| 0.6052 | 4.0 | 12 | 0.6943 | 0.25 |
| 0.557 | 5.0 | 15 | 0.7078 | 0.25 |
| 0.4954 | 6.0 | 18 | 0.7168 | 0.25 |
| 0.4593 | 7.0 | 21 | 0.7185 | 0.25 |
| 0.3936 | 8.0 | 24 | 0.7212 | 0.25 |
| 0.3699 | 9.0 | 27 | 0.6971 | 0.5 |
| 0.2916 | 10.0 | 30 | 0.6827 | 0.5 |
| 0.2511 | 11.0 | 33 | 0.6464 | 0.5 |
| 0.2109 | 12.0 | 36 | 0.6344 | 0.75 |
| 0.1655 | 13.0 | 39 | 0.6377 | 0.75 |
| 0.1412 | 14.0 | 42 | 0.6398 | 0.75 |
| 0.1157 | 15.0 | 45 | 0.6315 | 0.75 |
| 0.0895 | 16.0 | 48 | 0.6210 | 0.75 |
| 0.0783 | 17.0 | 51 | 0.5918 | 0.75 |
| 0.0606 | 18.0 | 54 | 0.5543 | 0.75 |
| 0.0486 | 19.0 | 57 | 0.5167 | 0.75 |
| 0.0405 | 20.0 | 60 | 0.4862 | 0.75 |
| 0.0376 | 21.0 | 63 | 0.4644 | 0.75 |
| 0.0294 | 22.0 | 66 | 0.4497 | 0.75 |
| 0.0261 | 23.0 | 69 | 0.4428 | 0.75 |
| 0.0238 | 24.0 | 72 | 0.4408 | 0.75 |
| 0.0217 | 25.0 | 75 | 0.4392 | 0.75 |
| 0.0187 | 26.0 | 78 | 0.4373 | 0.75 |
| 0.0177 | 27.0 | 81 | 0.4360 | 0.75 |
| 0.0136 | 28.0 | 84 | 0.4372 | 0.75 |
| 0.0144 | 29.0 | 87 | 0.4368 | 0.75 |
| 0.014 | 30.0 | 90 | 0.4380 | 0.75 |
| 0.0137 | 31.0 | 93 | 0.4383 | 0.75 |
| 0.0133 | 32.0 | 96 | 0.4409 | 0.75 |
| 0.013 | 33.0 | 99 | 0.4380 | 0.75 |
| 0.0096 | 34.0 | 102 | 0.4358 | 0.75 |
| 0.012 | 35.0 | 105 | 0.4339 | 0.75 |
| 0.0122 | 36.0 | 108 | 0.4305 | 0.75 |
| 0.0109 | 37.0 | 111 | 0.4267 | 0.75 |
| 0.0121 | 38.0 | 114 | 0.4231 | 0.75 |
| 0.0093 | 39.0 | 117 | 0.4209 | 0.75 |
| 0.0099 | 40.0 | 120 | 0.4199 | 0.75 |
| 0.0091 | 41.0 | 123 | 0.4184 | 0.75 |
| 0.0116 | 42.0 | 126 | 0.4173 | 0.75 |
| 0.01 | 43.0 | 129 | 0.4163 | 0.75 |
| 0.0098 | 44.0 | 132 | 0.4153 | 0.75 |
| 0.0101 | 45.0 | 135 | 0.4155 | 0.75 |
| 0.0088 | 46.0 | 138 | 0.4149 | 0.75 |
| 0.0087 | 47.0 | 141 | 0.4150 | 0.75 |
| 0.0093 | 48.0 | 144 | 0.4147 | 0.75 |
| 0.0081 | 49.0 | 147 | 0.4147 | 0.75 |
| 0.009 | 50.0 | 150 | 0.4150 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-8-2
|
SetFit
| 2022-02-10T07:10:08Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6932
- Accuracy: 0.4931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7081 | 1.0 | 3 | 0.7031 | 0.25 |
| 0.6853 | 2.0 | 6 | 0.7109 | 0.25 |
| 0.6696 | 3.0 | 9 | 0.7211 | 0.25 |
| 0.6174 | 4.0 | 12 | 0.7407 | 0.25 |
| 0.5717 | 5.0 | 15 | 0.7625 | 0.25 |
| 0.5096 | 6.0 | 18 | 0.7732 | 0.25 |
| 0.488 | 7.0 | 21 | 0.7798 | 0.25 |
| 0.4023 | 8.0 | 24 | 0.7981 | 0.25 |
| 0.3556 | 9.0 | 27 | 0.8110 | 0.25 |
| 0.2714 | 10.0 | 30 | 0.8269 | 0.25 |
| 0.2295 | 11.0 | 33 | 0.8276 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-8-0
|
SetFit
| 2022-02-10T07:08:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6920
- Accuracy: 0.5189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6916 | 1.0 | 3 | 0.7035 | 0.25 |
| 0.6852 | 2.0 | 6 | 0.7139 | 0.25 |
| 0.6533 | 3.0 | 9 | 0.7192 | 0.25 |
| 0.6211 | 4.0 | 12 | 0.7322 | 0.25 |
| 0.5522 | 5.0 | 15 | 0.7561 | 0.25 |
| 0.488 | 6.0 | 18 | 0.7883 | 0.25 |
| 0.48 | 7.0 | 21 | 0.8224 | 0.25 |
| 0.3948 | 8.0 | 24 | 0.8605 | 0.25 |
| 0.3478 | 9.0 | 27 | 0.8726 | 0.25 |
| 0.2723 | 10.0 | 30 | 0.8885 | 0.25 |
| 0.2174 | 11.0 | 33 | 0.8984 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
bitmorse/kickstarter-distilbert-model
|
bitmorse
| 2022-02-10T06:31:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: kickstarter-distilbert-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kickstarter-distilbert-model
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.2
- Tokenizers 0.11.0
|
speech-seq2seq/wav2vec2-2-roberta-large
|
speech-seq2seq
| 2022-02-10T06:14:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 12.2365
- Wer: 1.0
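The card does not describe how the encoder-decoder was assembled; going only by the repository name (wav2vec2 + roberta-large), a hedged sketch with `SpeechEncoderDecoderModel` — the exact wav2vec2 checkpoint is an assumption:
```python
from transformers import AutoFeatureExtractor, AutoTokenizer, SpeechEncoderDecoderModel

encoder_id = "facebook/wav2vec2-large-lv60"  # assumption: the card only implies "wav2vec2"
decoder_id = "roberta-large"

# Pair a speech encoder with a text decoder for sequence-to-sequence ASR.
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)
feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
tokenizer = AutoTokenizer.from_pretrained(decoder_id)

# The decoder needs explicit start and padding token ids before training.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```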
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 6.5774 | 0.28 | 500 | 10.5449 | 1.0 |
| 6.706 | 0.56 | 1000 | 9.4411 | 1.0 |
| 6.9182 | 0.84 | 1500 | 10.9554 | 1.0 |
| 6.7416 | 1.12 | 2000 | 10.0801 | 1.0 |
| 6.8778 | 1.4 | 2500 | 9.8569 | 1.0 |
| 6.7694 | 1.68 | 3000 | 10.4234 | 1.0 |
| 6.7415 | 1.96 | 3500 | 10.6545 | 1.0 |
| 6.5997 | 2.24 | 4000 | 10.4268 | 1.0 |
| 6.7672 | 2.52 | 4500 | 11.1929 | 1.0 |
| 6.5254 | 2.8 | 5000 | 12.2365 | 1.0 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
philippelaban/summary_loop10
|
philippelaban
| 2022-02-09T22:02:12Z | 15 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"summarization",
"en",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- summarization
license: apache-2.0
datasets:
- cnn_dailymail
---
# Try out in the Hosted inference API
In the panel on the right, you can try out the model (although it only handles short input sequences).
Enter the document you want to summarize there.
# Model Loading
The model (based on a GPT2 base architecture) can be loaded in the following way:
```
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
model = GPT2LMHeadModel.from_pretrained("philippelaban/summary_loop10")
tokenizer = GPT2TokenizerFast.from_pretrained("philippelaban/summary_loop10")
```
# Example Use
```
document = "Bouncing Boulders Point to Quakes on Mars. A preponderance of boulder tracks on the red planet may be evidence of recent seismic activity. If a rock falls on Mars, and no one is there to see it, does it leave a trace? Yes, and it's a beautiful herringbone-like pattern, new research reveals. Scientists have now spotted thousands of tracks on the red planet created by tumbling boulders. Delicate chevron-shaped piles of Martian dust and sand frame the tracks, the team showed, and most fade over the course of a few years. Rockfalls have been spotted elsewhere in the solar system, including on the moon and even a comet. But a big open question is the timing of these processes on other worlds — are they ongoing or did they predominantly occur in the past?"
model = model.cuda()  # move the model to the same device (GPU) as the inputs below
tokenized_document = tokenizer([document], max_length=300, truncation=True, return_tensors="pt")["input_ids"].cuda()
input_shape = tokenized_document.shape
outputs = model.generate(tokenized_document, do_sample=False, max_length=500, num_beams=4, num_return_sequences=4, no_repeat_ngram_size=6, return_dict_in_generate=True, output_scores=True)
candidate_sequences = outputs.sequences[:, input_shape[1]:] # Remove the encoded text, keep only the summary
candidate_scores = outputs.sequences_scores.tolist()
for candidate_tokens, score in zip(candidate_sequences, candidate_scores):
summary = tokenizer.decode(candidate_tokens)
print("[Score: %.3f] %s" % (score, summary[:summary.index("END")]))
```
# Example output
```
[Score: -0.084] Here's what you need to know about rockfalls
[Score: -0.087] Here's what you need to know about these tracks
[Score: -0.091] Here's what we know so far about these tracks
[Score: -0.101] Here's what you need to know about rockfall
```
# Github repo
More information, the scoring function, the training script, and an example training log are available in the GitHub repo: https://github.com/CannyLab/summary_loop
|
SetFit/distilbert-base-uncased__subj__train-8-8
|
SetFit
| 2022-02-09T20:32:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3160
- Accuracy: 0.8735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7187 | 1.0 | 3 | 0.6776 | 1.0 |
| 0.684 | 2.0 | 6 | 0.6608 | 1.0 |
| 0.6532 | 3.0 | 9 | 0.6364 | 1.0 |
| 0.5996 | 4.0 | 12 | 0.6119 | 1.0 |
| 0.5242 | 5.0 | 15 | 0.5806 | 1.0 |
| 0.4612 | 6.0 | 18 | 0.5320 | 1.0 |
| 0.4192 | 7.0 | 21 | 0.4714 | 1.0 |
| 0.3274 | 8.0 | 24 | 0.4071 | 1.0 |
| 0.2871 | 9.0 | 27 | 0.3378 | 1.0 |
| 0.2082 | 10.0 | 30 | 0.2822 | 1.0 |
| 0.1692 | 11.0 | 33 | 0.2271 | 1.0 |
| 0.1242 | 12.0 | 36 | 0.1793 | 1.0 |
| 0.0977 | 13.0 | 39 | 0.1417 | 1.0 |
| 0.0776 | 14.0 | 42 | 0.1117 | 1.0 |
| 0.0631 | 15.0 | 45 | 0.0894 | 1.0 |
| 0.0453 | 16.0 | 48 | 0.0733 | 1.0 |
| 0.0399 | 17.0 | 51 | 0.0617 | 1.0 |
| 0.0333 | 18.0 | 54 | 0.0528 | 1.0 |
| 0.0266 | 19.0 | 57 | 0.0454 | 1.0 |
| 0.0234 | 20.0 | 60 | 0.0393 | 1.0 |
| 0.0223 | 21.0 | 63 | 0.0345 | 1.0 |
| 0.0195 | 22.0 | 66 | 0.0309 | 1.0 |
| 0.0161 | 23.0 | 69 | 0.0281 | 1.0 |
| 0.0167 | 24.0 | 72 | 0.0260 | 1.0 |
| 0.0163 | 25.0 | 75 | 0.0242 | 1.0 |
| 0.0134 | 26.0 | 78 | 0.0227 | 1.0 |
| 0.0128 | 27.0 | 81 | 0.0214 | 1.0 |
| 0.0101 | 28.0 | 84 | 0.0204 | 1.0 |
| 0.0109 | 29.0 | 87 | 0.0194 | 1.0 |
| 0.0112 | 30.0 | 90 | 0.0186 | 1.0 |
| 0.0108 | 31.0 | 93 | 0.0179 | 1.0 |
| 0.011 | 32.0 | 96 | 0.0174 | 1.0 |
| 0.0099 | 33.0 | 99 | 0.0169 | 1.0 |
| 0.0083 | 34.0 | 102 | 0.0164 | 1.0 |
| 0.0096 | 35.0 | 105 | 0.0160 | 1.0 |
| 0.01 | 36.0 | 108 | 0.0156 | 1.0 |
| 0.0084 | 37.0 | 111 | 0.0152 | 1.0 |
| 0.0089 | 38.0 | 114 | 0.0149 | 1.0 |
| 0.0073 | 39.0 | 117 | 0.0146 | 1.0 |
| 0.0082 | 40.0 | 120 | 0.0143 | 1.0 |
| 0.008 | 41.0 | 123 | 0.0141 | 1.0 |
| 0.0093 | 42.0 | 126 | 0.0139 | 1.0 |
| 0.0078 | 43.0 | 129 | 0.0138 | 1.0 |
| 0.0086 | 44.0 | 132 | 0.0136 | 1.0 |
| 0.009 | 45.0 | 135 | 0.0135 | 1.0 |
| 0.0072 | 46.0 | 138 | 0.0134 | 1.0 |
| 0.0075 | 47.0 | 141 | 0.0133 | 1.0 |
| 0.0082 | 48.0 | 144 | 0.0133 | 1.0 |
| 0.0068 | 49.0 | 147 | 0.0132 | 1.0 |
| 0.0074 | 50.0 | 150 | 0.0132 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__subj__train-8-7
|
SetFit
| 2022-02-09T20:30:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2766
- Accuracy: 0.8845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7044 | 1.0 | 3 | 0.6909 | 0.5 |
| 0.6678 | 2.0 | 6 | 0.6901 | 0.5 |
| 0.6336 | 3.0 | 9 | 0.6807 | 0.5 |
| 0.5926 | 4.0 | 12 | 0.6726 | 0.5 |
| 0.5221 | 5.0 | 15 | 0.6648 | 0.5 |
| 0.4573 | 6.0 | 18 | 0.6470 | 0.5 |
| 0.4177 | 7.0 | 21 | 0.6251 | 0.5 |
| 0.3252 | 8.0 | 24 | 0.5994 | 0.5 |
| 0.2831 | 9.0 | 27 | 0.5529 | 0.5 |
| 0.213 | 10.0 | 30 | 0.5078 | 0.75 |
| 0.1808 | 11.0 | 33 | 0.4521 | 1.0 |
| 0.1355 | 12.0 | 36 | 0.3996 | 1.0 |
| 0.1027 | 13.0 | 39 | 0.3557 | 1.0 |
| 0.0862 | 14.0 | 42 | 0.3121 | 1.0 |
| 0.0682 | 15.0 | 45 | 0.2828 | 1.0 |
| 0.0517 | 16.0 | 48 | 0.2603 | 1.0 |
| 0.0466 | 17.0 | 51 | 0.2412 | 1.0 |
| 0.038 | 18.0 | 54 | 0.2241 | 1.0 |
| 0.0276 | 19.0 | 57 | 0.2096 | 1.0 |
| 0.0246 | 20.0 | 60 | 0.1969 | 1.0 |
| 0.0249 | 21.0 | 63 | 0.1859 | 1.0 |
| 0.0201 | 22.0 | 66 | 0.1770 | 1.0 |
| 0.018 | 23.0 | 69 | 0.1703 | 1.0 |
| 0.0164 | 24.0 | 72 | 0.1670 | 1.0 |
| 0.0172 | 25.0 | 75 | 0.1639 | 1.0 |
| 0.0135 | 26.0 | 78 | 0.1604 | 1.0 |
| 0.014 | 27.0 | 81 | 0.1585 | 1.0 |
| 0.0108 | 28.0 | 84 | 0.1569 | 1.0 |
| 0.0116 | 29.0 | 87 | 0.1549 | 1.0 |
| 0.0111 | 30.0 | 90 | 0.1532 | 1.0 |
| 0.0113 | 31.0 | 93 | 0.1513 | 1.0 |
| 0.0104 | 32.0 | 96 | 0.1503 | 1.0 |
| 0.01 | 33.0 | 99 | 0.1490 | 1.0 |
| 0.0079 | 34.0 | 102 | 0.1479 | 1.0 |
| 0.0097 | 35.0 | 105 | 0.1466 | 1.0 |
| 0.0112 | 36.0 | 108 | 0.1458 | 1.0 |
| 0.0091 | 37.0 | 111 | 0.1457 | 1.0 |
| 0.0098 | 38.0 | 114 | 0.1454 | 1.0 |
| 0.0076 | 39.0 | 117 | 0.1451 | 1.0 |
| 0.0085 | 40.0 | 120 | 0.1448 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.1445 | 1.0 |
| 0.0096 | 42.0 | 126 | 0.1440 | 1.0 |
| 0.0081 | 43.0 | 129 | 0.1430 | 1.0 |
| 0.0083 | 44.0 | 132 | 0.1424 | 1.0 |
| 0.0088 | 45.0 | 135 | 0.1418 | 1.0 |
| 0.0077 | 46.0 | 138 | 0.1414 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.1413 | 1.0 |
| 0.0084 | 48.0 | 144 | 0.1412 | 1.0 |
| 0.0072 | 49.0 | 147 | 0.1411 | 1.0 |
| 0.0077 | 50.0 | 150 | 0.1411 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__subj__train-8-5
|
SetFit
| 2022-02-09T20:26:29Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6927
- Accuracy: 0.506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7102 | 1.0 | 3 | 0.6790 | 0.75 |
| 0.6693 | 2.0 | 6 | 0.6831 | 0.75 |
| 0.6438 | 3.0 | 9 | 0.6876 | 0.75 |
| 0.6047 | 4.0 | 12 | 0.6970 | 0.75 |
| 0.547 | 5.0 | 15 | 0.7065 | 0.75 |
| 0.4885 | 6.0 | 18 | 0.7114 | 0.75 |
| 0.4601 | 7.0 | 21 | 0.7147 | 0.5 |
| 0.4017 | 8.0 | 24 | 0.7178 | 0.5 |
| 0.3474 | 9.0 | 27 | 0.7145 | 0.5 |
| 0.2624 | 10.0 | 30 | 0.7153 | 0.5 |
| 0.2175 | 11.0 | 33 | 0.7158 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__subj__train-8-4
|
SetFit
| 2022-02-09T20:25:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3305
- Accuracy: 0.8565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6991 | 1.0 | 3 | 0.6772 | 0.75 |
| 0.6707 | 2.0 | 6 | 0.6704 | 0.75 |
| 0.6402 | 3.0 | 9 | 0.6608 | 1.0 |
| 0.5789 | 4.0 | 12 | 0.6547 | 0.75 |
| 0.5211 | 5.0 | 15 | 0.6434 | 0.75 |
| 0.454 | 6.0 | 18 | 0.6102 | 1.0 |
| 0.4187 | 7.0 | 21 | 0.5701 | 1.0 |
| 0.3401 | 8.0 | 24 | 0.5289 | 1.0 |
| 0.3107 | 9.0 | 27 | 0.4737 | 1.0 |
| 0.2381 | 10.0 | 30 | 0.4255 | 1.0 |
| 0.1982 | 11.0 | 33 | 0.3685 | 1.0 |
| 0.1631 | 12.0 | 36 | 0.3200 | 1.0 |
| 0.1234 | 13.0 | 39 | 0.2798 | 1.0 |
| 0.0993 | 14.0 | 42 | 0.2455 | 1.0 |
| 0.0781 | 15.0 | 45 | 0.2135 | 1.0 |
| 0.0586 | 16.0 | 48 | 0.1891 | 1.0 |
| 0.0513 | 17.0 | 51 | 0.1671 | 1.0 |
| 0.043 | 18.0 | 54 | 0.1427 | 1.0 |
| 0.0307 | 19.0 | 57 | 0.1225 | 1.0 |
| 0.0273 | 20.0 | 60 | 0.1060 | 1.0 |
| 0.0266 | 21.0 | 63 | 0.0920 | 1.0 |
| 0.0233 | 22.0 | 66 | 0.0823 | 1.0 |
| 0.0185 | 23.0 | 69 | 0.0751 | 1.0 |
| 0.0173 | 24.0 | 72 | 0.0698 | 1.0 |
| 0.0172 | 25.0 | 75 | 0.0651 | 1.0 |
| 0.0142 | 26.0 | 78 | 0.0613 | 1.0 |
| 0.0151 | 27.0 | 81 | 0.0583 | 1.0 |
| 0.0117 | 28.0 | 84 | 0.0563 | 1.0 |
| 0.0123 | 29.0 | 87 | 0.0546 | 1.0 |
| 0.0121 | 30.0 | 90 | 0.0531 | 1.0 |
| 0.0123 | 31.0 | 93 | 0.0511 | 1.0 |
| 0.0112 | 32.0 | 96 | 0.0496 | 1.0 |
| 0.0103 | 33.0 | 99 | 0.0481 | 1.0 |
| 0.0086 | 34.0 | 102 | 0.0468 | 1.0 |
| 0.0096 | 35.0 | 105 | 0.0457 | 1.0 |
| 0.0107 | 36.0 | 108 | 0.0447 | 1.0 |
| 0.0095 | 37.0 | 111 | 0.0439 | 1.0 |
| 0.0102 | 38.0 | 114 | 0.0429 | 1.0 |
| 0.0077 | 39.0 | 117 | 0.0422 | 1.0 |
| 0.0092 | 40.0 | 120 | 0.0415 | 1.0 |
| 0.0083 | 41.0 | 123 | 0.0409 | 1.0 |
| 0.0094 | 42.0 | 126 | 0.0404 | 1.0 |
| 0.0084 | 43.0 | 129 | 0.0400 | 1.0 |
| 0.0085 | 44.0 | 132 | 0.0396 | 1.0 |
| 0.0092 | 45.0 | 135 | 0.0392 | 1.0 |
| 0.0076 | 46.0 | 138 | 0.0389 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.0388 | 1.0 |
| 0.0085 | 48.0 | 144 | 0.0387 | 1.0 |
| 0.0071 | 49.0 | 147 | 0.0386 | 1.0 |
| 0.0079 | 50.0 | 150 | 0.0386 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__subj__train-8-2
|
SetFit
| 2022-02-09T20:21:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3081
- Accuracy: 0.8755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7146 | 1.0 | 3 | 0.6798 | 0.75 |
| 0.6737 | 2.0 | 6 | 0.6847 | 0.75 |
| 0.6519 | 3.0 | 9 | 0.6783 | 0.75 |
| 0.6105 | 4.0 | 12 | 0.6812 | 0.25 |
| 0.5463 | 5.0 | 15 | 0.6869 | 0.25 |
| 0.4922 | 6.0 | 18 | 0.6837 | 0.5 |
| 0.4543 | 7.0 | 21 | 0.6716 | 0.5 |
| 0.3856 | 8.0 | 24 | 0.6613 | 0.75 |
| 0.3475 | 9.0 | 27 | 0.6282 | 0.75 |
| 0.2717 | 10.0 | 30 | 0.6045 | 0.75 |
| 0.2347 | 11.0 | 33 | 0.5620 | 0.75 |
| 0.1979 | 12.0 | 36 | 0.5234 | 1.0 |
| 0.1535 | 13.0 | 39 | 0.4771 | 1.0 |
| 0.1332 | 14.0 | 42 | 0.4277 | 1.0 |
| 0.1041 | 15.0 | 45 | 0.3785 | 1.0 |
| 0.082 | 16.0 | 48 | 0.3318 | 1.0 |
| 0.0672 | 17.0 | 51 | 0.2885 | 1.0 |
| 0.0538 | 18.0 | 54 | 0.2568 | 1.0 |
| 0.0412 | 19.0 | 57 | 0.2356 | 1.0 |
| 0.0361 | 20.0 | 60 | 0.2217 | 1.0 |
| 0.0303 | 21.0 | 63 | 0.2125 | 1.0 |
| 0.0268 | 22.0 | 66 | 0.2060 | 1.0 |
| 0.0229 | 23.0 | 69 | 0.2015 | 1.0 |
| 0.0215 | 24.0 | 72 | 0.1989 | 1.0 |
| 0.0211 | 25.0 | 75 | 0.1969 | 1.0 |
| 0.0172 | 26.0 | 78 | 0.1953 | 1.0 |
| 0.0165 | 27.0 | 81 | 0.1935 | 1.0 |
| 0.0132 | 28.0 | 84 | 0.1923 | 1.0 |
| 0.0146 | 29.0 | 87 | 0.1914 | 1.0 |
| 0.0125 | 30.0 | 90 | 0.1904 | 1.0 |
| 0.0119 | 31.0 | 93 | 0.1897 | 1.0 |
| 0.0122 | 32.0 | 96 | 0.1886 | 1.0 |
| 0.0118 | 33.0 | 99 | 0.1875 | 1.0 |
| 0.0097 | 34.0 | 102 | 0.1866 | 1.0 |
| 0.0111 | 35.0 | 105 | 0.1861 | 1.0 |
| 0.0111 | 36.0 | 108 | 0.1855 | 1.0 |
| 0.0102 | 37.0 | 111 | 0.1851 | 1.0 |
| 0.0109 | 38.0 | 114 | 0.1851 | 1.0 |
| 0.0085 | 39.0 | 117 | 0.1854 | 1.0 |
| 0.0089 | 40.0 | 120 | 0.1855 | 1.0 |
| 0.0092 | 41.0 | 123 | 0.1863 | 1.0 |
| 0.0105 | 42.0 | 126 | 0.1868 | 1.0 |
| 0.0089 | 43.0 | 129 | 0.1874 | 1.0 |
| 0.0091 | 44.0 | 132 | 0.1877 | 1.0 |
| 0.0096 | 45.0 | 135 | 0.1881 | 1.0 |
| 0.0081 | 46.0 | 138 | 0.1881 | 1.0 |
| 0.0086 | 47.0 | 141 | 0.1883 | 1.0 |
| 0.009 | 48.0 | 144 | 0.1884 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
tizaino/bert-base-uncased-finetuned-Pisa
|
tizaino
| 2022-02-09T18:49:30Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-Pisa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-Pisa
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1132
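A minimal fill-mask inference sketch for this checkpoint (the example sentence is illustrative only):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="tizaino/bert-base-uncased-finetuned-Pisa")
# bert-base-uncased uses [MASK] as its mask token.
print(fill_mask("Pisa is famous for its leaning [MASK]."))
```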
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 1.4146 |
| No log | 2.0 | 18 | 1.1013 |
| No log | 3.0 | 27 | 1.1237 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.7.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/rwinshow
|
huggingtweets
| 2022-02-09T18:47:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/rwinshow/1644432421342/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1254050485608333317/wm7H1qKs_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">RwinShow</div>
<div style="text-align: center; font-size: 14px;">@rwinshow</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from RwinShow.
| Data | RwinShow |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 84 |
| Short tweets | 460 |
| Tweets kept | 2705 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1j7pso36/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rwinshow's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3g39dgiy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3g39dgiy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/rwinshow')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Maunish/ecomm-sbert
|
Maunish
| 2022-02-09T17:47:29Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
justin871030/bert-base-uncased-goemotions-original-finetuned
|
justin871030
| 2022-02-09T17:17:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"go-emotion",
"text-classification",
"en",
"dataset:go_emotions",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- go-emotion
- text-classification
- pytorch
datasets:
- go_emotions
metrics:
- f1
widget:
- text: "Thanks for giving advice to the people who need it! 👌🙏"
license: mit
---
## Model Description
1. Based on the pretrained uncased BERT model with a linear output layer.
2. Added several commonly used emojis and tokens to the tokenizer's special-token list.
3. Applied label smoothing during training.
4. Used weighted loss and focal loss to improve performance on classes that trained poorly.
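The loss implementation itself is not part of this card; below is a minimal sketch of a weighted, multi-label focal loss in the spirit of point 4 above, assuming independent sigmoid outputs per emotion label:
```python
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits, targets, gamma=2.0, class_weights=None):
    """Multi-label focal loss: down-weights well-classified examples so that
    the poorly trained ("hard") cases dominate the gradient."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    probs = torch.sigmoid(logits)
    p_t = probs * targets + (1 - probs) * (1 - targets)  # probability of the true label
    loss = ce * (1 - p_t) ** gamma
    if class_weights is not None:                         # the "weighted loss" part
        loss = loss * class_weights
    return loss.mean()
```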
## Results
Best `Macro F1` result: 53%
## Tutorial Link
- [GitHub](https://github.com/justin871030/GoEmotions)
|
huggingtweets/man24car
|
huggingtweets
| 2022-02-09T16:06:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/man24car/1644422772686/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1475950695329275905/8MOXbfHE_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">FastCarMan24</div>
<div style="text-align: center; font-size: 14px;">@man24car</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from FastCarMan24.
| Data | FastCarMan24 |
| --- | --- |
| Tweets downloaded | 860 |
| Retweets | 211 |
| Short tweets | 159 |
| Tweets kept | 490 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2oq7rh5p/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @man24car's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/19d4nhfe) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/19d4nhfe/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/man24car')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
navteca/multi-qa-mpnet-base-cos-v1
|
navteca
| 2022-02-09T14:55:14Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
language: en
license: mit
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- sentence-transformers
---
# Multi QA MPNet base model for Semantic Search
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources.
This model uses [`mpnet-base`](https://huggingface.co/microsoft/mpnet-base).
## Training Data
We used a concatenation of multiple datasets to fine-tune this model, about 215M (question, answer) pairs in total. The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using mean pooling, cosine similarity as the similarity function, and a scale of 20.
| Dataset | Number of training tuples |
|--------------------------------------------------------|:--------------------------:|
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 |
| [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 |
| [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 |
| **Total** | **214,988,242** |
## Technical Details
Some technical details on how this model should be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 768 |
| Produces normalized embeddings | Yes |
| Pooling-Method | Mean pooling |
| Suitable score functions | dot-product, cosine-similarity, or euclidean distance |
Note: This model produces normalized embeddings of length 1, so dot-product and cosine-similarity scores are equivalent. Dot-product is preferred because it is faster. Euclidean distance gives the same ranking as dot-product for these unit-length embeddings and can also be used.
## Usage and Performance
The trained model can be used like this:
```python
from sentence_transformers import SentenceTransformer, util
question = "That is a happy person"
contexts = [
"That is a happy dog",
"That is a very happy person",
"Today is a sunny day"
]
# Load the model
model = SentenceTransformer('navteca/multi-qa-mpnet-base-cos-v1')
# Encode question and contexts
question_emb = model.encode(question)
contexts_emb = model.encode(contexts)
# Compute dot score between question and all contexts embeddings
result = util.dot_score(question_emb, contexts_emb)[0].cpu().tolist()
print(result)
#[
# 0.60806852579116820,
# 0.94949364662170410,
# 0.29836517572402954
#]
```
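Because the embeddings are normalized, cosine similarity yields the same scores (up to floating-point error) as the dot-product above; continuing from the snippet (`util.pytorch_cos_sim` is the older alias in earlier sentence-transformers releases):
```python
# Cosine similarity matches the dot-product scores since the embeddings are unit length.
cos_result = util.cos_sim(question_emb, contexts_emb)[0].cpu().tolist()
print(cos_result)  # ~same values as `result` above
```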
|
Plim/xls-r-300m-cv_8-fr
|
Plim
| 2022-02-09T13:59:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"fr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
model-index:
- name: XLS-R-300m - French
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: fr
metrics:
- name: Test WER
type: wer
value: to recompute with STEP 24000
- name: Test CER
type: cer
value: to recompute with STEP 24000
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Test WER
type: wer
value: 35.29
- name: Test CER
type: cer
value: 13.94
---
## Model description
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.
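A minimal inference sketch, assuming 16 kHz mono French audio and that the checkpoint loads through the standard `automatic-speech-recognition` pipeline (the audio path below is a placeholder):
```python
from transformers import pipeline

# Sketch: the ASR pipeline decodes and resamples common audio formats when ffmpeg is available.
asr = pipeline("automatic-speech-recognition", model="Plim/xls-r-300m-cv_8-fr")

print(asr("audio.wav")["text"])  # "audio.wav" is a placeholder French speech recording
```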
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0 (extended to 7.0 with training with checkpoint)
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.9114 | 0.29 | 1000 | inf | 0.9997 |
| 1.2436 | 0.57 | 2000 | inf | 0.4310 |
| 1.0552 | 0.86 | 3000 | inf | 0.3144 |
| 1.0044 | 1.15 | 4000 | inf | 0.2814 |
| 0.9718 | 1.43 | 5000 | inf | 0.2658 |
| 0.9502 | 1.72 | 6000 | inf | 0.2566 |
| 0.9418 | 2.01 | 7000 | inf | 0.2476 |
| 0.9215 | 2.29 | 8000 | inf | 0.2420 |
| 0.9236 | 2.58 | 9000 | inf | 0.2388 |
| 0.9014 | 2.87 | 10000 | inf | 0.2354 |
| 0.8814 | 3.15 | 11000 | inf | 0.2312 |
| 0.8809 | 3.44 | 12000 | inf | 0.2285 |
| 0.8717 | 3.73 | 13000 | inf | 0.2263 |
| 0.8787 | 4.01 | 14000 | inf | 0.2218 |
| 0.8567 | 4.3 | 15000 | inf | 0.2193 |
| 0.8488 | 4.59 | 16000 | inf | 0.2187 |
| 0.8359 | 4.87 | 17000 | inf | 0.2172 |
Training continued with checkpoint from STEP 17000:
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| / | 5.16 | 18000 | inf | 0.2176 |
| / | 5.45 | 19000 | inf | 0.2181 |
| / | 5.73 | 20000 | inf | 0.2155 |
| / | 6.02 | 21000 | inf | 0.2140 |
| / | 6.31 | 22000 | inf | 0.2124 |
| / | 6.59 | 23000 | inf | 0.2117 |
| / | 6.88 | 24000 | inf | 0.2116 |
It achieves its best result on the validation set at step 24000:
- Wer: 0.2116
There was an issue with the validation loss computation, which is why it is reported as `inf` in the tables above.
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Plim/xls-r-300m-cv_8-fr --dataset mozilla-foundation/common_voice_8_0 --config fr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id Plim/xls-r-300m-cv_8-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
|
mrm8488/electricidad-small-finetuned-squadv1-es
|
mrm8488
| 2022-02-09T13:29:35Z | 23 | 1 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"question-answering",
"QA",
"SQuAD",
"es",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
language: es
thumbnail: https://imgur.com/uxAvBfh
tags:
- QA
- SQuAD
---
# Electricidad small + Spanish SQuAD v1 ⚡❓
[Electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) fine-tuned on [Spanish SQUAD v1.1 dataset](https://github.com/ccasimiro88/TranslateAlignRetrieve/tree/master/SQuAD-es-v1.1) for **Q&A** downstream task.
## Details of the downstream task (Q&A) - Dataset 📚
[SQuAD-es-v1.1](https://github.com/ccasimiro88/TranslateAlignRetrieve/tree/master/SQuAD-es-v1.1)
| Dataset split | # Samples |
| ------------- | --------- |
| Train | 130 K |
| Test | 11 K |
## Model training 🏋️
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
python /content/transformers/examples/question-answering/run_squad.py \
--model_type electra \
--model_name_or_path 'mrm8488/electricidad-small-discriminator' \
--do_eval \
--do_train \
--do_lower_case \
--train_file '/content/dataset/train-v1.1-es.json' \
--predict_file '/content/dataset/dev-v1.1-es.json' \
--per_gpu_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 10 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir '/content/electricidad-small-finetuned-squadv1-es' \
--overwrite_output_dir \
--save_steps 1000
```
## Test set Results 🧾
| Metric | # Value |
| ------ | --------- |
| **EM** | **46.82** |
| **F1** | **64.79** |
```json
{
'exact': 46.82119205298013,
'f1': 64.79435260021918,
'total': 10570,
'HasAns_exact': 46.82119205298013,
'HasAns_f1': 64.79435260021918,
'HasAns_total': 10570,
'best_exact': 46.82119205298013,
'best_exact_thresh': 0.0,
'best_f1': 64.79435260021918,
'best_f1_thresh': 0.0
}
```
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mrm8488/electricidad-small-finetuned-squadv1-es",
tokenizer="mrm8488/electricidad-small-finetuned-squadv1-es"
)
context = "Manuel ha creado una versión del modelo Electra small en español que alcanza una puntuación F1 de 65 en el dataset SQUAD-es y sólo pesa 50 MB"
q1 = "Cuál es su marcador F1?"
q2 = "¿Cuál es el tamaño del modelo?"
q3 = "¿Quién lo ha creado?"
q4 = "¿Que es lo que ha hecho Manuel?"
questions = [q1, q2, q3, q4]
for question in questions:
result = qa_pipeline({
'context': context,
'question': question})
print(result)
# Output:
{'score': 0.14836778166355025, 'start': 98, 'end': 100, 'answer': '65'}
{'score': 0.32219420810758237, 'start': 136, 'end': 140, 'answer': '50 MB'}
{'score': 0.9672326951118713, 'start': 0, 'end': 6, 'answer': 'Manuel'}
{'score': 0.23552458113848118, 'start': 10, 'end': 53, 'answer': 'creado una versión del modelo Electra small'}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
victen/xlm-roberta-base-finetuned-panx-de
|
victen
| 2022-02-09T10:49:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8591260810195721
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.257 | 1.0 | 525 | 0.1512 | 0.8302 |
| 0.1305 | 2.0 | 1050 | 0.1401 | 0.8447 |
| 0.0817 | 3.0 | 1575 | 0.1352 | 0.8591 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
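A minimal inference sketch, assuming the checkpoint exposes a standard token-classification head with the PAN-X (WikiANN) German entity labels:
```python
from transformers import pipeline

# Sketch: aggregation_strategy groups word pieces back into whole entity spans.
ner = pipeline(
    "token-classification",
    model="victen/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```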
|
ArBert/bert-base-uncased-finetuned-ner
|
ArBert
| 2022-02-09T10:46:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0905
- Precision: 0.9068
- Recall: 0.9200
- F1: 0.9133
- Accuracy: 0.9787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1266 | 1.0 | 1123 | 0.0952 | 0.8939 | 0.8869 | 0.8904 | 0.9742 |
| 0.0741 | 2.0 | 2246 | 0.0866 | 0.8936 | 0.9247 | 0.9089 | 0.9774 |
| 0.0496 | 3.0 | 3369 | 0.0905 | 0.9068 | 0.9200 | 0.9133 | 0.9787 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
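The precision, recall and F1 reported above are entity-level scores; a minimal sketch of how such metrics can be reproduced with the `evaluate` library's `seqeval` metric (the BIO-tagged sequences below are hypothetical and only illustrate the call):
```python
# pip install evaluate seqeval
import evaluate

seqeval = evaluate.load("seqeval")

# Hypothetical BIO-tagged sequences for illustration only.
references = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
predictions = [["B-PER", "I-PER", "O", "O", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```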
|
huggingtweets/redbirdrabbit
|
huggingtweets
| 2022-02-09T10:01:28Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/redbirdrabbit/1644400884100/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1477886193098387457/_uzaENCR_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Aura Irving BATTING FOR THE PANDEMONIUM ARTISTS</div>
<div style="text-align: center; font-size: 14px;">@redbirdrabbit</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Aura Irving BATTING FOR THE PANDEMONIUM ARTISTS.
| Data | Aura Irving BATTING FOR THE PANDEMONIUM ARTISTS |
| --- | --- |
| Tweets downloaded | 944 |
| Retweets | 203 |
| Short tweets | 142 |
| Tweets kept | 599 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/t44z5bql/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @redbirdrabbit's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/37dlriu5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/37dlriu5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/redbirdrabbit')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Duael/RRHood
|
Duael
| 2022-02-09T04:54:18Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: artistic-2.0
---
|
thyagosme/bert-base-cased-wikitext2
|
thyagosme
| 2022-02-09T03:44:53Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0902 | 1.0 | 2346 | 7.0492 |
| 6.9027 | 2.0 | 4692 | 6.8692 |
| 6.8553 | 3.0 | 7038 | 6.8882 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
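For reference, a validation loss of 6.8517 corresponds to a perplexity of roughly exp(6.8517) ≈ 945, assuming the loss is the usual per-token cross-entropy in nats. A minimal fill-mask sketch, assuming the checkpoint loads through the standard pipeline:
```python
from transformers import pipeline

# Sketch: bert-base-cased uses the [MASK] token for masked-language-modelling.
fill = pipeline("fill-mask", model="thyagosme/bert-base-cased-wikitext2")

for pred in fill("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 4))
```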
|
huggingtweets/abnuo113
|
huggingtweets
| 2022-02-09T01:13:41Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484369552498573313/MP-r9WvV_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹</div>
<div style="text-align: center; font-size: 14px;">@abnuo113</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹.
| Data | 𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹𒐹 |
| --- | --- |
| Tweets downloaded | 3213 |
| Retweets | 316 |
| Short tweets | 1545 |
| Tweets kept | 1352 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7huohook/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @abnuo113's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2j8kmobh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2j8kmobh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/abnuo113')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt
|
vuiseng9
| 2022-02-08T22:58:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
This model is a downstream optimization of [```vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt```](https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt) using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimizations include:
1. NNCF Quantize-Aware Training - Symmetric 8-bit for both weight and activation on all learnable layers.
2. Custom distillation with large model ```bert-large-uncased-whole-word-masking-finetuned-squad```
```
eval_exact_match = 80.7001
eval_f1 = 87.9777
eval_samples = 10784
```
# Setup
```bash
# OpenVINO/NNCF
git clone https://github.com/vuiseng9/nncf && cd nncf
git checkout tld-poc
git reset --hard 1dec7afe7a4b567c059fcf287ea2c234980fded2
python setup.py develop
pip install -r examples/torch/requirements.txt
# Huggingface nn_pruning
git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning
git checkout reproduce-evaluation
git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446
pip install -e ".[dev]"
# Huggingface Transformers
git clone https://github.com/vuiseng9/transformers && cd transformers
git checkout tld-poc
git reset --hard 10a1e29d84484e48fd106f58957d9ffc89dc43c5
pip install -e .
head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {}
# Additional dependencies
pip install onnx
```
# Train
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt
BASE_MODEL=/path/to/cloned_repo_above #to-revise
wget https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt/raw/main/nncf_bert_squad_qat.json
NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise
OUTROOT=/path/to/train_output_root #to-revise
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
RUNID=bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt
cd $WORKDIR
OUTDIR=$OUTROOT/$RUNID
mkdir -p $OUTDIR
export CUDA_VISIBLE_DEVICES=0
NEPOCH=5
python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--optimize_model_before_eval \
--optimized_checkpoint $BASE_MODEL \
--dataset_name squad \
--do_eval \
--do_train \
--evaluation_strategy steps \
--eval_steps 250 \
--learning_rate 3e-5 \
--lr_scheduler_type cosine_with_restarts \
--warmup_ratio 0.25 \
--cosine_cycles 1 \
--teacher bert-large-uncased-whole-word-masking-finetuned-squad \
--teacher_ratio 0.9 \
--num_train_epochs $NEPOCH \
--per_device_eval_batch_size 128 \
--per_device_train_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 250 \
--nncf_config $NNCF_CFG \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR
```
# Eval
This repo must be cloned locally.
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt
MODELROOT=/path/to/cloned_repo_above #to-revise
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--dataset_name squad \
--optimize_model_before_eval \
--qat_checkpoint $MODELROOT/checkpoint-26750 \
--nncf_config $MODELROOT/nncf_bert_squad_qat.json \
--to_onnx $OUTDIR/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt.onnx \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
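After the export above, the resulting ONNX graph can be loaded for inference; a minimal sketch with `onnxruntime`, assuming the exported model keeps the usual BERT input names and returns start/end logits as its two outputs (paths and texts below are placeholders):
```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

# Placeholder path to the file produced by --to_onnx above.
onnx_path = "bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt.onnx"
tokenizer = AutoTokenizer.from_pretrained("vuiseng9/bert-base-squadv1-block-pruning-hybrid")

question = "What does NNCF provide?"
context = "NNCF provides quantization-aware training for neural networks."
enc = tokenizer(question, context, return_tensors="np")

sess = ort.InferenceSession(onnx_path)
input_names = {i.name for i in sess.get_inputs()}
feeds = {k: v for k, v in enc.items() if k in input_names}

start_logits, end_logits = sess.run(None, feeds)  # assumes exactly two outputs
start, end = int(np.argmax(start_logits)), int(np.argmax(end_logits))
print(tokenizer.decode(enc["input_ids"][0][start:end + 1]))
```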
### tile-alignment
To evaluate a tile-alignment checkpoint, add ```--tile_alignment``` and point ```--qat_checkpoint``` to a checkpoint with the 'tilealigned' postfix. Use branch ```tld-poc``` with commit id ```c525c52cq```.
|
vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt
|
vuiseng9
| 2022-02-08T22:58:08Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
This model is a downstream optimization of [```vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt```](https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt) using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimizations include:
1. Magnitude sparsification at 57.92% upon initialization, so that the overall sparsity across the linear layers of bert-base reaches 90%. Parameters are ranked globally by their absolute norm. Only the linear layers of self-attention and the FFN are targeted.
2. NNCF Quantize-Aware Training - Symmetric 8-bit for both weight and activation on all learnable layers.
3. Custom distillation with large model ```bert-large-uncased-whole-word-masking-finetuned-squad```
```
eval_exact_match = 80.4541
eval_f1 = 87.6832
eval_samples = 10784
```
# Setup
```bash
# OpenVINO/NNCF
git clone https://github.com/vuiseng9/nncf && cd nncf
git checkout tld-poc
git reset --hard 1dec7afe7a4b567c059fcf287ea2c234980fded2
python setup.py develop
pip install -r examples/torch/requirements.txt
# Huggingface nn_pruning
git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning
git checkout reproduce-evaluation
git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446
pip install -e ".[dev]"
# Huggingface Transformers
git clone https://github.com/vuiseng9/transformers && cd transformers
git checkout tld-poc
git reset --hard 10a1e29d84484e48fd106f58957d9ffc89dc43c5
pip install -e .
head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {}
# Additional dependencies
pip install onnx
```
# Train
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt
BASE_MODEL=/path/to/cloned_repo_above #to-revise
wget https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt/raw/main/nncf_bert_squad_sparsity.json
NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise
OUTROOT=/path/to/train_output_root #to-revise
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
RUNID=bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt
cd $WORKDIR
OUTDIR=$OUTROOT/$RUNID
mkdir -p $OUTDIR
export CUDA_VISIBLE_DEVICES=0
NEPOCH=5
python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--optimize_model_before_eval \
--optimized_checkpoint $BASE_MODEL \
--dataset_name squad \
--do_eval \
--do_train \
--evaluation_strategy steps \
--eval_steps 250 \
--learning_rate 3e-5 \
--lr_scheduler_type cosine_with_restarts \
--warmup_ratio 0.25 \
--cosine_cycles 1 \
--teacher bert-large-uncased-whole-word-masking-finetuned-squad \
--teacher_ratio 0.9 \
--num_train_epochs $NEPOCH \
--per_device_eval_batch_size 128 \
--per_device_train_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 250 \
--nncf_config $NNCF_CFG \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR
```
# Eval
This repo must be cloned locally.
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt
MODELROOT=/path/to/cloned_repo_above #to-revise
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--dataset_name squad \
--optimize_model_before_eval \
--qat_checkpoint $MODELROOT/checkpoint-21750 \
--nncf_config $MODELROOT/nncf_bert_squad_sparsity.json \
--to_onnx $OUTDIR/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt.onnx \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
### tile-alignment
To evaluate a tile-alignment checkpoint, add ```--tile_alignment``` and point ```--qat_checkpoint``` to a checkpoint with the 'tilealigned' postfix. Use branch ```tld-poc``` with commit id ```c525c52cq```.
|
jgammack/MTL-bert-base-uncased-ww-squad
|
jgammack
| 2022-02-08T22:16:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: MTL-bert-base-uncased-ww-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-bert-base-uncased-ww-squad
This model is a fine-tuned version of [jgammack/MTL-bert-base-uncased-ww](https://huggingface.co/jgammack/MTL-bert-base-uncased-ww) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
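A minimal usage sketch, assuming the checkpoint works with the standard `question-answering` pipeline (the question and context below are illustrative only):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="jgammack/MTL-bert-base-uncased-ww-squad")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="MTL-bert-base-uncased-ww was fine-tuned on the SQuAD question answering dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```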
|
espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix
|
espnet
| 2022-02-08T18:36:12Z | 1 | 0 |
espnet
|
[
"espnet",
"audio",
"speech-translation",
"dataset:iwslt22_dialect",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- speech-translation
language: noinfo
datasets:
- iwslt22_dialect
license: cc-by-4.0
---
## ESPnet2 ST model
### `espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix`
This model was trained by Brian Yan using iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 77fce65312877a132bbae01917ad26b74f6e2e14
pip install -e .
cd egs2/iwslt22_dialect/st1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix
```
<!-- Generated by scripts/utils/show_st_results.sh -->
# RESULTS
## Environments
- date: `Tue Feb 8 13:29:21 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `77fce65312877a132bbae01917ad26b74f6e2e14`
- Commit date: `Tue Feb 8 10:48:10 2022 -0500`
## st_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe_tc1000_sp
### BLEU
|dataset|bleu_score|verbose_score|
|---|---|---|
|p3_st_model_valid.acc.ave|12.0|37.4/17.3/8.6/4.5 (BP = 0.952 ratio = 0.953 hyp_len = 40192 ref_len = 42181)|
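For reference, the verbose score lists the 1–4-gram precisions followed by the brevity penalty and length statistics; the reported BLEU is the geometric mean of those precisions scaled by BP:

$$\mathrm{BLEU} = \mathrm{BP}\cdot\Big(\prod_{n=1}^{4} p_n\Big)^{1/4} = 0.952\cdot(0.374\cdot 0.173\cdot 0.086\cdot 0.045)^{1/4}\approx 0.120,$$

i.e. the 12.0 shown in the table.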
## ST config
<details><summary>expand</summary>
```
config: conf/tuning/transformer_fisherlike_4gpu_bbins16m_fix.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/st_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe_tc1000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 36641
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 3
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 16000000
valid_batch_bins: null
train_shape_file:
- exp/st_stats_raw_bpe1000_sp/train/speech_shape
- exp/st_stats_raw_bpe1000_sp/train/text_shape.bpe
- exp/st_stats_raw_bpe1000_sp/train/src_text_shape.bpe
valid_shape_file:
- exp/st_stats_raw_bpe1000_sp/valid/speech_shape
- exp/st_stats_raw_bpe1000_sp/valid/text_shape.bpe
- exp/st_stats_raw_bpe1000_sp/valid/src_text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /scratch/iwslt22dump//raw/train_sp/wav.scp
- speech
- kaldi_ark
- - /scratch/iwslt22dump//raw/train_sp/text.tc.en
- text
- text
- - /scratch/iwslt22dump//raw/train_sp/text.tc.rm.ta
- src_text
- text
valid_data_path_and_name_and_type:
- - /scratch/iwslt22dump//raw/dev/wav.scp
- speech
- kaldi_ark
- - /scratch/iwslt22dump//raw/dev/text.tc.en
- text
- text
- - /scratch/iwslt22dump//raw/dev/text.tc.rm.ta
- src_text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 12.5
scheduler: noamlr
scheduler_conf:
model_size: 256
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- s
- ▁
- apo
- '&'
- ;
- ▁i
- ▁you
- t
- ▁it
- ▁the
- ▁and
- ▁to
- ▁that
- ▁a
- n
- a
- ▁he
- ▁me
- m
- d
- ▁yes
- ▁she
- ▁no
- ▁in
- ▁what
- ▁for
- ▁we
- ing
- ll
- ▁they
- re
- ▁are
- ▁did
- ▁god
- ▁is
- e
- ed
- ▁so
- ▁her
- ▁do
- ▁have
- ▁of
- ▁with
- ▁go
- ▁know
- ▁not
- ▁was
- ▁on
- ▁don
- y
- ▁him
- ▁one
- ▁like
- ▁there
- '%'
- ▁pw
- ▁be
- ▁at
- ▁told
- ▁good
- ▁will
- ▁my
- ▁all
- ▁or
- c
- er
- p
- ▁how
- ▁ah
- r
- ▁but
- ▁them
- ▁see
- ▁get
- ▁can
- i
- ▁when
- ▁going
- ▁about
- ▁mean
- ▁this
- k
- ▁your
- ▁by
- ▁if
- u
- ▁come
- ▁up
- ▁tell
- g
- ▁said
- ▁then
- ▁now
- ▁yeah
- o
- ▁out
- al
- ra
- ▁because
- ▁time
- ▁well
- ▁would
- ▁p
- ▁from
- h
- ar
- f
- ▁swear
- ▁went
- b
- ▁really
- or
- ▁want
- ri
- ▁home
- ▁work
- ve
- ▁take
- ▁got
- ▁just
- l
- ▁uh
- ▁why
- en
- ▁even
- ▁am
- ▁who
- ▁make
- ▁day
- '-'
- in
- ▁something
- ▁some
- ou
- ▁us
- ▁okay
- ▁where
- ▁does
- ▁has
- ▁thank
- ▁c
- ▁his
- th
- ▁back
- ▁fine
- ▁today
- ly
- ▁b
- ▁oh
- ▁doing
- ▁everything
- ▁here
- le
- ▁thing
- ▁two
- ▁anyway
- li
- ▁had
- ▁still
- ▁say
- ro
- ▁after
- ce
- ▁hello
- ▁ma
- ▁call
- w
- ▁listen
- il
- ▁should
- ▁girl
- ▁f
- z
- ▁too
- ▁let
- ▁understand
- ▁may
- ▁much
- ▁think
- ch
- ir
- ha
- ▁other
- ▁tomorrow
- ▁were
- ▁people
- es
- ▁year
- di
- ba
- ▁right
- el
- ▁things
- ▁house
- v
- ▁actually
- un
- ▁an
- ▁give
- ▁only
- ▁better
- pe
- ▁need
- ▁buy
- ▁de
- ne
- ▁ha
- ur
- ion
- ▁made
- la
- ▁willing
- ▁nothing
- ▁called
- ▁night
- ▁yesterday
- se
- ▁came
- ▁lot
- ter
- ▁g
- po
- ▁find
- ry
- ▁car
- ▁over
- ic
- ▁stay
- ▁eat
- ent
- ▁always
- ▁very
- 'on'
- ▁put
- ▁ramadan
- ▁those
- ▁hear
- is
- ▁talk
- ▁three
- ▁anything
- ▁mo
- ▁little
- ▁been
- ▁already
- fi
- ation
- ke
- ▁first
- ▁look
- it
- ▁won
- ▁mom
- ▁way
- ▁before
- ▁ok
- ▁last
- fa
- ▁cook
- vi
- ▁hi
- ▁same
- ▁thought
- ▁also
- um
- ate
- ▁money
- ▁start
- ▁place
- us
- ▁morning
- ▁could
- ▁ask
- ▁bring
- ▁bit
- ▁lo
- ▁leave
- ▁man
- ▁left
- ine
- ▁days
- ge
- ▁la
- ▁week
- ▁friend
- ▁problem
- ▁sister
- ▁allah
- ▁feel
- ▁every
- ▁more
- fe
- ▁long
- ▁hundred
- ▁j
- ▁eh
- ho
- ca
- em
- ▁talking
- ▁exam
- ▁next
- ▁new
- ▁fun
- ▁took
- ▁alright
- co
- ▁w
- ▁um
- ▁eid
- ▁brother
- ▁our
- gh
- ow
- ▁o
- ▁four
- ni
- wa
- ▁else
- ▁finish
- bo
- ▁sleep
- ▁bless
- ▁dear
- ▁since
- ▁play
- ▁name
- hi
- ▁coming
- ▁many
- et
- ▁usual
- ▁con
- ▁maybe
- ▁off
- bi
- ▁than
- ▁any
- ▁mother
- ▁son
- om
- ▁their
- ▁keep
- ▁dinner
- ▁ten
- ▁half
- ▁help
- ▁bad
- and
- ▁pass
- ▁hot
- ▁guy
- ▁least
- ▁down
- ▁bought
- ▁dinars
- ▁working
- ▁around
- ▁normal
- ▁poor
- ▁stuff
- ▁hope
- ▁used
- ▁again
- ▁bro
- ul
- ▁phone
- ▁ex
- ▁done
- ▁six
- ▁na
- ▁month
- ▁tired
- ▁check
- ▁show
- ▁together
- oo
- ▁later
- ▁past
- ▁five
- ▁watch
- ya
- ▁coffee
- ment
- ut
- ▁plan
- ▁great
- ▁daughter
- j
- ▁another
- side
- ▁change
- ▁yet
- ting
- ▁until
- ▁honestly
- ▁whole
- ol
- ▁care
- ▁sure
- able
- id
- ▁big
- ▁spend
- ▁exactly
- ▁boy
- ▁course
- ▁end
- ▁please
- ▁started
- he
- up
- ▁found
- ▁saw
- ▁family
- ▁asked
- ▁enough
- ▁during
- ▁rest
- ▁which
- ▁gave
- ▁true
- ▁while
- ▁job
- ▁el
- ▁each
- ▁away
- ▁kids
- ▁goes
- less
- ▁twenty
- ▁eight
- ▁someone
- ▁cha
- ▁clothes
- ah
- ▁myself
- ▁nice
- ▁late
- ▁old
- ▁real
- age
- ant
- ▁fast
- ▁add
- ▁hard
- ▁these
- ful
- im
- ▁close
- ive
- ▁dad
- ▁pay
- ies
- ▁dude
- ▁alone
- ▁far
- ance
- ▁dis
- ▁seven
- ▁isn
- ▁pro
- our
- ▁thousand
- ▁break
- ▁hour
- ▁wait
- ▁brought
- ▁open
- ▁un
- ▁wedding
- ▁walk
- ▁father
- ▁ka
- ▁second
- x
- ▁saturday
- ▁salad
- ▁win
- ▁everyone
- ▁water
- ▁tunis
- ▁remember
- ity
- ▁wake
- ▁minute
- ▁school
- ▁sunday
- ▁own
- ▁shop
- ▁cold
- ▁meet
- ▁wear
- ever
- ▁send
- ▁early
- ▁gra
- tic
- ▁short
- ▁use
- ▁sometimes
- hou
- ▁love
- ▁prepare
- ▁sea
- ▁study
- ure
- ▁com
- qui
- ▁hand
- ▁both
- ja
- ▁summer
- ▁wrong
- ▁wanted
- che
- ▁miss
- ▁try
- ▁iftar
- ▁yourself
- q
- ▁live
- war
- ▁expensive
- ▁getting
- ▁waiting
- ▁once
- ▁kh
- ▁forgot
- ▁nine
- ▁anymore
- ▁soup
- ▁uncle
- ▁beach
- ▁saying
- ▁into
- ▁having
- ▁brik
- ▁room
- ▁food
- ▁visit
- ▁matter
- ▁thirty
- ▁taking
- ▁rain
- ▁aunt
- ▁never
- ▁pick
- ▁tunisia
- ▁health
- ▁head
- ▁cut
- ▁fasting
- ▁sick
- ▁friday
- ▁forget
- ▁monday
- ▁become
- ▁dress
- ated
- ▁most
- wi
- ▁hang
- ▁life
- ▁fish
- ▁happy
- ▁delicious
- ▁deal
- ▁finished
- ble
- ▁studying
- ▁weather
- ▁making
- ▁cost
- ▁bl
- ▁stayed
- ▁guess
- ▁teach
- ▁stop
- ▁near
- ▁watching
- ▁without
- ▁imagine
- ▁seriously
- fl
- ▁speak
- ▁idea
- ▁must
- ▁normally
- ▁turn
- ize
- ▁clean
- ▁tv
- ▁meat
- ▁woke
- ▁example
- ▁easy
- ▁sent
- ▁sell
- over
- ▁fifty
- ▁amazing
- ▁beautiful
- ▁whatever
- ▁enjoy
- ▁talked
- ▁believe
- ▁thinking
- ▁count
- ▁almost
- ▁longer
- ▁afternoon
- ▁hair
- ▁front
- ▁earlier
- ▁mind
- ▁kind
- ▁tea
- ▁best
- ▁rent
- ▁picture
- ▁cooked
- ▁price
- ight
- ▁soon
- ▁woman
- ▁otherwise
- ▁happened
- ▁story
- ▁luck
- ▁high
- ▁happen
- ▁arrive
- ▁paper
- ga
- ▁quickly
- ▁looking
- ub
- ▁number
- ▁staying
- ▁sit
- man
- ack
- ▁important
- ▁either
- ▁person
- ▁small
- ▁free
- ▁crazy
- ▁playing
- ▁kept
- ▁part
- ▁game
- law
- ▁till
- uck
- ▁ready
- ▁might
- ▁gone
- ▁full
- ▁fix
- ▁subject
- ▁laugh
- ▁doctor
- ▁welcome
- ▁eleven
- ▁sleeping
- ▁heat
- ▁probably
- ▁such
- ▁café
- ▁fat
- ▁sweet
- ▁married
- ▁drink
- ▁move
- ▁outside
- ▁especially
- ▁group
- ji
- ▁market
- ▁through
- ▁train
- ▁protect
- ▁turned
- ▁red
- ▁busy
- ▁light
- ▁noise
- ▁street
- ▁manage
- ▁piece
- ▁sitting
- gue
- ▁sake
- ▁party
- ish
- ▁young
- ▁case
- ▁cool
- huh
- ▁marwa
- ▁drive
- ▁pray
- clock
- ▁couscous
- ▁spent
- ▁felt
- ▁hopefully
- ▁everybody
- ▁living
- ▁pain
- line
- ▁between
- ▁match
- ▁prayer
- que
- ian
- ▁facebook
- ▁spi
- ▁eye
- ▁children
- ▁tonight
- ▁mohamed
- ▁understood
- ▁black
- ▁husband
- ▁rid
- ▁kitchen
- ▁face
- ▁swim
- ▁kid
- ▁invite
- ▁cup
- ▁grilled
- ▁wife
- ▁cousin
- ▁drop
- ▁wow
- ▁table
- ▁du
- ▁bored
- ▁neighborhood
- ▁agree
- ▁bread
- ▁hamma
- ▁straight
- ▁tuesday
- ▁anyone
- ▁lunch
- ade
- ▁himself
- ▁gather
- ▁wish
- ▁fifteen
- ▁wednesday
- ▁die
- ▁thursday
- ▁color
- ▁asleep
- ▁different
- ▁whether
- ▁ago
- ▁middle
- ▁class
- ▁cake
- shirt
- ▁fight
- ▁clear
- ▁test
- ▁plus
- ▁sousse
- ▁beginning
- ▁result
- ▁learn
- ▁crowded
- ▁slept
- ▁shoes
- ▁august
- ▁pretty
- ▁white
- ▁apparently
- ▁reach
- ▁mariem
- ▁return
- ▁road
- ▁million
- ▁stand
- ▁paid
- ▁word
- ious
- ▁few
- ▁breakfast
- ▁post
- ▁kilo
- ▁chicken
- ▁grade
- ▁read
- ▁accept
- ▁birthday
- ▁exhaust
- ▁point
- ▁july
- ▁patience
- ▁studies
- ▁trouble
- ▁along
- ▁worry
- ▁follow
- ▁hurt
- ▁afraid
- ▁trip
- ▁ahmed
- ▁remain
- ▁succeed
- ▁mercy
- ▁difficult
- ▁weekend
- ▁answer
- ▁cheap
- ▁repeat
- ▁auntie
- ▁sign
- ▁hold
- ▁under
- ▁olive
- ▁mahdi
- ▁sfax
- ▁annoy
- ▁dishes
- ▁message
- ▁business
- ▁french
- ▁serious
- ▁travel
- ▁office
- ▁wonder
- ▁student
- ▁internship
- ▁pepper
- ▁knew
- ▁kill
- ▁sauce
- ▁herself
- ▁hammamet
- ▁damn
- ▁mix
- ▁suit
- ▁medicine
- ▁remove
- ▁gonna
- ▁company
- ▁quarter
- ▁shopping
- ▁correct
- ▁throw
- ▁grow
- ▁voice
- ▁series
- gotten
- ▁taste
- ▁driving
- ▁hospital
- ▁sorry
- ▁aziz
- ▁milk
- ▁green
- ▁baccalaureate
- ▁running
- ▁lord
- ▁explain
- ▁angry
- ▁build
- ▁fruit
- ▁photo
- é
- ▁crying
- ▁baby
- ▁store
- ▁project
- ▁france
- ▁twelve
- ▁decide
- ▁swimming
- ▁world
- ▁preparing
- ▁special
- ▁session
- ▁behind
- ▁vegetable
- ▁strong
- ▁fatma
- ▁treat
- ▁cream
- ▁situation
- ▁settle
- ▁totally
- ▁stopped
- ▁book
- ▁honest
- ▁solution
- ▁vacation
- ▁cheese
- ▁ahead
- ▁sami
- ▁focus
- ▁scared
- ▁club
- ▁consider
- ▁final
- ▁naturally
- ▁barely
- ▁issue
- ▁floor
- ▁birth
- ▁almighty
- ▁engagement
- ▁blue
- ▁empty
- ▁soccer
- ▁prophet
- ▁ticket
- ▁indeed
- ▁write
- ▁present
- ▁patient
- ▁available
- ▁holiday
- ▁leaving
- ▁became
- ▁reason
- ▁apart
- ▁impossible
- ▁shame
- ▁worried
- ▁body
- ▁continue
- ▁program
- ▁stress
- ▁arabic
- ▁round
- ▁taxi
- ▁transport
- ▁third
- ▁certain
- ▁downstairs
- ▁neighbor
- ▁directly
- ▁giving
- ▁june
- ▁mini
- ▁upstairs
- ▁mistake
- ▁period
- ▁catch
- ▁buddy
- ▁success
- ▁tajine
- ▁excuse
- ▁organize
- ▁question
- ▁suffer
- ▁remind
- ▁university
- ▁downtown
- ▁sugar
- ▁twice
- ▁women
- ▁couple
- ▁everyday
- ▁condition
- ▁obvious
- ▁nobody
- ▁complete
- ▁stomach
- ▁account
- ▁september
- ▁choose
- ▁bottle
- ▁figure
- ▁instead
- ▁salary
- '0'
- '1'
- '3'
- '2'
- '5'
- '7'
- '4'
- '9'
- '8'
- /
- °
- '6'
- è
- $
- ï
- <sos/eos>
src_token_list:
- <blank>
- <unk>
- ّ
- ي
- ا
- ِ
- ل
- َ
- و
- ه
- ة
- م
- ر
- ك
- ▁ما
- ُ
- ب
- ش
- د
- ت
- ▁في
- َّ
- ▁ن
- ▁ي
- ▁ت
- ن
- ▁لا
- ح
- ▁ه
- س
- وا
- ▁م
- ف
- ▁إي
- ع
- ▁ب
- ها
- ط
- ى
- ق
- ▁الل
- ▁أ
- ج
- ▁والل
- ▁و
- ▁إيه
- ▁ا
- ▁يا
- ز
- ▁تو
- ▁بش
- ص
- ▁أه
- خ
- ات
- ▁إنت
- ▁أنا
- نا
- ▁شن
- ▁ق
- ▁ش
- ▁ك
- يت
- ين
- ▁ف
- ار
- ▁قال
- ▁باهي
- ▁ع
- ▁من
- ▁ل
- ▁مش
- ▁كان
- ▁حت
- ▁ول
- هم
- ▁ر
- ان
- ▁س
- ض
- ني
- ▁بال
- ▁على
- ▁متاع
- ▁كي
- ▁ال
- ▁ح
- ▁كل
- ▁آنا
- ▁الم
- ▁خ
- ▁الس
- ▁وال
- ون
- ور
- ▁أم
- ▁هك
- ▁آش
- ▁الد
- ▁عاد
- ▁ج
- ▁معناها
- ▁مع
- اش
- ▁الص
- ▁نهار
- ▁لل
- لها
- ▁تي
- ▁رب
- ▁خاطر
- ▁أكهو
- غ
- ▁شي
- الل
- ام
- تها
- ▁ون
- ▁آك
- ▁فهمت
- وم
- ▁موش
- مشي
- ▁ص
- ▁اليوم
- ▁مر
- ست
- ▁الب
- ▁لاباس
- تلي
- ▁الكل
- ▁عال
- ذ
- ▁فم
- ▁الك
- ▁حاجة
- ▁شوي
- اكا
- ▁ياخي
- ▁هاني
- ▁صح
- اس
- ▁آه
- ▁برشة
- ▁الن
- ▁وت
- ▁الج
- لك
- ▁راهو
- سم
- ▁الح
- مت
- ▁الت
- ▁بعد
- اج
- عد
- ▁انشا
- وش
- لت
- ▁وين
- ث
- ▁ولا
- ▁باش
- ▁فيها
- نت
- ▁إ
- ▁الأ
- ▁الف
- ▁إم
- ▁واحد
- ▁ألو
- ▁عندي
- ▁أك
- ▁خل
- ▁وي
- ▁تعمل
- أ
- ▁ريت
- ▁وأ
- ▁تعرف
- بت
- ▁الع
- ▁مشيت
- ▁وه
- ▁حاصيلو
- ▁بالل
- ▁نعمل
- ▁غ
- ▁تجي
- ▁يجي
- ▁كيفاش
- ▁عملت
- ظ
- اك
- ▁هاو
- ▁اش
- ▁قد
- ▁نق
- ▁د
- ▁زادا
- ▁فيه
- رة
- ▁بر
- ▁الش
- ▁ز
- ▁كيما
- ▁الا
- ند
- عم
- ▁نح
- ▁بنتي
- ▁نمشي
- ▁عليك
- ▁نعرفش
- ▁كهو
- ▁وم
- ▁ط
- تي
- ▁خير
- ▁آ
- مش
- ▁عليه
- له
- حت
- ▁إيا
- ▁أحنا
- ▁تع
- الا
- عب
- ▁ديما
- ▁تت
- ▁جو
- ▁مالا
- ▁أو
- ▁قلتلك
- ▁معنتها
- لنا
- ▁شكون
- ▁تحب
- بر
- ▁الر
- ▁وا
- ▁الق
- اء
- ▁عل
- ▁البارح
- ▁وخ
- ▁سافا
- ▁هوما
- ▁ولدي
- ▁
- ▁نعرف
- يف
- رت
- ▁وب
- ▁روح
- ▁علاش
- ▁هاذاك
- ▁رو
- وس
- ▁جا
- ▁كيف
- طر
- ▁غادي
- يكا
- عمل
- ▁نحب
- ▁عندك
- ▁وما
- ▁فر
- اني
- ▁قلتله
- ▁الط
- فر
- ▁دار
- ▁عليها
- ▁يعمل
- ▁نت
- ▁تح
- باح
- ▁ماهو
- ▁وكل
- ▁وع
- قت
- ▁فهمتك
- عر
- ▁وس
- ▁تر
- ▁سي
- يلة
- ▁قلت
- ▁رمضان
- صل
- ▁آما
- ▁الواحد
- ▁بيه
- ▁ثلاثة
- ▁فهمتني
- ▁ها
- بط
- ▁مازال
- قل
- ▁بالك
- ▁معناتها
- ▁ور
- ▁قلتلها
- ▁يس
- رب
- ▁ام
- ▁وبعد
- ▁الث
- ▁وإنت
- ▁بحذا
- ▁لازم
- ْ
- ▁بن
- قرا
- سك
- ▁يت
- خل
- ▁فه
- عت
- ▁هاك
- ▁تق
- ▁قبل
- ▁وك
- ▁نقول
- ▁الز
- حم
- ▁عادش
- حكي
- وها
- بة
- نس
- طل
- ▁علاه
- ذا
- ▁سا
- ▁طل
- الي
- ▁يق
- ▁دو
- حوا
- حد
- ▁نشوف
- نة
- ▁لي
- ▁تك
- ▁نا
- ▁هاذ
- ▁خويا
- ▁المر
- ▁وينك
- ▁البر
- ▁أتو
- ينا
- ▁حل
- ولي
- ▁ثم
- ▁عم
- ▁آي
- ▁قر
- از
- ▁وح
- كش
- بعة
- ▁كيفاه
- ▁نع
- ▁الحمدلله
- ▁ياسر
- ▁الخ
- ▁معاك
- ▁معاه
- ▁تقول
- دة
- ▁حكاية
- تش
- ▁حس
- ▁غدوا
- ▁بالحق
- روا
- وز
- ▁تخ
- ▁العيد
- رجع
- ▁بالي
- ▁جات
- ▁وج
- حة
- ▁وش
- ▁آخر
- ▁طا
- ▁مت
- لقا
- تك
- ▁مس
- ▁راني
- كون
- ▁صاحب
- ▁هاكا
- ▁قول
- ▁عر
- ▁عنده
- ▁يلزم
- ▁هاذا
- ▁يخ
- ▁وقتاش
- ▁وقت
- بع
- ▁العش
- ▁هاذي
- هاش
- ينة
- ▁هاذاكا
- عطي
- ▁تنج
- ▁باهية
- نيا
- فت
- ▁يحب
- ▁تف
- ▁أهلا
- وف
- ▁غدوة
- ▁بيك
- ▁بد
- عن
- ▁در
- ▁ننج
- هار
- ▁الحكاية
- مون
- وق
- ▁نورمال
- ▁عندها
- خر
- ▁بو
- ▁حب
- ▁آكا
- ▁وف
- ▁هاذيكا
- ▁ديجا
- ▁وق
- ▁طي
- لتل
- بعث
- ▁تص
- رك
- ▁مانيش
- ▁العادة
- ▁شوف
- ضر
- ▁يمشي
- ▁نعملوا
- ▁عرفت
- ▁زال
- ▁متع
- ▁عمل
- ▁بيها
- ▁نحكي
- اع
- ▁نج
- معة
- ▁والكل
- عناها
- ▁يعي
- ▁نجي
- ستن
- ▁هاذيك
- ▁عام
- ▁فلوس
- قة
- تين
- ▁بالقدا
- لهم
- ▁تخدم
- ▁ٱ
- ▁شيء
- ▁راهي
- ▁جاب
- ولاد
- ابل
- ▁ماك
- عة
- ▁نمشيوا
- وني
- شري
- بار
- انس
- ▁وقتها
- ▁جديد
- ▁يز
- ▁كر
- ▁حاسيلو
- ▁شق
- ▁اه
- ▁سايي
- ▁انشالل
- رج
- مني
- ▁بلا
- ▁صحيح
- ▁غير
- ▁يخدم
- مان
- وكا
- ▁عند
- ▁قاعدة
- ▁تس
- ربة
- ▁راس
- ▁حط
- ▁نكل
- تني
- ▁الو
- سيون
- ▁عندنا
- ▁لو
- ▁ست
- صف
- ▁ض
- ▁كامل
- ▁نخدم
- ▁يبدا
- ▁دونك
- ▁أمور
- رات
- ▁تونس
- بدا
- ▁تحكي
- ▁سو
- ▁جاي
- ▁وحدة
- ▁ساعة
- حنا
- ▁بكري
- ▁إل
- ▁وبر
- ▁كم
- ▁تبدا
- ارة
- ادي
- رق
- لوا
- ▁يمكن
- ▁خاط
- ▁وص
- جين
- ▁هاذاي
- ▁هز
- قد
- ▁قل
- ▁وكهو
- ▁نص
- ▁دي
- لقى
- ▁وأنا
- سين
- ▁يح
- ▁ماشي
- ▁شو
- ▁خذيت
- امات
- ▁كنت
- خرج
- ▁لقيت
- رتاح
- كس
- ▁حاجات
- ▁مريق
- ▁مل
- ليفون
- اوا
- ▁شفت
- ▁عاملة
- ▁تن
- ▁والا
- سأل
- ▁حد
- ▁قاللك
- ▁العباد
- ▁عالاخ
- ▁وآك
- ▁ماني
- ▁ناخذ
- ▁حم
- ▁الإ
- ▁ماضي
- ▁ث
- الة
- ▁أخرى
- رين
- ▁تشوف
- ▁نخرج
- ▁أربعة
- ▁ألف
- نيش
- ▁هاي
- آ
- ▁فيك
- رشة
- ولة
- فلة
- ▁بابا
- ▁أما
- ▁روحي
- ▁فيهم
- ▁رج
- ▁ليك
- ونس
- يرة
- ▁وأكهو
- ندي
- ▁صار
- شك
- ▁نرو
- ▁آكهو
- ▁تش
- ▁غاديكا
- ▁معاها
- ▁لب
- ▁أذاكا
- ▁آني
- ▁يوم
- عملوا
- ▁نقعد
- دوا
- ▁عد
- سمع
- متني
- ▁الخدمة
- ▁مازلت
- ▁قعدت
- ايا
- ▁برك
- قعد
- ▁خرجت
- ضح
- ▁قالل
- ▁يقول
- ▁وفي
- ▁حق
- ختي
- ▁يعني
- خدم
- ▁جيت
- ▁نرمال
- طف
- ▁عجب
- ▁تقعد
- ▁مشينا
- اية
- ▁خدمة
- لدي
- روف
- ▁الفطر
- ▁مشكل
- ▁سل
- ▁وآنا
- الط
- ▁بالس
- ▁هانا
- ▁أوه
- ▁أذيكا
- ▁وإ
- ▁عليهم
- ▁حالة
- جت
- قضي
- ▁لق
- ▁ونصف
- سعة
- عطيه
- عاو
- خانة
- ▁مخ
- ▁شبيك
- بيعة
- ▁أهوك
- يني
- ▁تعد
- ▁خال
- ▁قريب
- ▁راك
- ▁قالت
- ▁لتو
- ▁أكثر
- اعة
- ▁يظهرلي
- ▁ماشية
- سمعني
- ▁نسيت
- ▁ينج
- ▁الحمدلل
- هدي
- ▁وشن
- ▁تطي
- ▁هنا
- ▁نسمع
- ▁إنتوما
- ▁نحكيلك
- ▁قاعد
- ▁اسمعني
- خرين
- إ
- ماعة
- ▁بالر
- ▁دا
- ▁عمر
- ▁نشري
- ▁قهوة
- ▁تبارك
- ▁صب
- ▁مشات
- غر
- ▁شريت
- ▁عامل
- ▁زوج
- ثنين
- ▁برب
- ريق
- ▁نكم
- ▁لم
- بيب
- ▁مياة
- ▁مالل
- ▁قعد
- ▁سخون
- قس
- ▁وحده
- ▁اسمع
- ▁خمسة
- ▁غالي
- ▁الأو
- رلي
- ▁العظيم
- ▁ترو
- تهم
- كري
- ▁نجيب
- ▁جملة
- قول
- ▁قلتلي
- ▁إيجا
- ▁يقعد
- ▁إيام
- ▁يعطيك
- ▁نخل
- ▁دب
- يمة
- رهبة
- ▁نهز
- ▁محم
- ▁بين
- غار
- ▁نحنا
- ▁بون
- ▁الغ
- ▁شهر
- ▁بار
- رقة
- ▁نطي
- ئ
- ترو
- ▁ملا
- ▁الكرهبة
- ▁باه
- ▁عالإخ
- ▁عباد
- ▁بلاصة
- ▁مشى
- بيع
- ▁نفس
- ▁عملنا
- ▁واح
- ▁أحلاه
- ▁بحذاك
- ▁لأ
- ▁دخ
- باب
- ▁ودر
- ▁غالب
- ▁ناكل
- ▁مثلا
- ء
- ▁راقد
- ▁تفر
- ▁الوقت
- ▁تاخذ
- حذا
- نتر
- ▁نبدا
- ▁حال
- ▁مريم
- الم
- ▁جمعة
- رجول
- ▁معايا
- ▁تخرج
- ▁باس
- ▁ساعات
- ▁عندهم
- ▁نتفر
- مسة
- ▁الجمعة
- بعين
- ▁أكاهو
- ▁ميش
- مراة
- ▁خذا
- ▁ظ
- ▁سيدي
- ▁معاي
- ▁شبيه
- ▁حكا
- ▁سف
- ▁بعضنا
- ▁بالض
- ▁ليلة
- ▁زعما
- ▁الحق
- مضان
- ▁صعيب
- ▁قالتلك
- ً
- ملة
- ▁بق
- عرف
- لاطة
- ▁خرج
- ▁أخت
- ▁تقوللي
- ▁معانا
- ▁صغير
- ▁إسمه
- ▁بعض
- ▁العام
- ▁علينا
- ▁يتع
- ▁فاش
- ▁شع
- ▁معاهم
- ▁يسالش
- ▁لهنا
- ▁سمعت
- ▁البار
- ▁نتصو
- ▁الاخ
- ▁وكان
- وبة
- دمة
- ▁كون
- ▁مبعد
- ▁تسمع
- ▁بعيد
- ▁تاكل
- ▁نلقا
- لامة
- لاثة
- ▁ذ
- ▁تحس
- ▁الواح
- ▁لدار
- ▁فاتت
- ▁تاو
- ▁أحوالك
- ▁عاملين
- ▁كبيرة
- عجب
- ▁بنت
- ▁بيدي
- ▁حكيت
- ▁تحط
- ▁مسكينة
- ▁هاذوكم
- ▁نزيد
- لاث
- ▁عشرة
- ▁عيني
- ▁تعب
- ▁ياكل
- ▁وزيد
- ▁طول
- ▁حمدلله
- ▁وقتاه
- ▁معناه
- ▁وآش
- ▁ووه
- ▁وواحد
- ▁نشوفوا
- ▁عيد
- ▁بصراحة
- ▁بحذانا
- ▁قاعدين
- ▁راجل
- ▁وحدي
- ▁وعشرين
- ▁لين
- ▁خايب
- ▁قالتله
- ▁تهز
- عيد
- ▁كبير
- ▁يعرف
- ▁عارف
- ▁الفلوس
- ▁زايد
- ▁خدمت
- ▁هاذوما
- ▁سلاطة
- ▁فارغة
- ▁ساعتين
- ▁تبد
- ▁راو
- ▁مائة
- ▁بعضهم
- ▁ظاهرلي
- ▁الفازة
- كتب
- ▁القهوة
- سبوك
- ▁زاد
- ▁ضرب
- حكيلي
- ▁فوق
- ▁عاود
- ▁راي
- ▁ومبعد
- ▁حوايج
- ▁دخلت
- ▁يقوللك
- ▁زيد
- ▁زلت
- لفزة
- ▁وقال
- ▁يهب
- ▁يلزمني
- ▁الحمد
- ▁أذي
- طبيعت
- ▁دورة
- ▁عالأقل
- ▁آذاك
- ▁وبال
- ▁الجاي
- عطيني
- ▁ياخذ
- ▁احكيلي
- ▁نهبط
- ▁رقدت
- بلاصة
- ▁عزيز
- ▁صغار
- ▁أقسم
- ▁جيب
- ▁وصلت
- ▁أحوال
- ▁جيست
- ▁جماعة
- سئل
- ▁خوذ
- ▁يهز
- ▁الأخرى
- ▁آلاف
- ▁إسمع
- ▁الحقيقة
- ▁ناقص
- ▁حاط
- ▁موجود
- عباد
- ▁آذيك
- ▁خارج
- ▁الخير
- ▁البنات
- بقى
- ▁طرف
- ▁سينون
- ▁ماذاب
- ▁البحر
- ▁نرقد
- مدلله
- ▁إيجى
- ▁خالتي
- ▁فازة
- ▁بريك
- ▁شريبتك
- ▁تطلع
- ؤ
- ▁المشكلة
- ▁طري
- ▁مادام
- ▁طلبت
- ▁يلعب
- ▁نعاود
- ▁وحدك
- ▁ظاهر
- ٱ
- ژ
- ٍ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
asr_weight: 0.3
mt_weight: 0.0
mtlalpha: 1.0
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
src_token_type: bpe
bpemodel: data/token_list/tgt_bpe_unigram1000/bpe.model
src_bpemodel: data/token_list/src_bpe_unigram1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/st_stats_raw_bpe1000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: transformer
encoder_conf:
input_layer: conv2d
num_blocks: 12
linear_units: 2048
dropout_rate: 0.1
output_size: 256
attention_heads: 4
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
num_blocks: 6
linear_units: 2048
dropout_rate: 0.1
extra_asr_decoder: transformer
extra_asr_decoder_conf:
input_layer: embed
num_blocks: 2
linear_units: 2048
dropout_rate: 0.1
extra_mt_decoder: transformer
extra_mt_decoder_conf:
input_layer: embed
num_blocks: 2
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- src_token_list
- token_list
version: 0.10.6a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Mofe/speech-sprint-test
|
Mofe
| 2022-02-08T18:32:00Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 207.6065
- Wer: 1.5484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
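The hyperparameters listed above correspond roughly to the following `transformers.TrainingArguments` configuration. This is only an illustrative sketch: the actual training script is not part of this card, and `output_dir` is a placeholder.
```python
from transformers import TrainingArguments

# Rough sketch of the settings listed above expressed as TrainingArguments.
training_args = TrainingArguments(
    output_dir="speech-sprint-test",   # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10,
    fp16=True,  # "Native AMP" mixed precision
)
```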
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
jgammack/MTL-bert-base-uncased-ww
|
jgammack
| 2022-02-08T17:50:13Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MTL-bert-base-uncased-ww
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-bert-base-uncased-ww
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2964 | 1.0 | 99 | 2.9560 |
| 3.0419 | 2.0 | 198 | 2.8336 |
| 2.8979 | 3.0 | 297 | 2.8009 |
| 2.8815 | 4.0 | 396 | 2.7394 |
| 2.8373 | 5.0 | 495 | 2.6813 |
| 2.741 | 6.0 | 594 | 2.6270 |
| 2.6877 | 7.0 | 693 | 2.5216 |
| 2.6823 | 8.0 | 792 | 2.5485 |
| 2.6326 | 9.0 | 891 | 2.5690 |
| 2.5976 | 10.0 | 990 | 2.6336 |
| 2.6009 | 11.0 | 1089 | 2.5919 |
| 2.5615 | 12.0 | 1188 | 2.4264 |
| 2.5826 | 13.0 | 1287 | 2.5562 |
| 2.5693 | 14.0 | 1386 | 2.5529 |
| 2.5494 | 15.0 | 1485 | 2.5300 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
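As a usage sketch (not part of the original card), a masked-language-model checkpoint like this one can be queried with the `fill-mask` pipeline; the example sentence below is arbitrary and only for illustration.
```python
from transformers import pipeline

# Sketch: rank candidate completions for the [MASK] token with this checkpoint.
fill_mask = pipeline("fill-mask", model="jgammack/MTL-bert-base-uncased-ww")
for prediction in fill_mask("The new panel reduced the [MASK] in the cabin."):
    print(prediction["token_str"], round(prediction["score"], 4))
```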
|
tau/tavbert-he
|
tau
| 2022-02-08T16:38:50Z | 60 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"language model",
"he",
"dataset:oscar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: he
tags:
- roberta
- language model
datasets:
- oscar
---
# TavBERT base model
A Hebrew BERT-style masked language model operating over characters, pre-trained by masking spans of characters, similarly to SpanBERT (Joshi et al., 2020).
### How to use
```python
import numpy as np
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer
model = AutoModelForMaskedLM.from_pretrained("tau/tavbert-he")
tokenizer = AutoTokenizer.from_pretrained("tau/tavbert-he")
def mask_sentence(sent, span_len=5):
start_pos = np.random.randint(0, len(sent) - span_len)
masked_sent = sent[:start_pos] + '[MASK]' * span_len + sent[start_pos + span_len:]
print("Masked sentence:", masked_sent)
output = model(**tokenizer.encode_plus(masked_sent,
return_tensors='pt'))['logits'][0][1:-1]
preds = [int(x) for x in torch.argmax(torch.softmax(output, axis=1), axis=1)[start_pos:start_pos + span_len]]
pred_sent = sent[:start_pos] + ''.join(tokenizer.convert_ids_to_tokens(preds)) + sent[start_pos + span_len:]
print("Model's prediction:", pred_sent)
```
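For example, the helper above can be applied to any Hebrew sentence longer than the chosen span length; the sentence below is only an illustrative placeholder.
```python
# Illustrative call: masks a random 5-character span and prints the model's reconstruction.
mask_sentence("שלום לכולם, מה שלומכם היום?", span_len=5)
```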
## Training data
OSCAR (Ortiz, 2019) Hebrew section (10 GB text, 20 million sentences).
|
jgammack/MTL-distilbert-base-uncased-squad
|
jgammack
| 2022-02-08T15:58:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: MTL-distilbert-base-uncased-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-distilbert-base-uncased-squad
This model is a fine-tuned version of [jgammack/MTL-distilbert-base-uncased](https://huggingface.co/jgammack/MTL-distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
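As a usage sketch (not part of the original card), this SQuAD-style checkpoint can be exercised with the `question-answering` pipeline; the question and context strings below are placeholders.
```python
from transformers import pipeline

# Sketch: extractive question answering with this checkpoint.
qa = pipeline("question-answering", model="jgammack/MTL-distilbert-base-uncased-squad")
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="The model was fine-tuned on the SQuAD dataset for three epochs.",
)
print(result["answer"], result["score"])
```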
|
jhonparra18/wav2vec2-xls-r-300m-spanish-large-noLM
|
jhonparra18
| 2022-02-08T13:27:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"es",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- "es"
- "robust-speech-event"
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-spanish-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-large
This model is a fine-tuned version of [tomascufaro/xls-r-es-test](https://huggingface.co/tomascufaro/xls-r-es-test) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1431
- Wer: 0.1197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1769 | 0.15 | 400 | 0.1795 | 0.1698 |
| 0.217 | 0.3 | 800 | 0.2000 | 0.1945 |
| 0.2372 | 0.45 | 1200 | 0.1985 | 0.1859 |
| 0.2351 | 0.6 | 1600 | 0.1901 | 0.1772 |
| 0.2269 | 0.75 | 2000 | 0.1968 | 0.1783 |
| 0.2284 | 0.9 | 2400 | 0.1873 | 0.1771 |
| 0.2014 | 1.06 | 2800 | 0.1840 | 0.1696 |
| 0.1988 | 1.21 | 3200 | 0.1904 | 0.1730 |
| 0.1919 | 1.36 | 3600 | 0.1827 | 0.1630 |
| 0.1919 | 1.51 | 4000 | 0.1788 | 0.1629 |
| 0.1817 | 1.66 | 4400 | 0.1755 | 0.1558 |
| 0.1812 | 1.81 | 4800 | 0.1795 | 0.1638 |
| 0.1808 | 1.96 | 5200 | 0.1762 | 0.1603 |
| 0.1625 | 2.11 | 5600 | 0.1721 | 0.1557 |
| 0.1477 | 2.26 | 6000 | 0.1735 | 0.1504 |
| 0.1508 | 2.41 | 6400 | 0.1708 | 0.1478 |
| 0.157 | 2.56 | 6800 | 0.1644 | 0.1466 |
| 0.1491 | 2.71 | 7200 | 0.1638 | 0.1445 |
| 0.1458 | 2.86 | 7600 | 0.1582 | 0.1426 |
| 0.1387 | 3.02 | 8000 | 0.1607 | 0.1376 |
| 0.1269 | 3.17 | 8400 | 0.1559 | 0.1364 |
| 0.1172 | 3.32 | 8800 | 0.1521 | 0.1335 |
| 0.1203 | 3.47 | 9200 | 0.1534 | 0.1330 |
| 0.1177 | 3.62 | 9600 | 0.1485 | 0.1304 |
| 0.1167 | 3.77 | 10000 | 0.1498 | 0.1302 |
| 0.1194 | 3.92 | 10400 | 0.1463 | 0.1287 |
| 0.1053 | 4.07 | 10800 | 0.1483 | 0.1282 |
| 0.098 | 4.22 | 11200 | 0.1498 | 0.1267 |
| 0.0958 | 4.37 | 11600 | 0.1461 | 0.1233 |
| 0.0946 | 4.52 | 12000 | 0.1444 | 0.1218 |
| 0.094 | 4.67 | 12400 | 0.1434 | 0.1206 |
| 0.0932 | 4.82 | 12800 | 0.1424 | 0.1206 |
| 0.0912 | 4.98 | 13200 | 0.1431 | 0.1197 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
tesemnikov-av/rubert-ner-toxicity
|
tesemnikov-av
| 2022-02-08T12:52:32Z | 80 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
widget:
- text: "Ну ты и придурок!!"
---
NER toxicity model
A fine-tuned version of the [cointegrated/rubert-tiny-toxicity](https://huggingface.co/cointegrated/rubert-tiny-toxicity) model, trained on data from [toxic_dataset_ner](https://huggingface.co/datasets/tesemnikov-av/toxic_dataset_ner).
Language: RU
```python
!pip install transformers > /dev/null
from transformers import (
AutoModelForTokenClassification,
AutoTokenizer,
pipeline
)
model = AutoModelForTokenClassification.from_pretrained('tesemnikov-av/rubert-ner-toxicity')
tokenizer = AutoTokenizer.from_pretrained('tesemnikov-av/rubert-ner-toxicity')
pipe = pipeline(model=model, tokenizer=tokenizer, task='ner', aggregation_strategy='average')
text = "Они охриневшие там все придурки!!"
print(text)
print(pipe(text))
```
|
imfiba1991/gpt2-wikitext2
|
imfiba1991
| 2022-02-08T10:53:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 13 | 8.1476 |
| No log | 2.0 | 26 | 7.4435 |
| No log | 3.0 | 39 | 7.2082 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
edugp/wav2vec2-xls-r-300m-cv8-es
|
edugp
| 2022-02-08T08:57:24Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-cv8-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-cv8-es
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2115
- eval_wer: 0.1931
- eval_runtime: 859.964
- eval_samples_per_second: 17.954
- eval_steps_per_second: 2.244
- epoch: 6.97
- step: 50000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
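As a usage sketch (not part of the original card), the checkpoint can be run through the `automatic-speech-recognition` pipeline, assuming 16 kHz input audio; `audio.wav` is a placeholder path.
```python
from transformers import pipeline

# Sketch: transcribe a Spanish audio file with this checkpoint.
asr = pipeline("automatic-speech-recognition", model="edugp/wav2vec2-xls-r-300m-cv8-es")
print(asr("audio.wav")["text"])
```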
|
nickmuchi/fb-bart-large-finetuned-trade-the-event-finance-summarizer
|
nickmuchi
| 2022-02-08T08:52:54Z | 13 | 14 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fb-bart-large-finetuned-trade-the-event-finance-summarizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fb-bart-large-finetuned-trade-the-event-finance-summarizer
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5103
- Rouge1: 57.6289
- Rouge2: 53.0421
- Rougel: 56.54
- Rougelsum: 56.5636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.8188 | 1.0 | 1688 | 1.7495 | 37.9629 | 22.0496 | 32.2942 | 32.4631 |
| 1.2551 | 2.0 | 3376 | 1.7559 | 38.5548 | 22.7487 | 32.9304 | 33.0737 |
| 0.8629 | 3.0 | 5064 | 1.9539 | 39.3912 | 22.8503 | 33.2043 | 33.4378 |
| 0.5661 | 4.0 | 6752 | 2.1153 | 39.1514 | 22.8104 | 33.1306 | 33.2955 |
| 0.3484 | 5.0 | 8440 | 2.3289 | 39.0093 | 22.4364 | 32.5868 | 32.7545 |
| 0.2009 | 6.0 | 10128 | 2.5754 | 39.0874 | 22.4444 | 32.6894 | 32.8413 |
| 0.1105 | 7.0 | 11816 | 2.8093 | 39.0905 | 22.4051 | 32.597 | 32.8183 |
| 0.0609 | 8.0 | 13504 | 0.5103 | 57.6289 | 53.0421 | 56.54 | 56.5636 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
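As a usage sketch (not part of the original card), the checkpoint can be used with the `summarization` pipeline; the article text below is a placeholder.
```python
from transformers import pipeline

# Sketch: summarize a financial news article with this checkpoint.
summarizer = pipeline(
    "summarization",
    model="nickmuchi/fb-bart-large-finetuned-trade-the-event-finance-summarizer",
)
article = (
    "Shares of ExampleCorp rose on Tuesday after the company reported quarterly "
    "revenue and earnings above analyst expectations and raised its full-year guidance."
)  # placeholder text
print(summarizer(article, max_length=64, min_length=10, do_sample=False)[0]["summary_text"])
```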
|
SetFit/deberta-v3-base__sst2__all-train
|
SetFit
| 2022-02-08T08:20:33Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-base__sst2__all-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base__sst2__all-train
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6964
- Accuracy: 0.49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.6964 | 0.49 |
| No log | 2.0 | 14 | 0.7010 | 0.49 |
| No log | 3.0 | 21 | 0.7031 | 0.49 |
| No log | 4.0 | 28 | 0.7054 | 0.49 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
jgammack/roberta-base-squad
|
jgammack
| 2022-02-08T07:39:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
hgharibi/wav2vec2-xls-r-300m-fa-colab
|
hgharibi
| 2022-02-08T05:54:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-fa-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-fa-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4404
- Wer: 0.4402
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.083 | 0.75 | 300 | 3.0037 | 1.0 |
| 1.5795 | 1.5 | 600 | 0.9167 | 0.7638 |
| 0.658 | 2.25 | 900 | 0.5737 | 0.5595 |
| 0.4213 | 3.0 | 1200 | 0.4404 | 0.4402 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
LegolasTheElf/Wav2Vec2_xls_r_300m_hi_final
|
LegolasTheElf
| 2022-02-08T04:27:18Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"Openslr Multilingual",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"hi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- Openslr Multilingual
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
model-index:
- name: Wav2Vec2_xls_r_300m_hi_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_final
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the ['Openslr Multilingual and code-switching ASR challenge'](http://www.openslr.org/103/) and ['mozilla-foundation/common_voice_7_0'](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) datasets.
It achieves the following results on the evaluation set:
- Loss: 0.3035
- Wer: 0.3137
- Cer: 0.0972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.9821 | 0.64 | 400 | 0.5059 | 0.4783 | 0.1573 |
| 0.6861 | 1.28 | 800 | 0.4201 | 0.4247 | 0.1356 |
| 0.585 | 1.92 | 1200 | 0.3797 | 0.3811 | 0.1210 |
| 0.5193 | 2.56 | 1600 | 0.3577 | 0.3652 | 0.1152 |
| 0.4583 | 3.21 | 2000 | 0.3422 | 0.3519 | 0.1111 |
| 0.4282 | 3.85 | 2400 | 0.3261 | 0.3450 | 0.1071 |
| 0.3951 | 4.49 | 2800 | 0.3201 | 0.3325 | 0.1048 |
| 0.3619 | 5.13 | 3200 | 0.3167 | 0.3296 | 0.1030 |
| 0.345 | 5.77 | 3600 | 0.3157 | 0.3210 | 0.1013 |
| 0.338 | 6.41 | 4000 | 0.3051 | 0.3143 | 0.0982 |
| 0.3155 | 7.05 | 4400 | 0.3059 | 0.3154 | 0.0986 |
| 0.3057 | 7.69 | 4800 | 0.3035 | 0.3137 | 0.0972 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jinlmsft/t5-large-slots
|
jinlmsft
| 2022-02-08T04:01:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-large-slots
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-slots
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0889
- Acc: 0.76
- True Num: 11167
- Num: 14748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | True Num | Num |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:--------:|:-----:|
| 0.3539 | 0.56 | 1000 | 0.2669 | 0.56 | 8264 | 14748 |
| 0.2523 | 1.13 | 2000 | 0.2031 | 0.56 | 8317 | 14748 |
| 0.2003 | 1.69 | 3000 | 0.1498 | 0.58 | 8496 | 14748 |
| 0.1609 | 2.25 | 4000 | 0.1284 | 0.58 | 8612 | 14748 |
| 0.1431 | 2.82 | 5000 | 0.1119 | 0.59 | 8675 | 14748 |
| 0.1236 | 3.38 | 6000 | 0.1054 | 0.59 | 8737 | 14748 |
| 0.1172 | 3.95 | 7000 | 0.0981 | 0.59 | 8773 | 14748 |
| 0.1027 | 4.51 | 8000 | 0.0955 | 0.6 | 8787 | 14748 |
| 0.0968 | 5.07 | 9000 | 0.0931 | 0.6 | 8807 | 14748 |
| 0.0911 | 5.64 | 10000 | 0.0895 | 0.6 | 8787 | 14748 |
| 0.0852 | 6.2 | 11000 | 0.0912 | 0.6 | 8840 | 14748 |
| 0.0823 | 6.76 | 12000 | 0.0880 | 0.6 | 8846 | 14748 |
| 0.0768 | 7.33 | 13000 | 0.0915 | 0.6 | 8879 | 14748 |
| 0.0758 | 7.89 | 14000 | 0.0892 | 0.6 | 8853 | 14748 |
| 0.0708 | 8.46 | 15000 | 0.0885 | 0.6 | 8884 | 14748 |
| 0.0701 | 9.02 | 16000 | 0.0884 | 0.6 | 8915 | 14748 |
| 0.0685 | 9.58 | 17000 | 0.0884 | 0.6 | 8921 | 14748 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ASCCCCCCCC/PENGMENGJIE-finetuned-emotion
|
ASCCCCCCCC
| 2022-02-08T03:32:48Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model_index:
- name: PENGMENGJIE-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PENGMENGJIE-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.9.0
- Pytorch 1.7.1+cpu
- Datasets 1.17.0
- Tokenizers 0.10.3
|
gagan3012/ViTGPT2I2A
|
gagan3012
| 2022-02-08T03:27:44Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"image-captioning",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-captioning
- generated_from_trainer
model-index:
- name: ViTGPT2I2A
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViTGPT2I2A
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the vizwiz dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1528 | 0.17 | 1000 | 0.0869 |
| 0.0899 | 0.34 | 2000 | 0.0817 |
| 0.084 | 0.51 | 3000 | 0.0790 |
| 0.0814 | 0.68 | 4000 | 0.0773 |
| 0.0803 | 0.85 | 5000 | 0.0757 |
| 0.077 | 1.02 | 6000 | 0.0745 |
| 0.0739 | 1.19 | 7000 | 0.0740 |
| 0.0719 | 1.37 | 8000 | 0.0737 |
| 0.0717 | 1.54 | 9000 | 0.0730 |
| 0.0731 | 1.71 | 10000 | 0.0727 |
| 0.0708 | 1.88 | 11000 | 0.0720 |
| 0.0697 | 2.05 | 12000 | 0.0717 |
| 0.0655 | 2.22 | 13000 | 0.0719 |
| 0.0653 | 2.39 | 14000 | 0.0719 |
| 0.0657 | 2.56 | 15000 | 0.0712 |
| 0.0663 | 2.73 | 16000 | 0.0710 |
| 0.0654 | 2.9 | 17000 | 0.0708 |
| 0.0645 | 3.07 | 18000 | 0.0716 |
| 0.0616 | 3.24 | 19000 | 0.0712 |
| 0.0607 | 3.41 | 20000 | 0.0712 |
| 0.0611 | 3.58 | 21000 | 0.0711 |
| 0.0615 | 3.76 | 22000 | 0.0711 |
| 0.0614 | 3.93 | 23000 | 0.0710 |
| 0.0594 | 4.1 | 24000 | 0.0716 |
| 0.0587 | 4.27 | 25000 | 0.0715 |
| 0.0574 | 4.44 | 26000 | 0.0715 |
| 0.0579 | 4.61 | 27000 | 0.0715 |
| 0.0581 | 4.78 | 28000 | 0.0715 |
| 0.0579 | 4.95 | 29000 | 0.0715 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
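A possible inference sketch, assuming the repository ships a `VisionEncoderDecoder` checkpoint together with its image feature extractor and tokenizer (if it does not, those would need to be loaded from the respective base models); `photo.jpg` is a placeholder path.
```python
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer

# Sketch only: load the captioning model and generate a caption for one image.
model = VisionEncoderDecoderModel.from_pretrained("gagan3012/ViTGPT2I2A")
feature_extractor = ViTFeatureExtractor.from_pretrained("gagan3012/ViTGPT2I2A")  # assumption: preprocessor config is in the repo
tokenizer = AutoTokenizer.from_pretrained("gagan3012/ViTGPT2I2A")                # assumption: tokenizer files are in the repo

image = Image.open("photo.jpg").convert("RGB")  # placeholder image path
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=32, num_beams=4)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```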
|
ccoreilly/wav2vec2-large-100k-voxpopuli-catala
|
ccoreilly
| 2022-02-08T00:59:52Z | 14 | 2 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"speech-to-text",
"ca",
"dataset:common_voice",
"dataset:parlament_parla",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ca
datasets:
- common_voice
- parlament_parla
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- speech-to-text
license: apache-2.0
model-index:
- name: Catalan VoxPopuli Wav2Vec2 Large
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
datasets:
- name: Common Voice ca
type: common_voice
args: ca
- name: ParlamentParla
url: https://www.openslr.org/59/
metrics:
- name: Test WER
type: wer
value: 5.98
- name: Google Crowsourced Corpus WER
type: wer
value: 12.14
- name: Audiobook “La llegenda de Sant Jordi” WER
type: wer
value: 12.02
---
# Wav2Vec2-Large-100k-VoxPopuli-Català
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL:**
https://huggingface.co/softcatala/wav2vec2-large-100k-voxpopuli-catala
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) on Catalan language using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets.
**Attention:** The split train/dev/test used does not fully map with the CommonVoice 6.1 dataset. A custom split was used combining both the CommonVoice and ParlamentParla dataset and can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test dataset will produce a biased WER as 1144 audio files of that dataset were used in training/evaluation of this model.
WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv) which was not seen by the model during training/evaluation.
You can find training and evaluation scripts in the github repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala)
When using this model, make sure that your speech input is sampled at 16kHz.
## Results
Word error rate was evaluated on the following datasets unseen by the model:
| Dataset | WER |
| ------- | --- |
| [Test split CV+ParlamentParla](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv) | 5.98% |
| [Google Crowdsourced Corpus](https://www.openslr.org/69/) | 12.14% |
| Audiobook “La llegenda de Sant Jordi” | 12.02% |
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
|
softcatala/wav2vec2-large-xlsr-catala
|
softcatala
| 2022-02-08T00:23:02Z | 82,658 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ca",
"dataset:common_voice",
"dataset:parlament_parla",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ca
datasets:
- common_voice
- parlament_parla
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Catalan XLSR Wav2Vec2 Large
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
datasets:
- name: Common Voice ca
type: common_voice
args: ca
- name: ParlamentParla
url: https://www.openslr.org/59/
metrics:
- name: Test WER
type: wer
value: 6.92
- name: Google Crowsourced Corpus WER
type: wer
value: 12.99
- name: Audiobook “La llegenda de Sant Jordi” WER
type: wer
value: 13.23
---
# Wav2Vec2-Large-XLSR-Català
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Catalan language using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets.
**Attention:** The split train/dev/test used does not fully map with the CommonVoice 6.1 dataset. A custom split was used combining both the CommonVoice and ParlamentParla dataset and can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test dataset will produce a biased WER as 1144 audio files of that dataset were used in training/evaluation of this model.
WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test.csv) which was not seen by the model during training/evaluation.
You can find training and evaluation scripts in the github repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala)
When using this model, make sure that your speech input is sampled at 16kHz.
## Results
Word error rate was evaluated on the following datasets unseen by the model:
| Dataset | WER |
| ------- | --- |
| [Test split CV+ParlamentParla](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test.csv) | 6.92% |
| [Google Crowdsourced Corpus](https://www.openslr.org/69/) | 12.99% |
| Audiobook “La llegenda de Sant Jordi” | 13.23% |
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
|
jgammack/MTL-bert-base-uncased
|
jgammack
| 2022-02-07T23:09:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MTL-bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4409 | 1.0 | 99 | 2.1982 |
| 2.2905 | 2.0 | 198 | 2.1643 |
| 2.1974 | 3.0 | 297 | 2.1168 |
| 2.15 | 4.0 | 396 | 2.0023 |
| 2.0823 | 5.0 | 495 | 2.0199 |
| 2.0752 | 6.0 | 594 | 1.9061 |
| 2.0408 | 7.0 | 693 | 1.9770 |
| 1.9984 | 8.0 | 792 | 1.9322 |
| 1.9933 | 9.0 | 891 | 1.9167 |
| 1.9806 | 10.0 | 990 | 1.9652 |
| 1.9436 | 11.0 | 1089 | 1.9308 |
| 1.9491 | 12.0 | 1188 | 1.9064 |
| 1.929 | 13.0 | 1287 | 1.8831 |
| 1.9096 | 14.0 | 1386 | 1.8927 |
| 1.9032 | 15.0 | 1485 | 1.9117 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Plim/test_lm
|
Plim
| 2022-02-07T23:08:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"fr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
model-index:
- name: XLS-R-1B - French
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: fr
metrics:
- name: Test WER
type: wer
value: 18.33
- name: Test CER
type: cer
value: 5.60
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Test WER
type: wer
value: 60.25
- name: Test CER
type: cer
value: 15.68
---
## Model description
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 4.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.9827 | 0.29 | 1000 | inf | 0.2937 |
| 1.0203 | 0.57 | 2000 | inf | 0.2711 |
| 1.0048 | 0.86 | 3000 | inf | 0.2620 |
| 0.9858 | 1.15 | 4000 | inf | 0.2522 |
| 0.9709 | 1.43 | 5000 | inf | 0.2365 |
| 0.9347 | 1.72 | 6000 | inf | 0.2332 |
| 0.9256 | 2.01 | 7000 | inf | 0.2261 |
| 0.8936 | 2.29 | 8000 | inf | 0.2203 |
| 0.877 | 2.58 | 9000 | inf | 0.2096 |
| 0.8393 | 2.87 | 10000 | inf | 0.2017 |
| 0.8156 | 3.15 | 11000 | inf | 0.1936 |
| 0.8015 | 3.44 | 12000 | inf | 0.1880 |
| 0.774 | 3.73 | 13000 | inf | 0.1834 |
It achieves its best result on the validation set at step 13000:
- Wer: 0.1834
An issue occurred while computing the validation loss, which is why it is reported as `inf` in the table above.
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8` with split `test`
```bash
python eval.py --model_id Plim/xls-r-1b-cv_8-fr --dataset mozilla-foundation/common_voice_8_0 --config fr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id Plim/xls-r-1b-cv_8-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
|
jgammack/SAE-roberta-base
|
jgammack
| 2022-02-07T22:14:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: SAE-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAE-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9847 | 1.0 | 79 | 1.8238 |
| 1.9142 | 2.0 | 158 | 1.8299 |
| 1.8613 | 3.0 | 237 | 1.7636 |
| 1.8384 | 4.0 | 316 | 1.8048 |
| 1.8193 | 5.0 | 395 | 1.7734 |
| 1.7985 | 6.0 | 474 | 1.7271 |
| 1.7758 | 7.0 | 553 | 1.8525 |
| 1.7611 | 8.0 | 632 | 1.7716 |
| 1.7599 | 9.0 | 711 | 1.7913 |
| 1.7118 | 10.0 | 790 | 1.7578 |
| 1.7003 | 11.0 | 869 | 1.7598 |
| 1.7072 | 12.0 | 948 | 1.6942 |
| 1.6511 | 13.0 | 1027 | 1.6955 |
| 1.6802 | 14.0 | 1106 | 1.7837 |
| 1.7048 | 15.0 | 1185 | 1.7377 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
robot-test/old-clip-tokenizer
|
robot-test
| 2022-02-07T21:44:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
Old version of the CLIP fast tokenizer
cf [this issue](https://github.com/huggingface/transformers/issues/12648) on transformers
|
mrm8488/a2c-BreakoutNoFrameskip-v4
|
mrm8488
| 2022-02-07T20:45:11Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
- ATARI
- Breakout
---
# A2C Breakout (No frame skip) v4 🤖🎮
This is a pre-trained model of an A2C agent playing Breakout (NoFrameskip-v4), trained with the [stable-baselines3](https://github.com/DLR-RM/stable-baselines3) library.
<video loop="" autoplay="" controls="" src="https://huggingface.co/mrm8488/a2c-BreakoutNoFrameskip-v4/resolve/main/output.mp4"></video>
### Usage (with Stable-baselines3)
Using this model becomes easy when you have stable-baselines3 and huggingface_sb3 installed:
```
pip install stable-baselines3
pip install huggingface_sb3
```
Then, you can use the model like this:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack
# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository
checkpoint = load_from_hub(repo_id="mrm8488/a2c-BreakoutNoFrameskip-v4", filename="a2c-BreakoutNoFrameskip-v4.zip")
model = A2C.load(checkpoint)
# Evaluate the agent
eval_env = make_atari_env('BreakoutNoFrameskip-v4')
eval_env = VecFrameStack(eval_env, n_stack=4)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Watch the agent play
obs = eval_env.reset()
for i in range(1000):
    action, _state = model.predict(obs)
    obs, reward, done, info = eval_env.step(action)
    eval_env.render()
    if done:
        obs = eval_env.reset()
eval_env.close()
```
### Evaluation Results
Mean reward: 242.40 +/- 98.97
|
LegolasTheElf/Wav2Vec2_xls_r_300m_hi_cv7
|
LegolasTheElf
| 2022-02-07T19:16:59Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: Wav2Vec2_xls_r_300m_hi_cv7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_cv7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6567
- Wer: 0.6273
- Cer: 0.2093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 5.6969 | 9.52 | 400 | 3.3092 | 1.0 | 0.9800 |
| 1.7721 | 19.05 | 800 | 0.7769 | 0.7045 | 0.2367 |
| 0.6384 | 28.57 | 1200 | 0.6567 | 0.6273 | 0.2093 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
elozano/tweet_emotion_eval
|
elozano
| 2022-02-07T18:04:47Z | 5 | 4 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
datasets:
- tweet_eval
language: en
widget:
- text: "Stop sharing which songs did you listen to during this year on Spotify, NOBODY CARES"
example_title: "Anger"
- text: "I love that joke HAHAHAHAHA"
example_title: "Joy"
- text: "Despite I've not studied a lot for this exam, I think I will pass 😜"
example_title: "Optimism"
- text: "My dog died this morning..."
example_title: "Sadness"
---
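The card only defines widget examples; as a usage sketch (not part of the original card), the classifier can be queried with the `text-classification` pipeline using one of the widget inputs above.
```python
from transformers import pipeline

# Sketch: classify one of the widget example tweets from this card.
classifier = pipeline("text-classification", model="elozano/tweet_emotion_eval")
print(classifier("I love that joke HAHAHAHAHA"))
```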
|
elozano/tweet_offensive_eval
|
elozano
| 2022-02-07T17:59:03Z | 10 | 3 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
datasets:
- tweet_eval
language: en
widget:
- text: "You're a complete idiot!"
example_title: "Offensive"
- text: "I am tired of studying for tomorrow's exam"
example_title: "Non-Offensive"
---
|
huggingtweets/cu_coquin
|
huggingtweets
| 2022-02-07T16:16:12Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/cu_coquin/1644250567283/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442129295477035013/15LSPrJo_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Manu’</div>
<div style="text-align: center; font-size: 14px;">@cu_coquin</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Manu’.
| Data | Manu’ |
| --- | --- |
| Tweets downloaded | 1982 |
| Retweets | 63 |
| Short tweets | 291 |
| Tweets kept | 1628 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jyazmuh8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cu_coquin's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29a5jk2r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29a5jk2r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cu_coquin')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
shahukareem/wav2vec2-xls-r-300m-dv
|
shahukareem
| 2022-02-07T15:55:39Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: dv
metrics:
- name: Test WER
type: wer
value: 24.72
- name: Test CER
type: cer
value: 4.17
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-dv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2206
- Wer: 0.2451
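
For orientation, here is a minimal transcription sketch; the audio file name is a placeholder and the resampling step assumes the bundled processor reports the rate the model expects (typically 16 kHz):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "shahukareem/wav2vec2-xls-r-300m-dv"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a mono recording (placeholder path) and resample to the rate the feature extractor expects.
speech, sr = torchaudio.load("sample.wav")
target_sr = processor.feature_extractor.sampling_rate
speech = torchaudio.functional.resample(speech, sr, target_sr).squeeze(0)

inputs = processor(speech.numpy(), sampling_rate=target_sr, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```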
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.9623 | 0.66 | 400 | 3.3010 | 1.0 |
| 3.2238 | 1.33 | 800 | 2.8950 | 1.0 |
| 1.1988 | 1.99 | 1200 | 0.5277 | 0.6681 |
| 0.6084 | 2.65 | 1600 | 0.4113 | 0.5831 |
| 0.4973 | 3.32 | 2000 | 0.3538 | 0.5333 |
| 0.4476 | 3.98 | 2400 | 0.3201 | 0.5081 |
| 0.3999 | 4.64 | 2800 | 0.2917 | 0.4759 |
| 0.3779 | 5.31 | 3200 | 0.2788 | 0.4672 |
| 0.3457 | 5.97 | 3600 | 0.2667 | 0.4557 |
| 0.3222 | 6.63 | 4000 | 0.2549 | 0.4452 |
| 0.3129 | 7.3 | 4400 | 0.2491 | 0.4266 |
| 0.2927 | 7.96 | 4800 | 0.2488 | 0.4246 |
| 0.2786 | 8.62 | 5200 | 0.2429 | 0.4145 |
| 0.2756 | 9.29 | 5600 | 0.2453 | 0.4150 |
| 0.258 | 9.95 | 6000 | 0.2282 | 0.4109 |
| 0.251 | 10.61 | 6400 | 0.2307 | 0.4012 |
| 0.2397 | 11.28 | 6800 | 0.2275 | 0.4 |
| 0.2312 | 11.94 | 7200 | 0.2244 | 0.3889 |
| 0.2323 | 12.6 | 7600 | 0.2247 | 0.3983 |
| 0.216 | 13.27 | 8000 | 0.2301 | 0.3863 |
| 0.2169 | 13.93 | 8400 | 0.2224 | 0.3782 |
| 0.2089 | 14.59 | 8800 | 0.2276 | 0.3771 |
| 0.2042 | 15.26 | 9200 | 0.2286 | 0.3784 |
| 0.1953 | 15.92 | 9600 | 0.2235 | 0.3822 |
| 0.1876 | 16.58 | 10000 | 0.2267 | 0.3674 |
| 0.186 | 17.25 | 10400 | 0.2295 | 0.3676 |
| 0.1847 | 17.91 | 10800 | 0.2244 | 0.3608 |
| 0.178 | 18.57 | 11200 | 0.2229 | 0.3526 |
| 0.1751 | 19.24 | 11600 | 0.2219 | 0.3483 |
| 0.17 | 19.9 | 12000 | 0.2241 | 0.3503 |
| 0.1641 | 20.56 | 12400 | 0.2187 | 0.3403 |
| 0.1629 | 21.23 | 12800 | 0.2135 | 0.3433 |
| 0.1568 | 21.89 | 13200 | 0.2117 | 0.3358 |
| 0.1585 | 22.55 | 13600 | 0.2151 | 0.3332 |
| 0.1512 | 23.22 | 14000 | 0.2097 | 0.3344 |
| 0.1427 | 23.88 | 14400 | 0.2119 | 0.3255 |
| 0.1458 | 24.54 | 14800 | 0.2209 | 0.3213 |
| 0.1413 | 25.21 | 15200 | 0.2228 | 0.3202 |
| 0.1363 | 25.87 | 15600 | 0.2071 | 0.3207 |
| 0.1302 | 26.53 | 16000 | 0.2094 | 0.3138 |
| 0.1283 | 27.2 | 16400 | 0.2193 | 0.3132 |
| 0.1278 | 27.86 | 16800 | 0.2197 | 0.3103 |
| 0.1271 | 28.52 | 17200 | 0.2133 | 0.3009 |
| 0.1243 | 29.19 | 17600 | 0.2202 | 0.3026 |
| 0.1182 | 29.85 | 18000 | 0.2092 | 0.3046 |
| 0.1171 | 30.51 | 18400 | 0.2142 | 0.2947 |
| 0.1156 | 31.18 | 18800 | 0.2219 | 0.2926 |
| 0.1129 | 31.84 | 19200 | 0.2194 | 0.2848 |
| 0.1099 | 32.5 | 19600 | 0.2218 | 0.2869 |
| 0.1045 | 33.17 | 20000 | 0.2183 | 0.2803 |
| 0.1057 | 33.83 | 20400 | 0.2242 | 0.2896 |
| 0.1056 | 34.49 | 20800 | 0.2189 | 0.2838 |
| 0.1039 | 35.16 | 21200 | 0.2256 | 0.2819 |
| 0.1007 | 35.82 | 21600 | 0.2196 | 0.2743 |
| 0.1012 | 36.48 | 22000 | 0.2218 | 0.2752 |
| 0.098 | 37.15 | 22400 | 0.2181 | 0.2721 |
| 0.0963 | 37.81 | 22800 | 0.2162 | 0.2691 |
| 0.0943 | 38.47 | 23200 | 0.2148 | 0.2686 |
| 0.0959 | 39.14 | 23600 | 0.2194 | 0.2658 |
| 0.0904 | 39.8 | 24000 | 0.2170 | 0.2641 |
| 0.0898 | 40.46 | 24400 | 0.2129 | 0.2585 |
| 0.0886 | 41.13 | 24800 | 0.2199 | 0.2606 |
| 0.088 | 41.79 | 25200 | 0.2155 | 0.2595 |
| 0.0863 | 42.45 | 25600 | 0.2169 | 0.2564 |
| 0.0876 | 43.12 | 26000 | 0.2178 | 0.2529 |
| 0.0827 | 43.78 | 26400 | 0.2171 | 0.2559 |
| 0.087 | 44.44 | 26800 | 0.2192 | 0.2530 |
| 0.0818 | 45.11 | 27200 | 0.2180 | 0.2496 |
| 0.0811 | 45.77 | 27600 | 0.2207 | 0.2502 |
| 0.0828 | 46.43 | 28000 | 0.2186 | 0.2502 |
| 0.0796 | 47.1 | 28400 | 0.2203 | 0.2468 |
| 0.0804 | 47.76 | 28800 | 0.2201 | 0.2453 |
| 0.0791 | 48.42 | 29200 | 0.2204 | 0.2477 |
| 0.0777 | 49.09 | 29600 | 0.2197 | 0.2466 |
| 0.0775 | 49.75 | 30000 | 0.2206 | 0.2451 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ahmedrachid/FinancialBERT
|
ahmedrachid
| 2022-02-07T15:00:03Z | 178 | 27 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: en
widget:
- text: Tesla remains one of the highest [MASK] stocks on the market. Meanwhile, Aurora Innovation is a pre-revenue upstart that shows promise.
- text: Asian stocks [MASK] from a one-year low on Wednesday as U.S. share futures and oil recovered from the previous day's selloff, but uncertainty over the impact of the Omicron
- text: U.S. stocks were set to rise on Monday, led by [MASK] in Apple which neared $3 trillion in market capitalization, while investors braced for a Federal Reserve meeting later this week.
tags:
- fill-mask
---
**FinancialBERT** is a BERT model pre-trained on a large corpus of financial texts. Its purpose is to advance research and practice in financial NLP, so that financial practitioners and researchers can benefit from it without needing the significant computational resources required to train the model from scratch.
The model was trained on a large corpus of financial texts:
- *TRC2-financial*: 1.8M news articles that were published by Reuters between 2008 and 2010.
- *Bloomberg News*: 400,000 articles between 2006 and 2013.
- *Corporate Reports*: 192,000 filings (10-K & 10-Q).
- *Earnings Calls*: 42,156 documents.
More details on `FinancialBERT` can be found at: https://www.researchgate.net/publication/358284785_FinancialBERT_-_A_Pretrained_Language_Model_for_Financial_Text_Mining
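As a quick sanity check, the model can be loaded with the standard fill-mask pipeline; the example sentence below is the first widget prompt above:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ahmedrachid/FinancialBERT")
for pred in fill_mask("Tesla remains one of the highest [MASK] stocks on the market."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")  # candidate token and its probability
```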
> Created by [Ahmed Rachid Hazourli](https://www.linkedin.com/in/ahmed-rachid/)
|
huggingtweets/r2devops_io
|
huggingtweets
| 2022-02-07T14:42:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/r2devops_io/1644244942715/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1467763268559253504/kLy9pmCe_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">R2Devops</div>
<div style="text-align: center; font-size: 14px;">@r2devops_io</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from R2Devops.
| Data | R2Devops |
| --- | --- |
| Tweets downloaded | 277 |
| Retweets | 57 |
| Short tweets | 4 |
| Tweets kept | 216 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2mg7zs5q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @r2devops_io's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28hfbi0v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28hfbi0v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/r2devops_io')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
TheDiamondKing/DialoGPT-small-harrypotter
|
TheDiamondKing
| 2022-02-07T14:13:13Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- conversational
---
# A conversational AI made with GPT-2 (DialoGPT-small) and trained on Harry Potter transcripts
Currently working on text-to-speech and speech recognition.
Likes to say "I'm not a wizard".
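A minimal chat loop, following the usual DialoGPT pattern (the prompts and generation settings are illustrative, not taken from this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheDiamondKing/DialoGPT-small-harrypotter"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chat_history_ids = None
for _ in range(3):  # three conversational turns
    user_ids = tokenizer.encode(input(">> You: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = user_ids if chat_history_ids is None else torch.cat([chat_history_ids, user_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```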
|
lgris/base_10k_8khz_pt
|
lgris
| 2022-02-07T11:53:39Z | 452 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---
# Wav2vec 2.0 for Portuguese in 8kHz
This is a fine-tuned model from [facebook/wav2vec2-base-10k-voxpopuli](https://huggingface.co/facebook/wav2vec2-base-10k-voxpopuli)
Datasets used to fine-tune the model:
CETUC: contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the CETEN-Folha corpus.
Common Voice 7.0: a project proposed by the Mozilla Foundation with the goal of creating a wide, open dataset in different languages. In this project, volunteers donate and validate speech using the official site.
Lapsbm: "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. It contains 35 speakers (10 females), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audio was recorded at 22.05 kHz without environment control.
Multilingual Librispeech (MLS): a massive dataset available in many languages. The MLS is based on audiobook recordings in public domain like LibriVox. The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese used in this work (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
Multilingual TEDx: a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly Brazilian Portuguese variant) contains 164 hours of transcribed speech.
Sidney (SID): contains 5,777 utterances recorded by 72 speakers (20 women) from 17 to 59 years old, with fields such as place of birth, age, gender, education, and occupation.
VoxForge: a project with the goal of building open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16 kHz to 44.1 kHz.
VoxPopuli: the unlabeled corpus used to pre-train the base model ([wav2vec2-base-10k-voxpopuli](https://huggingface.co/facebook/wav2vec2-base-10k-voxpopuli)).
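The card ships no usage code; a minimal inference sketch follows, assuming an 8 kHz recording at a placeholder path and resampling to whatever rate the bundled processor reports:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "lgris/base_10k_8khz_pt"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("call_8khz.wav")  # placeholder: telephone-quality Portuguese audio
target_sr = processor.feature_extractor.sampling_rate
if sr != target_sr:
    speech = torchaudio.functional.resample(speech, sr, target_sr)

inputs = processor(speech.squeeze(0).numpy(), sampling_rate=target_sr, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```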
|
lgris/bp500-base100k_voxpopuli
|
lgris
| 2022-02-07T11:53:19Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"arxiv:2012.03411",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---
# bp500-base100k_voxpopuli: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset
This is a demonstration of a Wav2vec 2.0 model fine-tuned for Brazilian Portuguese using the following datasets:
- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus.
- [Common Voice 7.0](https://commonvoice.mozilla.org/pt): a project proposed by the Mozilla Foundation with the goal of creating a wide, open dataset in different languages. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt).
- [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. It contains 35 speakers (10 females), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audio was recorded at 22.05 kHz without environment control.
- [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in public domain like [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
- [Multilingual TEDx](http://www.openslr.org/100): a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly Brazilian Portuguese variant) contains 164 hours of transcribed speech.
- [Sidney](https://igormq.github.io/datasets/) (SID): contains 5,777 utterances recorded by 72 speakers (20 women) from 17 to 59 years old, with fields such as place of birth, age, gender, education, and occupation.
- [VoxForge](http://www.voxforge.org/): a project with the goal of building open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16 kHz to 44.1 kHz.
These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except the Common Voice dev/test sets, which were used for validation and testing, respectively. We also built test sets for all the gathered datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | 94.0h | -- | 5.4h |
| Common Voice | 37.8h | 8.9h | 9.5h |
| LaPS BM | 0.8h | -- | 0.1h |
| MLS | 161.0h | -- | 3.7h |
| Multilingual TEDx (Portuguese) | 148.9h | -- | 1.8h |
| SID | 7.2h | -- | 1.0h |
| VoxForge | 3.9h | -- | 0.1h |
| Total | 453.6h | 8.9h | 21.6h |
The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This card demonstrates a converted version of the original model. The original fairseq model is available [here](https://drive.google.com/file/d/10iESR5AQxuxF5F7w3wLbpc_9YMsYbY9H/view?usp=sharing).
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| bp\_500-base100k_voxpopuli (demonstration below) | 0.142 | 0.201 | 0.052 | 0.224 | 0.102 | 0.317 | 0.048 | 0.155 |
| bp\_500-base100k_voxpopuli + 4-gram (demonstration below) | 0.099 | 0.149 | 0.047 | 0.192 | 0.115 | 0.371 | 0.127 | 0.157 |
#### Transcription examples
| Text | Transcription |
|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
|qual o instagram dele|**qualo** **está** **gramedele**|
|o capitão foi expulso do exército porque era doido|o **capitãl** foi **exposo** do exército porque era doido|
|também por que não|também **porque** não|
|não existe tempo como o presente|não existe tempo como *o* presente|
|eu pulei para salvar rachel|eu pulei para salvar **haquel**|
|augusto cezar passos marinho|augusto **cesa** **passoesmarinho**|
## Demonstration
```python
MODEL_NAME = "lgris/bp500-base100k_voxpopuli"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
with torch.no_grad():
logits = self.model(input_values).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
```python
%cd bp_dataset
```
/content/bp_dataset
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.1419179499917191
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.20079950312040154
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.052780934343434324
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.22413887199364113
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.1019041538671034
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.31711268778273327
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.04826433982683982
### Tests with LM
```python
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.099518615112877
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.1488912889506362
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.047080176767676764
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.19220291966887196
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.11535498771650306
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.3707890073539895
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.12682088744588746
|
MarioPenguin/bert-model-english1
|
MarioPenguin
| 2022-02-07T11:31:41Z | 6 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-model-english1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-model-english1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0274
- Train Accuracy: 0.9914
- Validation Loss: 0.3493
- Validation Accuracy: 0.9303
- Epoch: 2
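Since the target task is not documented, the sketch below only shows how to load the TensorFlow checkpoint and read its predicted label; the input sentence is a placeholder and the label names may be generic (`LABEL_0`, `LABEL_1`, ...):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "MarioPenguin/bert-model-english1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("This product exceeded my expectations.", return_tensors="tf")  # placeholder input
logits = model(**inputs).logits
pred = int(tf.math.argmax(logits, axis=-1)[0])
print(model.config.id2label.get(pred, pred))  # label mapping comes from the checkpoint config
```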
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0366 | 0.9885 | 0.3013 | 0.9299 | 0 |
| 0.0261 | 0.9912 | 0.3445 | 0.9351 | 1 |
| 0.0274 | 0.9914 | 0.3493 | 0.9303 | 2 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
victen/distilbert-base-uncased-finetuned-emotion
|
victen
| 2022-02-07T10:42:22Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9236951195245434
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2265
- Accuracy: 0.9235
- F1: 0.9237
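A minimal inference sketch with the text-classification pipeline; the input sentence is a placeholder:

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="victen/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see them again!"))  # returns the predicted emotion label with its score
```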
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8243 | 1.0 | 250 | 0.3199 | 0.906 | 0.9025 |
| 0.2484 | 2.0 | 500 | 0.2265 | 0.9235 | 0.9237 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc_inference_only
|
deepdoctection
| 2022-02-07T10:33:04Z | 0 | 0 | null |
[
"Tensorflow",
"dataset:Pubtabnet",
"arxiv:1911.10683",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- Tensorflow
license: apache-2.0
datasets:
- Pubtabnet
---
# Tensorpack's Cascade-RCNN with FPN and Group Normalization on ResNeXt32xd4-50, trained on PubTabNet for semantic segmentation of tables
The model and its training code have been taken mainly from [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN).
Regarding the dataset, please check [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683).
The model has been trained to detect rows and columns of tables. As row and column bounding boxes are not a priori part of the annotations, they are
calculated from the bounding boxes of the cells and the intrinsic structure of the enclosed HTML.
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions in the [Get_Started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
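For orientation, a minimal analysis sketch is given below. It follows the usage pattern of the linked Get_Started tutorial and the current `deepdoctection` package; the exact factory/attribute names and the document path are assumptions, not taken from this card.

```python
# Sketch only: API names follow the public deepdoctection tutorial and may differ
# between releases (older versions import from `deep_doctection` instead).
import deepdoctection as dd

analyzer = dd.get_dd_analyzer()                       # full pipeline: layout, table segmentation, OCR
df = analyzer.analyze(path="/path/to/document.pdf")   # placeholder input document
df.reset_state()                                      # required before iterating the dataflow

for page in df:
    for table in page.tables:
        print(table.html)                             # reconstructed table structure
```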
## This is an inference model only
To reduce the size of the checkpoint, we removed all variables that are not necessary for inference. Therefore, it cannot be used for fine-tuning. To fine-tune this model, please check this [model](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc).
## How this model was trained
To recreate the training run within the **deep**doctection framework, run:
```python
import os
from deep_doctection.datasets import DatasetRegistry
from deep_doctection.eval import MetricRegistry
from deep_doctection.utils import get_configs_dir_path
from deep_doctection.train import train_faster_rcnn
pubtabnet = DatasetRegistry.get_dataset("pubtabnet")
pubtabnet.dataflow.categories.set_cat_to_sub_cat({"ITEM":"row_col"})
pubtabnet.dataflow.categories.filter_categories(categories=["ROW","COLUMN"])
path_config_yaml=os.path.join(get_configs_dir_path(),"tp/rows/conf_frcnn_rows.yaml")
path_weights = ""
dataset_train = pubtabnet
config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.STARTING_EPOCH=1", "TRAIN.CHECKPOINT_PERIOD=50"]
build_train_config=["max_datapoints=500000","rows_and_cols=True"]
dataset_val = pubtabnet
build_val_config = ["max_datapoints=2000","rows_and_cols=True"]
coco_metric = MetricRegistry.get_metric("coco")
coco_metric.set_params(max_detections=[50,200,600], area_range=[[0,1000000],[0,200],[200,800],[800,1000000]])
train_faster_rcnn(path_config_yaml=path_config_yaml,
dataset_train=dataset_train,
path_weights=path_weights,
config_overwrite=config_overwrite,
log_dir="/path/to/dir",
build_train_config=build_train_config,
dataset_val=dataset_val,
build_val_config=build_val_config,
metric=coco_metric,
pipeline_component_name="ImageLayoutService"
)
```
|
willemjan/eng
|
willemjan
| 2022-02-07T09:23:20Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:cc-by-nc-sa-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: cc-by-nc-sa-3.0
---
|
willemjan/nl2
|
willemjan
| 2022-02-07T08:52:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:cc-by-nc-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: cc-by-nc-3.0
---
|