modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
eagles/focus_sum_mT5_minshi | fcac39ad02f175ad63d25ce49868fe463a84c6b1 | 2022-04-21T04:23:12.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | eagles | null | eagles/focus_sum_mT5_minshi | 6 | null | transformers | 15,600 | ---
tags:
- generated_from_trainer
model-index:
- name: focus_sum_mT5_minshi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# focus_sum_mT5_minshi
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0930
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.268 | 83.33 | 500 | 0.0930 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ahmeddbahaa/mbart-large-50-finetuned-ar-wikilingua | b0c131ad85278ddd67ffa3e09509af1a64a262f4 | 2022-04-22T08:59:12.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"dataset:wiki_lingua",
"transformers",
"summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| summarization | false | ahmeddbahaa | null | ahmeddbahaa/mbart-large-50-finetuned-ar-wikilingua | 6 | null | transformers | 15,601 | ---
tags:
- summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: mbart-large-50-finetuned-ar-wikilingua
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-finetuned-ar-wikilingua
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0001
- Rouge-1: 22.11
- Rouge-2: 7.33
- Rouge-l: 19.75
- Gen Len: 59.4
- Bertscore: 68.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 5.2671 | 1.0 | 5111 | 4.6414 | 18.37 | 5.63 | 16.32 | 96.39 | 65.12 |
| 4.5375 | 2.0 | 10222 | 4.3144 | 20.49 | 6.64 | 18.35 | 95.44 | 65.79 |
| 4.308 | 3.0 | 15333 | 4.1592 | 21.16 | 7.09 | 18.85 | 67.75 | 67.65 |
| 4.1562 | 4.0 | 20444 | 4.0812 | 21.59 | 7.31 | 19.42 | 68.66 | 68.02 |
| 4.0749 | 5.0 | 25555 | 4.0409 | 21.99 | 7.42 | 19.82 | 66.4 | 68.05 |
| 4.0271 | 6.0 | 30666 | 4.0183 | 22.04 | 7.42 | 19.64 | 56.88 | 68.95 |
| 3.9991 | 7.0 | 35777 | 4.0042 | 22.05 | 7.35 | 19.71 | 55.75 | 68.94 |
| 3.9833 | 8.0 | 40888 | 4.0001 | 22.12 | 7.39 | 19.78 | 55.72 | 69.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
seongwcom/distilbert-base-uncased-finetuned-emotion | 325c1ccbc92350e954de563a6384b3678f5ec7a3 | 2022-04-21T08:34:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | seongwcom | null | seongwcom/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,602 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9230166540210804
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Accuracy: 0.923
- F1: 0.9230
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8643 | 1.0 | 250 | 0.3395 | 0.901 | 0.8969 |
| 0.2615 | 2.0 | 500 | 0.2251 | 0.923 | 0.9230 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dapang/distilroberta-base-mic-nlp | bb2d8bfe421602b80302f349c6ed662c8c4acfa5 | 2022-04-23T04:10:09.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dapang | null | dapang/distilroberta-base-mic-nlp | 6 | null | transformers | 15,603 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-mic-nlp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mic-nlp
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0049
- Accuracy: 0.9993
- F1: 0.9993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.740146306575944e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 188 | 0.0027 | 0.9997 | 0.9997 |
| No log | 2.0 | 376 | 0.0049 | 0.9993 | 0.9993 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0.dev20220422+cu116
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dapang/distilroberta-base-etc-sym | 4aa2e08484ae6d9732a0313ec6764513adf3837e | 2022-04-23T04:26:16.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dapang | null | dapang/distilroberta-base-etc-sym | 6 | null | transformers | 15,604 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-etc-sym
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-etc-sym
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Accuracy: 0.9997
- F1: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.740146306575944e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 262 | 0.0068 | 0.9987 | 0.9987 |
| No log | 2.0 | 524 | 0.0005 | 0.9997 | 0.9997 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0.dev20220422+cu116
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dapang/distilroberta-base-mrl | 4da0ba455038852e4a9df726880ba18472e4d975 | 2022-05-03T09:27:53.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dapang | null | dapang/distilroberta-base-mrl | 6 | null | transformers | 15,605 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-mrl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrl
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0170
- Accuracy: 0.9967
- F1: 0.9967
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.1821851463909416e-05
- train_batch_size: 400
- eval_batch_size: 400
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.0265 | 0.9946 | 0.9946 |
| No log | 2.0 | 96 | 0.0180 | 0.9962 | 0.9962 |
| No log | 3.0 | 144 | 0.0163 | 0.9962 | 0.9962 |
| No log | 4.0 | 192 | 0.0194 | 0.9946 | 0.9946 |
| No log | 5.0 | 240 | 0.0193 | 0.9942 | 0.9942 |
| No log | 6.0 | 288 | 0.0172 | 0.9967 | 0.9967 |
| No log | 7.0 | 336 | 0.0206 | 0.9954 | 0.9954 |
| No log | 8.0 | 384 | 0.0183 | 0.9962 | 0.9962 |
| No log | 9.0 | 432 | 0.0170 | 0.9967 | 0.9967 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dapang/distilroberta-base-etc | 9db6449131e3b312f1314ceabdc0dbe1d2a93e29 | 2022-05-03T09:50:16.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dapang | null | dapang/distilroberta-base-etc | 6 | null | transformers | 15,606 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-etc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-etc
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3382
- Accuracy: 0.919
- F1: 0.9190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.969790133269121e-05
- train_batch_size: 400
- eval_batch_size: 400
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 84 | 0.2372 | 0.907 | 0.9070 |
| No log | 2.0 | 168 | 0.2358 | 0.9083 | 0.9083 |
| No log | 3.0 | 252 | 0.2430 | 0.9137 | 0.9137 |
| No log | 4.0 | 336 | 0.2449 | 0.919 | 0.9190 |
| No log | 5.0 | 420 | 0.2884 | 0.9193 | 0.9193 |
| No log | 6.0 | 504 | 0.3179 | 0.9167 | 0.9167 |
| No log | 7.0 | 588 | 0.3382 | 0.919 | 0.9190 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dapang/distilroberta-base-mic | 719f9c5019d4fb042b929730a25d6dcdb283117f | 2022-05-03T09:12:59.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dapang | null | dapang/distilroberta-base-mic | 6 | null | transformers | 15,607 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-mic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mic
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3435
- Accuracy: 0.9104
- F1: 0.9103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.748413056668156e-05
- train_batch_size: 200
- eval_batch_size: 200
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 120 | 0.2830 | 0.8804 | 0.8797 |
| No log | 2.0 | 240 | 0.2398 | 0.9046 | 0.9046 |
| No log | 3.0 | 360 | 0.3474 | 0.8959 | 0.8954 |
| No log | 4.0 | 480 | 0.3435 | 0.9104 | 0.9103 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dmjimenezbravo/electricidad-small-discriminator-finetuned-usElectionTweets1Jul11Nov-spanish | 2849df754dd383db71dd7d93b424ce6cc1c9ab69 | 2022-04-26T11:00:58.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | dmjimenezbravo | null | dmjimenezbravo/electricidad-small-discriminator-finetuned-usElectionTweets1Jul11Nov-spanish | 6 | null | transformers | 15,608 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: electricidad-small-discriminator-finetuned-usElectionTweets1Jul11Nov-spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-small-discriminator-finetuned-usElectionTweets1Jul11Nov-spanish
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3327
- Accuracy: 0.7642
- F1: 0.7642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.88 | 1.0 | 1222 | 0.7491 | 0.6943 | 0.6943 |
| 0.7292 | 2.0 | 2444 | 0.6253 | 0.7544 | 0.7544 |
| 0.6346 | 3.0 | 3666 | 0.5292 | 0.7971 | 0.7971 |
| 0.565 | 4.0 | 4888 | 0.4831 | 0.8168 | 0.8168 |
| 0.4898 | 5.0 | 6110 | 0.4086 | 0.8532 | 0.8532 |
| 0.4375 | 6.0 | 7332 | 0.3411 | 0.8831 | 0.8831 |
| 0.3968 | 7.0 | 8554 | 0.2735 | 0.9100 | 0.9100 |
| 0.3321 | 8.0 | 9776 | 0.2343 | 0.9253 | 0.9253 |
| 0.3045 | 9.0 | 10998 | 0.1855 | 0.9450 | 0.9450 |
| 0.2837 | 10.0 | 12220 | 0.1539 | 0.9591 | 0.9591 |
| 0.2411 | 11.0 | 13442 | 0.1309 | 0.9650 | 0.9650 |
| 0.2203 | 12.0 | 14664 | 0.1100 | 0.9716 | 0.9716 |
| 0.1953 | 13.0 | 15886 | 0.1067 | 0.9760 | 0.9760 |
| 0.1836 | 14.0 | 17108 | 0.0755 | 0.9813 | 0.9813 |
| 0.1611 | 15.0 | 18330 | 0.0731 | 0.9829 | 0.9829 |
| 0.1479 | 16.0 | 19552 | 0.0746 | 0.9839 | 0.9839 |
| 0.138 | 17.0 | 20774 | 0.0516 | 0.9895 | 0.9895 |
| 0.129 | 18.0 | 21996 | 0.0481 | 0.9903 | 0.9903 |
| 0.1182 | 19.0 | 23218 | 0.0401 | 0.9926 | 0.9926 |
| 0.1065 | 20.0 | 24440 | 0.0488 | 0.9895 | 0.9895 |
| 0.096 | 21.0 | 25662 | 0.0333 | 0.9928 | 0.9928 |
| 0.0889 | 22.0 | 26884 | 0.0222 | 0.9951 | 0.9951 |
| 0.0743 | 23.0 | 28106 | 0.0236 | 0.9951 | 0.9951 |
| 0.0821 | 24.0 | 29328 | 0.0322 | 0.9931 | 0.9931 |
| 0.0866 | 25.0 | 30550 | 0.0135 | 0.9974 | 0.9974 |
| 0.0616 | 26.0 | 31772 | 0.0100 | 0.9980 | 0.9980 |
| 0.0641 | 27.0 | 32994 | 0.0112 | 0.9977 | 0.9977 |
| 0.0603 | 28.0 | 34216 | 0.0071 | 0.9987 | 0.9987 |
| 0.0491 | 29.0 | 35438 | 0.0088 | 0.9982 | 0.9982 |
| 0.0563 | 30.0 | 36660 | 0.0071 | 0.9982 | 0.9982 |
| 0.0467 | 31.0 | 37882 | 0.0045 | 0.9990 | 0.9990 |
| 0.0545 | 32.0 | 39104 | 0.0057 | 0.9987 | 0.9987 |
| 0.0519 | 33.0 | 40326 | 0.0048 | 0.9992 | 0.9992 |
| 0.0524 | 34.0 | 41548 | 0.0030 | 0.9995 | 0.9995 |
| 0.044 | 35.0 | 42770 | 0.0046 | 0.9990 | 0.9990 |
| 0.0442 | 36.0 | 43992 | 0.0029 | 0.9995 | 0.9995 |
| 0.0352 | 37.0 | 45214 | 0.0035 | 0.9995 | 0.9995 |
| 0.0348 | 38.0 | 46436 | 0.0029 | 0.9995 | 0.9995 |
| 0.0295 | 39.0 | 47658 | 0.0023 | 0.9995 | 0.9995 |
| 0.0289 | 40.0 | 48880 | 0.0035 | 0.9995 | 0.9995 |
| 0.0292 | 41.0 | 50102 | 0.0023 | 0.9995 | 0.9995 |
| 0.0259 | 42.0 | 51324 | 0.0027 | 0.9995 | 0.9995 |
| 0.0217 | 43.0 | 52546 | 0.0031 | 0.9995 | 0.9995 |
| 0.0278 | 44.0 | 53768 | 0.0018 | 0.9995 | 0.9995 |
| 0.0254 | 45.0 | 54990 | 0.0023 | 0.9995 | 0.9995 |
| 0.0164 | 46.0 | 56212 | 0.0016 | 0.9997 | 0.9997 |
| 0.0277 | 47.0 | 57434 | 0.0027 | 0.9997 | 0.9997 |
| 0.0158 | 48.0 | 58656 | 0.0029 | 0.9997 | 0.9997 |
| 0.0178 | 49.0 | 59878 | 0.0023 | 0.9997 | 0.9997 |
| 0.022 | 50.0 | 61100 | 0.0019 | 0.9997 | 0.9997 |
| 0.0167 | 51.0 | 62322 | 0.0018 | 0.9997 | 0.9997 |
| 0.0159 | 52.0 | 63544 | 0.0017 | 0.9997 | 0.9997 |
| 0.0105 | 53.0 | 64766 | 0.0016 | 0.9997 | 0.9997 |
| 0.0111 | 54.0 | 65988 | 0.0015 | 0.9997 | 0.9997 |
| 0.0139 | 55.0 | 67210 | 0.0021 | 0.9997 | 0.9997 |
| 0.0152 | 56.0 | 68432 | 0.0026 | 0.9997 | 0.9997 |
| 0.0191 | 57.0 | 69654 | 0.0022 | 0.9997 | 0.9997 |
| 0.0075 | 58.0 | 70876 | 0.0017 | 0.9997 | 0.9997 |
| 0.0141 | 59.0 | 72098 | 0.0016 | 0.9997 | 0.9997 |
| 0.0086 | 60.0 | 73320 | 0.0014 | 0.9997 | 0.9997 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
shiyue/wav2vec2-large-xlsr-53-chr-phonetic-with-private-data | 173ec1909defdb8664cfc7da63f56e1bf14721a0 | 2022-04-24T17:47:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | shiyue | null | shiyue/wav2vec2-large-xlsr-53-chr-phonetic-with-private-data | 6 | null | transformers | 15,609 | Entry not found |
accelotron/xlm-roberta-finetune-muserc | 5ecf874e6248ddc059263dce422e20f12cbe0f25 | 2022-04-25T10:04:49.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | accelotron | null | accelotron/xlm-roberta-finetune-muserc | 6 | null | transformers | 15,610 | xlm-RoBERTa-base fine-tuned for MuSeRC task. |
vblagoje/greaselm-csqa | e8b5ff51c1fd3587350364dc4c76b63f5e2f3dfe | 2022-05-15T14:02:12.000Z | [
"pytorch",
"greaselm",
"transformers"
]
| null | false | vblagoje | null | vblagoje/greaselm-csqa | 6 | null | transformers | 15,611 | |
Tristo/sociopath | e8fe782dc056842a3c75d184d522ccad313cd65d | 2022-04-25T14:10:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:cc"
]
| text-generation | false | Tristo | null | Tristo/sociopath | 6 | null | transformers | 15,612 | ---
license: cc
---
|
apkbala107/myowntamilelectraposmodel | ba074e53caef37584d0770bdcef128d84ad3e5e9 | 2022-04-25T16:32:53.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"license:cc",
"autotrain_compatible"
]
| token-classification | false | apkbala107 | null | apkbala107/myowntamilelectraposmodel | 6 | null | transformers | 15,613 | ---
license: cc
---
|
excalibur/distilbert-base-uncased-finetuned-emotion | c560ab25e5f7f51cf40066e2aca60bb5dad399cd | 2022-04-26T05:27:42.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | excalibur | null | excalibur/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,614 | Entry not found |
Real29/my-model-ptc | 1cbeef33438814e51bb8fb117151b97a4514fd13 | 2022-04-26T11:24:28.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Real29 | null | Real29/my-model-ptc | 6 | null | transformers | 15,615 | Entry not found |
spuun/kekbot-beta-3-medium | 1fb0ec761f5480cfa5a1dfe74d3c62ade8b7f9b8 | 2022-04-26T22:15:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"conversational",
"license:cc-by-nc-sa-4.0",
"co2_eq_emissions"
]
| conversational | false | spuun | null | spuun/kekbot-beta-3-medium | 6 | null | transformers | 15,616 | ---
language:
- en
tags:
- conversational
co2_eq_emissions:
emissions: "660"
source: "mlco2.github.io"
training_type: "fine-tuning"
geographical_location: "West Java, Indonesia"
hardware_used: "1 Tesla P100"
license: cc-by-nc-sa-4.0
widget:
- text: "Hey kekbot! What's up?"
example_title: "Asking what's up"
- text: "Hey kekbot! How r u?"
example_title: "Asking how he is"
---
> THIS MODEL IS IN PUBLIC BETA, PLEASE DO NOT EXPECT ANY FORM OF STABILITY IN ITS CURRENT STATE.
# Art Union server chatbot
Based on a DialoGPT-medium model, fine-tuned on a select subset (≤65k messages) of Art Union's general-chat channel chat history.
### Current issues
(Which will hopefully be fixed in future iterations.) These include, but are not limited to:
- Limited turns: after ~17 turns, output may break for no apparent reason.
- Inconsistent variance: the model occasionally behaves like an overfitted model for no apparent reason.
|
caush/Clickbait1 | 5844c6edb677c09b26a9959510b71b8c65f53c56 | 2022-05-02T20:36:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | caush | null | caush/Clickbait1 | 6 | null | transformers | 15,617 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Clickbait1
results: []
---
# Clickbait1
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the [Webis-Clickbait-17](https://zenodo.org/record/5530410) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0257
## Model description
MiniLM is a distilled model from the paper "MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers".
We fine-tune this model to score (via regression) the clickbait level of news titles.
## Intended uses & limitations
The model follows the approach described in the paper [Predicting Clickbait Strength in Online Social Media](https://aclanthology.org/2020.coling-main.425/) by Indurthi Vijayasaradhi, Syed Bakhtiyar, Gupta Manish, and Varma Vasudeva.
The model was trained on English titles.
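A minimal inference sketch (not part of the original card; it assumes the checkpoint exposes a single-output regression head, as the description above implies):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "caush/Clickbait1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Score a headline; with a regression head the single logit is read directly
# as the clickbait strength (higher means more clickbait-like).
inputs = tokenizer("You won't believe what happened next", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```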
## Training and evaluation data
We trained the model on the official training data for the challenge (clickbait17-train-170630.zip, 894 MiB, 19538 posts), plus another set that only became available after the end of the challenge (clickbait17-train-170331.zip, 157 MiB, 2459 posts).
## Training procedure
Code can be found on [GitHub](https://github.com/caush/Clickbait).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.05 | 50 | 0.0571 |
| No log | 0.09 | 100 | 0.0448 |
| No log | 0.14 | 150 | 0.0391 |
| No log | 0.18 | 200 | 0.0326 |
| No log | 0.23 | 250 | 0.0343 |
| No log | 0.27 | 300 | 0.0343 |
| No log | 0.32 | 350 | 0.0343 |
| No log | 0.36 | 400 | 0.0346 |
| No log | 0.41 | 450 | 0.0343 |
| 0.0388 | 0.46 | 500 | 0.0297 |
| 0.0388 | 0.5 | 550 | 0.0293 |
| 0.0388 | 0.55 | 600 | 0.0301 |
| 0.0388 | 0.59 | 650 | 0.0290 |
| 0.0388 | 0.64 | 700 | 0.0326 |
| 0.0388 | 0.68 | 750 | 0.0285 |
| 0.0388 | 0.73 | 800 | 0.0285 |
| 0.0388 | 0.77 | 850 | 0.0275 |
| 0.0388 | 0.82 | 900 | 0.0314 |
| 0.0388 | 0.87 | 950 | 0.0309 |
| 0.0297 | 0.91 | 1000 | 0.0277 |
| 0.0297 | 0.96 | 1050 | 0.0281 |
| 0.0297 | 1.0 | 1100 | 0.0273 |
| 0.0297 | 1.05 | 1150 | 0.0270 |
| 0.0297 | 1.09 | 1200 | 0.0291 |
| 0.0297 | 1.14 | 1250 | 0.0293 |
| 0.0297 | 1.18 | 1300 | 0.0269 |
| 0.0297 | 1.23 | 1350 | 0.0276 |
| 0.0297 | 1.28 | 1400 | 0.0279 |
| 0.0297 | 1.32 | 1450 | 0.0267 |
| 0.0265 | 1.37 | 1500 | 0.0270 |
| 0.0265 | 1.41 | 1550 | 0.0300 |
| 0.0265 | 1.46 | 1600 | 0.0274 |
| 0.0265 | 1.5 | 1650 | 0.0274 |
| 0.0265 | 1.55 | 1700 | 0.0266 |
| 0.0265 | 1.59 | 1750 | 0.0267 |
| 0.0265 | 1.64 | 1800 | 0.0267 |
| 0.0265 | 1.68 | 1850 | 0.0280 |
| 0.0265 | 1.73 | 1900 | 0.0274 |
| 0.0265 | 1.78 | 1950 | 0.0272 |
| 0.025 | 1.82 | 2000 | 0.0261 |
| 0.025 | 1.87 | 2050 | 0.0268 |
| 0.025 | 1.91 | 2100 | 0.0268 |
| 0.025 | 1.96 | 2150 | 0.0259 |
| 0.025 | 2.0 | 2200 | 0.0257 |
| 0.025 | 2.05 | 2250 | 0.0260 |
| 0.025 | 2.09 | 2300 | 0.0263 |
| 0.025 | 2.14 | 2350 | 0.0262 |
| 0.025 | 2.19 | 2400 | 0.0269 |
| 0.025 | 2.23 | 2450 | 0.0262 |
| 0.0223 | 2.28 | 2500 | 0.0262 |
| 0.0223 | 2.32 | 2550 | 0.0267 |
| 0.0223 | 2.37 | 2600 | 0.0260 |
| 0.0223 | 2.41 | 2650 | 0.0260 |
| 0.0223 | 2.46 | 2700 | 0.0259 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Hate-speech-CNERG/urdu-codemixed-abusive-MuRIL | e37363de5b0107bd70e72b1181c9fc2a98f992ef | 2022-05-03T06:05:42.000Z | [
"pytorch",
"bert",
"text-classification",
"ur-en",
"arxiv:2204.12543",
"transformers",
"license:afl-3.0"
]
| text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/urdu-codemixed-abusive-MuRIL | 6 | null | transformers | 15,618 | ---
language: ur-en
license: afl-3.0
---
This model is used for detecting **abusive speech** in **code-mixed Urdu**. It is fine-tuned from the MuRIL model on a code-mixed Urdu abusive speech dataset.
The model is trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive).
LABEL_0 :-> Normal
LABEL_1 :-> Abusive
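A minimal usage sketch with the `transformers` pipeline (not part of the original card; the example input is a hypothetical code-mixed sentence):
```python
from transformers import pipeline

# Returns LABEL_0 (Normal) or LABEL_1 (Abusive) with a confidence score.
classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/urdu-codemixed-abusive-MuRIL",
)
print(classifier("yeh kya bakwas hai"))  # hypothetical code-mixed Urdu/English input
```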
### For more details about our paper
Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{das2022data,
title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages},
author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2204.12543},
year={2022}
}
~~~ |
jyotsana/distilbert-base-uncased-finetuned-cola | 34594b2422aaf9164409031791a1c4612381379b | 2022-05-12T17:24:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | jyotsana | null | jyotsana/distilbert-base-uncased-finetuned-cola | 6 | null | transformers | 15,619 | Entry not found |
ridvan9/autotrain-rdv-senti-analys-v2-791424369 | 052cb8e6e6f6ac4d95ed0f9591bdf5c96d0a7a22 | 2022-04-27T08:23:36.000Z | [
"pytorch",
"bert",
"text-classification",
"tr",
"dataset:ridvan9/autotrain-data-rdv-senti-analys-v2",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | ridvan9 | null | ridvan9/autotrain-rdv-senti-analys-v2-791424369 | 6 | null | transformers | 15,620 | ---
tags: autotrain
language: tr
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ridvan9/autotrain-data-rdv-senti-analys-v2
co2_eq_emissions: 4.95702490470204
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 791424369
- CO2 Emissions (in grams): 4.95702490470204
## Validation Metrics
- Loss: 0.21698406338691711
- Accuracy: 0.9337298215802888
- Macro F1: 0.9339734231139484
- Micro F1: 0.9337298215802888
- Weighted F1: 0.9340497563679602
- Macro Precision: 0.934733314676483
- Micro Precision: 0.9337298215802888
- Weighted Precision: 0.9348373701161897
- Macro Recall: 0.9336931241452828
- Micro Recall: 0.9337298215802888
- Weighted Recall: 0.9337298215802888
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ridvan9/autotrain-rdv-senti-analys-v2-791424369
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ridvan9/autotrain-rdv-senti-analys-v2-791424369", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ridvan9/autotrain-rdv-senti-analys-v2-791424369", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
EAST/autotrain-Rule-793324440 | f42cd2adf6de9f9d4a694bf0ba927a511ff75a5a | 2022-04-27T14:57:26.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"dataset:EAST/autotrain-data-Rule",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | EAST | null | EAST/autotrain-Rule-793324440 | 6 | null | transformers | 15,621 | ---
tags: autotrain
language: zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- EAST/autotrain-data-Rule
co2_eq_emissions: 0.0025078722090032795
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 793324440
- CO2 Emissions (in grams): 0.0025078722090032795
## Validation Metrics
- Loss: 0.31105440855026245
- Accuracy: 0.9473684210526315
- Precision: 0.9
- Recall: 1.0
- AUC: 0.9444444444444445
- F1: 0.9473684210526316
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/EAST/autotrain-Rule-793324440
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("EAST/autotrain-Rule-793324440", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("EAST/autotrain-Rule-793324440", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
anton-l/xtreme_s_xlsr_300m_fleurs_asr_en_us | 451446d444b5c5661de91dbaa03cbdfc9151703e | 2022-04-28T12:39:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en_us",
"dataset:google/xtreme_s",
"transformers",
"fleurs-asr",
"google/xtreme_s",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | anton-l | null | anton-l/xtreme_s_xlsr_300m_fleurs_asr_en_us | 6 | null | transformers | 15,622 | ---
language:
- en_us
license: apache-2.0
tags:
- fleurs-asr
- google/xtreme_s
- generated_from_trainer
datasets:
- google/xtreme_s
model-index:
- name: xtreme_s_xlsr_300m_fleurs_asr_en_us
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_300m_fleurs_asr_en_us
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - FLEURS.EN_US dataset.
It achieves the following results on the evaluation set:
- Cer: 0.1356
- Loss: 0.5599
- Wer: 0.3148
- Predict Samples: 647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 2.8769 | 5.0 | 200 | 2.8871 | 1.0 | 0.9878 |
| 0.2458 | 10.0 | 400 | 0.5570 | 0.4899 | 0.1951 |
| 0.0762 | 15.0 | 600 | 0.5213 | 0.3727 | 0.1562 |
| 0.0334 | 20.0 | 800 | 0.5742 | 0.3666 | 0.1543 |
| 0.0244 | 25.0 | 1000 | 0.5907 | 0.3546 | 0.1499 |
| 0.0143 | 30.0 | 1200 | 0.5961 | 0.3460 | 0.1469 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.1+cu111
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
UT/PARSBRT | 53835fc035514ff5dbfb6f8ae2f680c3c2b946d7 | 2022-04-29T10:58:27.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | UT | null | UT/PARSBRT | 6 | null | transformers | 15,623 | Entry not found |
Andrei0086/Chat-small-bot | 8120c8a72927cd2936878b355ea82f75483b6f70 | 2022-04-28T20:04:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Andrei0086 | null | Andrei0086/Chat-small-bot | 6 | null | transformers | 15,624 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Ansh/my_bert | 1033df62daf9f9d3e064b86543f584e92e568bf0 | 2022-05-04T16:50:42.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:afl-3.0"
]
| text-classification | false | Ansh | null | Ansh/my_bert | 6 | null | transformers | 15,625 | ---
license: afl-3.0
---
|
doc2query/msmarco-indonesian-mt5-base-v1 | 8f56ac36b37ea0e46f055adfbe9bf20b0117ed7b | 2022-04-29T11:58:59.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"id",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | doc2query | null | doc2query/msmarco-indonesian-mt5-base-v1 | 6 | 1 | transformers | 15,626 | ---
language: id
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python adalah bahasa pemrograman tujuan umum yang ditafsirkan, tingkat tinggi. Dibuat oleh Guido van Rossum dan pertama kali dirilis pada tahun 1991, filosofi desain Python menekankan keterbacaan kode dengan penggunaan spasi putih yang signifikan. Konstruksi bahasanya dan pendekatan berorientasi objek bertujuan untuk membantu pemrogram menulis kode yang jelas dan logis untuk proyek skala kecil dan besar."
license: apache-2.0
---
# doc2query/msmarco-indonesian-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene (see the indexing sketch after this list). The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
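As a rough illustration of the document-expansion step (an assumed setup, not from the original card: it uses the `elasticsearch` 8.x Python client, a local index, and hypothetical generated queries), the expanded document is simply the passage concatenated with its queries before BM25 indexing:
```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local Elasticsearch instance

passage = "Python adalah bahasa pemrograman tujuan umum yang ditafsirkan, tingkat tinggi."
generated_queries = ["apa itu python", "siapa yang membuat python"]  # hypothetical output of the model

# Append the generated queries to the passage so BM25 can also match their vocabulary.
expanded_doc = passage + " " + " ".join(generated_queries)
es.index(index="passages", document={"text": expanded_doc, "original": passage})
```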
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-indonesian-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python adalah bahasa pemrograman tujuan umum yang ditafsirkan, tingkat tinggi. Dibuat oleh Guido van Rossum dan pertama kali dirilis pada tahun 1991, filosofi desain Python menekankan keterbacaan kode dengan penggunaan spasi putih yang signifikan. Konstruksi bahasanya dan pendekatan berorientasi objek bertujuan untuk membantu pemrogram menulis kode yang jelas dan logis untuk proyek skala kecil dan besar."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
# Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
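If you need reproducible outputs, you can seed PyTorch's random number generator before calling `generate()` (a general PyTorch remark, not part of the original card):
```python
import torch

torch.manual_seed(42)  # makes the top_p/top_k sampling reproducible across runs
```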
## Training
This model was obtained by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
UT/MULTIBRT_DEBIAS | 10cc1a2cfae86d9914015386037fb674a570b218 | 2022-04-29T16:42:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | UT | null | UT/MULTIBRT_DEBIAS | 6 | null | transformers | 15,627 | Entry not found |
sjchoure/distilbert-base-uncased-finetuned-squad | 1a859018fd49a3466c9b52093678bf85e2124436 | 2022-04-30T21:02:02.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | sjchoure | null | sjchoure/distilbert-base-uncased-finetuned-squad | 6 | null | transformers | 15,628 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 54 | 3.3597 |
| No log | 2.0 | 108 | 2.9797 |
| No log | 3.0 | 162 | 2.9362 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
carlosaguayo/features_and_usecases | b47daa6541c5c24ff48742095665f9ef059126d9 | 2022-05-01T02:52:59.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
]
| sentence-similarity | false | carlosaguayo | null | carlosaguayo/features_and_usecases | 6 | null | sentence-transformers | 15,629 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# carlosaguayo/features_and_usecases
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('carlosaguayo/features_and_usecases')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=carlosaguayo/features_and_usecases)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 175 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
charlieoneill/distilbert-base-uncased-finetuned-tweet_eval-offensive | 0a89da8b7ca05ce0184665a0ccce7dfa148aa5e8 | 2022-05-01T03:36:21.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | charlieoneill | null | charlieoneill/distilbert-base-uncased-finetuned-tweet_eval-offensive | 6 | null | transformers | 15,630 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-tweet_eval-offensive
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: offensive
metrics:
- name: Accuracy
type: accuracy
value: 0.8089123867069486
- name: F1
type: f1
value: 0.8060281168230459
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-tweet_eval-offensive
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4185
- Accuracy: 0.8089
- F1: 0.8060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 187 | 0.4259 | 0.8059 | 0.7975 |
| 0.46 | 2.0 | 374 | 0.4185 | 0.8089 | 0.8060 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Raffay/wav2vec2-urdu-asr-project | c9c94dd35369736b6c0001f025de4d356f8c2386 | 2022-05-02T16:33:15.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | Raffay | null | Raffay/wav2vec2-urdu-asr-project | 6 | null | transformers | 15,631 | Entry not found |
Gergoe/mt5-small-finetuned-amazon-en-es | 4b975fe9326de805d44eba76488ceae97c0c941d | 2022-05-16T22:42:55.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | Gergoe | null | Gergoe/mt5-small-finetuned-amazon-en-es | 6 | 1 | transformers | 15,632 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2891
- Rouge1: 15.35
- Rouge2: 6.4925
- Rougel: 14.8921
- Rougelsum: 14.6312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 7.0622 | 1.0 | 1276 | 3.5617 | 13.2417 | 4.8928 | 12.8258 | 12.8078 |
| 4.0768 | 2.0 | 2552 | 3.4329 | 14.5681 | 6.4922 | 14.0621 | 13.9709 |
| 3.7736 | 3.0 | 3828 | 3.3393 | 15.1942 | 6.5262 | 14.7138 | 14.6049 |
| 3.5951 | 4.0 | 5104 | 3.3122 | 14.8813 | 6.2962 | 14.507 | 14.3477 |
| 3.477 | 5.0 | 6380 | 3.2991 | 15.0992 | 6.3888 | 14.8397 | 14.5606 |
| 3.4084 | 6.0 | 7656 | 3.3035 | 15.1897 | 6.2292 | 14.6686 | 14.4488 |
| 3.3661 | 7.0 | 8932 | 3.2959 | 15.3489 | 6.5702 | 14.9211 | 14.701 |
| 3.3457 | 8.0 | 10208 | 3.2891 | 15.35 | 6.4925 | 14.8921 | 14.6312 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.7.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
DioLiu/distilbert-base-uncased-finetuned-sst2 | 753d423957ec38eda9f2186830d89ab160af3da8 | 2022-05-02T03:06:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | DioLiu | null | DioLiu/distilbert-base-uncased-finetuned-sst2 | 6 | null | transformers | 15,633 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8967889908256881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5963
- Accuracy: 0.8968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.247 | 1.0 | 1404 | 0.3629 | 0.8865 |
| 0.1532 | 2.0 | 2808 | 0.3945 | 0.8979 |
| 0.0981 | 3.0 | 4212 | 0.4206 | 0.9025 |
| 0.0468 | 4.0 | 5616 | 0.5358 | 0.9014 |
| 0.0313 | 5.0 | 7020 | 0.5963 | 0.8968 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jhoonk/distilbert-base-uncased-finetuned-squad | 3c46b5d5d79d89935e9202379a2c8011b67506e6 | 2022-05-10T00:07:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | jhoonk | null | jhoonk/distilbert-base-uncased-finetuned-squad | 6 | null | transformers | 15,634 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1622
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2107 | 1.0 | 5533 | 1.1478 |
| 0.949 | 2.0 | 11066 | 1.1191 |
| 0.7396 | 3.0 | 16599 | 1.1622 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
patrickquick/BERTicelli | 458e2097f7d9c1b24aaacf66ca45e13d17b11bed | 2022-05-10T09:03:48.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:OLID",
"transformers",
"BERTicelli",
"text classification",
"abusive language",
"hate speech",
"offensive language",
"license:apache-2.0"
]
| text-classification | false | patrickquick | null | patrickquick/BERTicelli | 6 | null | transformers | 15,635 | ---
language:
- en
tags:
- BERTicelli
- text classification
- abusive language
- hate speech
- offensive language
datasets:
- OLID
license: apache-2.0
widget:
- text: "If Jamie Oliver fucks with my £3 meal deals at Tesco I'll kill the cunt."
example_title: "Example 1"
- text: "Keep up the good hard work."
example_title: "Example 2"
- text: "That's not hair. Those were polyester fibers because Yoda is (or was) a puppet."
example_title: "Example 3"
---
[Mona Allaert](https://github.com/MonaDT) •
[Leonardo Grotti](https://github.com/corvusMidnight) •
[Patrick Quick](https://github.com/patrickquick)
## Model description
BERTicelli is an English pre-trained BERT model obtained by fine-tuning the [English BERT base cased model](https://github.com/google-research/bert) on the training data from the [Offensive Language Identification Dataset (OLID)](https://scholar.harvard.edu/malmasi/olid).
This model was developed for the NLP Shared Task in the Digital Text Analysis program at the University of Antwerp (2021–2022).
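A minimal usage sketch (not part of the original card), reusing one of the widget examples above as input:
```python
from transformers import pipeline

# Classifies a sentence under the OLID offensive-language scheme.
classifier = pipeline("text-classification", model="patrickquick/BERTicelli")
print(classifier("Keep up the good hard work."))
```
|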
dragonSwing/viwav2vec2-base-1.5k | 1e2277b975cb8c9e248bbbf9e3235175851dd37c | 2022-05-17T15:14:37.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"vi",
"arxiv:2006.11477",
"transformers",
"speech",
"automatic-speech-recognition",
"license:cc-by-sa-4.0"
]
| automatic-speech-recognition | false | dragonSwing | null | dragonSwing/viwav2vec2-base-1.5k | 6 | null | transformers | 15,636 | ---
license: cc-by-sa-4.0
language: vi
tags:
- speech
- automatic-speech-recognition
---
# Wav2Vec2 base model trained on 1.5K hours of Vietnamese speech
The base model is pre-trained on 16kHz sampled speech audio from a Vietnamese speech corpus containing 1.5K hours of read and broadcast speech. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Vietnamese Automatic Speech Recognition.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
[Facebook's Wav2Vec2 blog](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
[Paper](https://arxiv.org/abs/2006.11477)
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune a pre-trained Wav2Vec2 model.
```python
import torch
from transformers import Wav2Vec2Model
model = Wav2Vec2Model.from_pretrained("dragonSwing/viwav2vec2-base-1.5k")
# Sanity check
inputs = torch.rand([1, 16000])
outputs = model(inputs)
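# outputs.last_hidden_state contains frame-level speech representations with shape
# [batch_size, num_frames, hidden_size]; for ASR, a CTC head (e.g. Wav2Vec2ForCTC)
# would be fine-tuned on top of these features using labeled transcripts.
print(outputs.last_hidden_state.shape)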
``` |
veronica320/MPE_roberta | d9c3d19bb9ab97e64e529d9a6b5a49fc53980dfd | 2022-05-03T02:21:21.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | veronica320 | null | veronica320/MPE_roberta | 6 | null | transformers | 15,637 | Entry not found |
enimai/mbart-large-50-paraphrase-finetuned-for-ru | 59d9ab04e47e86e3c7af80f969aa18272a446aa3 | 2022-05-03T17:50:48.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | enimai | null | enimai/mbart-large-50-paraphrase-finetuned-for-ru | 6 | null | transformers | 15,638 | ---
license: apache-2.0
---
|
Lauler/motions-classifier | 67f8e6b9790abe7c3ba53a8f8e53ff8da2eb94e8 | 2022-05-03T23:08:11.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Lauler | null | Lauler/motions-classifier | 6 | null | transformers | 15,639 | ## Swedish parliamentary motions party classifier
A model trained on Swedish parliamentary motions from 2018 to 2021. Given the text of a motion, the model outputs a probability for each parliamentary party of being its originator.
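Below is a minimal usage sketch with the `text-classification` pipeline (the motion text is a hypothetical example, and the party label names come from the model's configuration):
```python
from transformers import pipeline

# Load the party classifier and return a score for every party label
classifier = pipeline("text-classification", model="Lauler/motions-classifier", return_all_scores=True)

# Hypothetical motion-style sentence (Swedish)
motion = "Riksdagen ställer sig bakom det som anförs i motionen om sänkt skatt på drivmedel."
print(classifier(motion))
```
|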
jenspt/bert_regression_basic_16_batch_size | 2e22f9592657c5a1c33a96022d4c174832b7b522 | 2022-05-11T05:52:09.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | jenspt | null | jenspt/bert_regression_basic_16_batch_size | 6 | null | transformers | 15,640 | Entry not found |
laituan245/t5-v1_1-large-caption2smiles | 24a4d20ca90c7b649337b34baa12576f6290c918 | 2022-05-05T00:46:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | laituan245 | null | laituan245/t5-v1_1-large-caption2smiles | 6 | null | transformers | 15,641 | Entry not found |
himanshusrtekbox/distilbert-base-uncased-finetuned-emotion | 15f0a80848ebd9dfca6eb99b21ba8244e595425f | 2022-05-24T11:47:11.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | himanshusrtekbox | null | himanshusrtekbox/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,642 | Entry not found |
YeRyeongLee/bert-base-uncased-finetuned-0505-2 | 8d3a0c17aac32adc150cb06d65daf4e095a0a9af | 2022-05-05T06:29:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | YeRyeongLee | null | YeRyeongLee/bert-base-uncased-finetuned-0505-2 | 6 | null | transformers | 15,643 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-0505-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-0505-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4277
- Accuracy: 0.9206
- F1: 0.9205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 1373 | 0.3634 | 0.9025 | 0.9012 |
| No log | 2.0 | 2746 | 0.3648 | 0.9066 | 0.9060 |
| No log | 3.0 | 4119 | 0.3978 | 0.9189 | 0.9183 |
| No log | 4.0 | 5492 | 0.4277 | 0.9206 | 0.9205 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
benjamin/gpt2-wechsel-malagasy | 75bdd961d5659ed962b71808252d057d11af0dc4 | 2022-07-13T23:45:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"mg",
"transformers",
"license:mit"
]
| text-generation | false | benjamin | null | benjamin/gpt2-wechsel-malagasy | 6 | null | transformers | 15,644 | ---
language: mg
license: mit
---
# gpt2-wechsel-malagasy
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
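A minimal usage sketch with the `text-generation` pipeline (the prompt below is only a placeholder; any Malagasy text can be used, and the generation settings are illustrative):
```python
from transformers import pipeline

# Load the Malagasy GPT-2 model obtained via WECHSEL transfer
generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-malagasy")

# Generate a short continuation from a placeholder prompt
print(generator("Antananarivo", max_length=30, num_return_sequences=1))
```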
## Performance
| Model | PPL |
|---|---|
| `gpt2-wechsel-sundanese` | **111.72** |
| `gpt2` (retrained from scratch) | 149.46 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-scottish-gaelic` | **16.43** |
| `gpt2` (retrained from scratch) | 19.53 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-uyghur` | **34.33** |
| `gpt2` (retrained from scratch) | 42.82 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-malagasy` | **14.01** |
| `gpt2` (retrained from scratch) | 15.93 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
CaptAdorable/RickBot | 8d526e009ea4d34836d4cbdc5f67b3ec06dd9fcf | 2022-05-05T20:40:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | CaptAdorable | null | CaptAdorable/RickBot | 6 | null | transformers | 15,645 | ---
tags:
- conversational
---
# RickBot built for [Chai](https://chai.ml/)
Make your own [here](https://colab.research.google.com/drive/1o5LxBspm-C28HQvXN-PRQavapDbm5WjG?usp=sharing)
|
Milanmg/xlm-roberta-base | 1ce62b6fc3e3c9abe9b8843580a0406e9963005f | 2022-05-14T03:16:45.000Z | [
"pytorch",
"jax",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Milanmg | null | Milanmg/xlm-roberta-base | 6 | null | transformers | 15,646 | Entry not found |
avuhong/protBERTbfd_AAV2_classification | e59790ea300611518324c1709597e16673c4f059 | 2022-05-07T16:31:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | avuhong | null | avuhong/protBERTbfd_AAV2_classification | 6 | null | transformers | 15,647 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: protBERTbfd_AAV2_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# protBERTbfd_AAV2_classification
This model is a fine-tuned version of [Rostlab/prot_bert_bfd](https://huggingface.co/Rostlab/prot_bert_bfd) on an AAV2 dataset of ~230k sequences (Bryant et al., 2020).
The wild-type (WT) sequence (aa 561-588): D E E E I R T T N P V A T E Q Y G S V S T N L Q R G N R
Maximum sequence length: 50
It achieves the following results on the evaluation set. Note: these are the results from the last epoch; the pushed model is expected to correspond to the best checkpoint (lowest validation loss), though this has not been verified.
- Loss: 0.1341
- Accuracy: 0.9615
- F1: 0.9627
- Precision: 0.9637
- Recall: 0.9618
- Auroc: 0.9615
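A minimal sketch of how the classifier could be queried (assumptions: as with other ProtBERT-based models, the input is an upper-case amino-acid sequence with spaces between residues, and the label names are taken from the model's configuration):
```python
from transformers import pipeline

# Load the fine-tuned AAV2 variant classifier
classifier = pipeline("text-classification", model="avuhong/protBERTbfd_AAV2_classification")

# Score the wild-type aa 561-588 segment shown above (space-separated residues)
wt_segment = "D E E E I R T T N P V A T E Q Y G S V S T N L Q R G N R"
print(classifier(wt_segment))
```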
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Auroc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| No log | 1.0 | 116 | 0.2582 | 0.9064 | 0.9157 | 0.8564 | 0.9839 | 0.9038 |
| No log | 2.0 | 232 | 0.1447 | 0.9424 | 0.9432 | 0.9618 | 0.9252 | 0.9430 |
| No log | 3.0 | 348 | 0.1182 | 0.9542 | 0.9556 | 0.9573 | 0.9539 | 0.9542 |
| No log | 4.0 | 464 | 0.1129 | 0.9585 | 0.9602 | 0.9520 | 0.9685 | 0.9581 |
| 0.2162 | 5.0 | 580 | 0.1278 | 0.9553 | 0.9558 | 0.9776 | 0.9351 | 0.9561 |
| 0.2162 | 6.0 | 696 | 0.1139 | 0.9587 | 0.9607 | 0.9465 | 0.9752 | 0.9581 |
| 0.2162 | 7.0 | 812 | 0.1127 | 0.9620 | 0.9633 | 0.9614 | 0.9652 | 0.9619 |
| 0.2162 | 8.0 | 928 | 0.1341 | 0.9615 | 0.9627 | 0.9637 | 0.9618 | 0.9615 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huxxx657/bart-base-finetuned-squad | e544da6341f2fd6ee3874a90862b6a1771b38e45 | 2022-05-07T23:42:53.000Z | [
"pytorch",
"tensorboard",
"bart",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | huxxx657 | null | huxxx657/bart-base-finetuned-squad | 6 | null | transformers | 15,648 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-squad
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4988 | 0.2 | 1108 | 1.2399 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
deepgai/finetuned-tweet_eval-sentiment | 9df75ba85ac9f0aa3c3660c9535c976c10c854a8 | 2022-05-08T15:28:39.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers"
]
| text-classification | false | deepgai | null | deepgai/finetuned-tweet_eval-sentiment | 6 | null | transformers | 15,649 | |
nikuznetsov/roberta-base-finetuned-cola | dd52d7cb103b123742ed154245fb23b18ef48646 | 2022-05-08T21:02:05.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | nikuznetsov | null | nikuznetsov/roberta-base-finetuned-cola | 6 | null | transformers | 15,650 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: roberta-base-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5880199146512337
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-cola
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7832
- Matthews Correlation: 0.5880
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5027 | 1.0 | 535 | 0.6017 | 0.4369 |
| 0.33 | 2.0 | 1070 | 0.5066 | 0.5521 |
| 0.2311 | 3.0 | 1605 | 0.6269 | 0.5727 |
| 0.1767 | 4.0 | 2140 | 0.7832 | 0.5880 |
| 0.1337 | 5.0 | 2675 | 0.9164 | 0.5880 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ankurani/roberta-base-finetuned-ner | 75242e92d822e004c4f69f9647c28143aa67194a | 2022-07-09T07:01:32.000Z | [
"pytorch",
"roberta",
"token-classification",
"dataset:plod-filtered",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | ankurani | null | ankurani/roberta-base-finetuned-ner | 6 | null | transformers | 15,651 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- plod-filtered
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: plod-filtered
type: plod-filtered
args: PLODfiltered
metrics:
- name: Precision
type: precision
value: 0.9626409382419665
- name: Recall
type: recall
value: 0.9524847822076014
- name: F1
type: f1
value: 0.9575359305291788
- name: Accuracy
type: accuracy
value: 0.9534751355294295
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the plod-filtered dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1152
- Precision: 0.9626
- Recall: 0.9525
- F1: 0.9575
- Accuracy: 0.9535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1023 | 0.5 | 7000 | 0.1345 | 0.9601 | 0.9507 | 0.9554 | 0.9512 |
| 0.1166 | 0.99 | 14000 | 0.1152 | 0.9626 | 0.9525 | 0.9575 | 0.9535 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
FabianWillner/distilbert-base-uncased-finetuned-squad | 0a238477fe98d0b0fdfd6f3ed7800130961c459e | 2022-06-12T12:09:32.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | FabianWillner | null | FabianWillner/distilbert-base-uncased-finetuned-squad | 6 | null | transformers | 15,652 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
metrics:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [FabianWillner/distilbert-base-uncased-finetuned-squad](https://huggingface.co/FabianWillner/distilbert-base-uncased-finetuned-squad) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jxuhf/Fine-tuning-text-classification-model-Habana-Gaudi | dff06b4918b33804610527d6d0f1d30d5ea215c7 | 2022-05-10T19:39:44.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jxuhf | null | jxuhf/Fine-tuning-text-classification-model-Habana-Gaudi | 6 | null | transformers | 15,653 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8823529411764706
- name: F1
type: f1
value: 0.9180887372013652
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mrpc
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking](https://huggingface.co/bert-large-uncased-whole-word-masking) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3680
- Accuracy: 0.8824
- F1: 0.9181
- Combined Score: 0.9002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0a0+gitfe03f8c
- Datasets 2.1.0
- Tokenizers 0.12.1
|
selimonder/gptj-bswiki-2 | b9625b40ab6453a15a3e0f90933a1f78521bddc5 | 2022-05-10T09:07:06.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
]
| text-generation | false | selimonder | null | selimonder/gptj-bswiki-2 | 6 | null | transformers | 15,654 | Entry not found |
lucifermorninstar011/autotrain-defector_ner_multi-847927015 | a6f3d1a3064b456b44cbce357f18c36a41744b28 | 2022-05-10T13:41:09.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:lucifermorninstar011/autotrain-data-defector_ner_multi",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
]
| token-classification | false | lucifermorninstar011 | null | lucifermorninstar011/autotrain-defector_ner_multi-847927015 | 6 | null | transformers | 15,655 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- lucifermorninstar011/autotrain-data-defector_ner_multi
co2_eq_emissions: 132.80014666099797
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 847927015
- CO2 Emissions (in grams): 132.80014666099797
## Validation Metrics
- Loss: 0.028013793751597404
- Accuracy: 0.9904516251523853
- Precision: 0.9457584194138717
- Recall: 0.9496542594882692
- F1: 0.9477023356871265
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lucifermorninstar011/autotrain-defector_ner_multi-847927015
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("lucifermorninstar011/autotrain-defector_ner_multi-847927015", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lucifermorninstar011/autotrain-defector_ner_multi-847927015", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
florentgbelidji/all-mpnet-base-v2__tweet_eval_emotion__classifier | 68d7515b7e668d4b5a9dde0889854b0b89c7700c | 2022-05-10T13:45:01.000Z | [
"pytorch",
"tensorboard",
"mpnet",
"text-classification",
"transformers"
]
| text-classification | false | florentgbelidji | null | florentgbelidji/all-mpnet-base-v2__tweet_eval_emotion__classifier | 6 | null | transformers | 15,656 | Entry not found |
sismetanin/ruroberta-ru-rusentitweet | b99bcb610bf76c41dc242490374e7dad804d6acf | 2022-05-10T23:58:12.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | sismetanin | null | sismetanin/ruroberta-ru-rusentitweet | 6 | null | transformers | 15,657 | Evaluation results (classification report):

| Class | Precision | Recall | F1-score | Support |
|:---|:---:|:---:|:---:|:---:|
| negative | 0.733238 | 0.778788 | 0.755327 | 660 |
| neutral | 0.757962 | 0.779963 | 0.768805 | 1068 |
| positive | 0.722793 | 0.728778 | 0.725773 | 483 |
| skip | 0.660714 | 0.501355 | 0.570108 | 369 |
| speech | 0.767857 | 0.868687 | 0.815166 | 99 |
| accuracy | | | 0.735349 | 2679 |
| macro avg | 0.728513 | 0.731514 | 0.727036 | 2679 |
| weighted avg | 0.732501 | 0.735349 | 0.732071 | 2679 |

- Avg macro Precision: 0.7297744200895245
- Avg macro Recall: 0.7248163039465004
- Avg macro F1: 0.7229310729744304
- Avg weighted F1: 0.7281243075011377 |
masakhane/m2m100_418M_en_twi_rel | 5641e30a65f76f7329349ec318821a62a402c0da | 2022-05-12T12:40:22.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
]
| text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_twi_rel | 6 | null | transformers | 15,658 | ---
license: afl-3.0
---
|
Dim0n4Nk/clip-roberta-finetuned | 51522a3dd1267f8335f9864803e4d560db5b469d | 2022-06-10T13:02:21.000Z | [
"pytorch",
"vision-text-dual-encoder",
"feature-extraction",
"transformers"
]
| feature-extraction | false | Dim0n4Nk | null | Dim0n4Nk/clip-roberta-finetuned | 6 | null | transformers | 15,659 | Entry not found |
enoriega/kw_pubmed_10000_0.00006 | 22de6a7908f681f7a696b91b9553278fbe7326b5 | 2022-05-12T14:21:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | enoriega | null | enoriega/kw_pubmed_10000_0.00006 | 6 | null | transformers | 15,660 | Entry not found |
DioLiu/distilbert-base-uncased-finetuned-sst2-shake-wiki-update-shuffle | 878ccd2e26bd44642cd44c675892ae9f7bbc4e69 | 2022-05-12T11:04:41.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | DioLiu | null | DioLiu/distilbert-base-uncased-finetuned-sst2-shake-wiki-update-shuffle | 6 | null | transformers | 15,661 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-shake-wiki-update-shuffle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-shake-wiki-update-shuffle
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0284
- Accuracy: 0.9971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0166 | 1.0 | 7783 | 0.0135 | 0.9965 |
| 0.0091 | 2.0 | 15566 | 0.0172 | 0.9968 |
| 0.0059 | 3.0 | 23349 | 0.0223 | 0.9968 |
| 0.0 | 4.0 | 31132 | 0.0332 | 0.9962 |
| 0.0001 | 5.0 | 38915 | 0.0284 | 0.9971 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
lucifermorninstar011/autotrain-lucifer_morningstar_job-859227344 | 6f31636618e062d7133cd6cc7c99df7eb89ddaf9 | 2022-05-12T12:09:44.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:lucifermorninstar011/autotrain-data-lucifer_morningstar_job",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
]
| token-classification | false | lucifermorninstar011 | null | lucifermorninstar011/autotrain-lucifer_morningstar_job-859227344 | 6 | null | transformers | 15,662 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- lucifermorninstar011/autotrain-data-lucifer_morningstar_job
co2_eq_emissions: 40.47286384195961
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 859227344
- CO2 Emissions (in grams): 40.47286384195961
## Validation Metrics
- Loss: 0.05327404662966728
- Accuracy: 0.9856485474332406
- Precision: 0.9272604680928872
- Recall: 0.9327554791725343
- F1: 0.9299998567273666
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lucifermorninstar011/autotrain-lucifer_morningstar_job-859227344
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("lucifermorninstar011/autotrain-lucifer_morningstar_job-859227344", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lucifermorninstar011/autotrain-lucifer_morningstar_job-859227344", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
aiko/maeve-12-6-samsum | aa43b8f20948e08c7b759f3d678d04b77e076c89 | 2022-05-16T13:56:13.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:samsum",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
]
| text2text-generation | false | aiko | null | aiko/maeve-12-6-samsum | 6 | null | transformers | 15,663 | ---
language:
- en
tags:
- text2text-generation
- pytorch
license: "gpl-3.0"
datasets:
- samsum
widget:
- text: "Ruben has forgotten what the homework was. Alex tells him to ask the teacher."
example_title: "I forgot my homework"
- text: "Mac is lost at the zoo. Frank says he is at the gorilla exhibit. Charlie is going to see the minks."
example_title: "Very sunny"
- text: "Mac has started to date Dennis's mother. Dennis is going to beat him up."
example_title: "Not very sunny"
---
# Maeve - SAMSum
Maeve is a language model similar to BART in structure, but specially trained using a CAT (Conditionally Adversarial Transformer).
This allows the model to learn to create long-form text from short entries with a degree of control and coherence that is impossible to achieve with traditional transformers.
This specific model has been trained on the SAMSum dataset, and can invert summaries into full-length news articles. Feel free to try examples on the right!
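A minimal sketch of loading the model with the `text2text-generation` pipeline (the generation settings are illustrative assumptions, not tuned values):
```python
from transformers import pipeline

# Load the summary-to-article ("inverse summarization") model
generator = pipeline("text2text-generation", model="aiko/maeve-12-6-samsum")

# Expand one of the widget summaries above into longer text
summary = "Ruben has forgotten what the homework was. Alex tells him to ask the teacher."
print(generator(summary, max_length=256)[0]["generated_text"])
```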
|
juniorrios/distilbert_jur | 3ff469504b812824b9a31c7005135c6612fe8dad | 2022-05-14T03:02:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | juniorrios | null | juniorrios/distilbert_jur | 6 | null | transformers | 15,664 | ---
tags:
- generated_from_trainer
model-index:
- name: distilbert_jur
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_jur
This model is a fine-tuned version of [adalbertojunior/distilbert-portuguese-cased](https://huggingface.co/adalbertojunior/distilbert-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1800
- Cnpj Do Réu Precision: 0.4312
- Cnpj Do Réu Recall: 0.8545
- Cnpj Do Réu F1: 0.5732
- Cnpj Do Réu Number: 55
- Cpf Do Autor Precision: 0.3889
- Cpf Do Autor Recall: 0.8140
- Cpf Do Autor F1: 0.5263
- Cpf Do Autor Number: 43
- Data Da Petição Precision: 0.5294
- Data Da Petição Recall: 0.8780
- Data Da Petição F1: 0.6606
- Data Da Petição Number: 41
- Data Do Contrato Precision: 0.0
- Data Do Contrato Recall: 0.0
- Data Do Contrato F1: 0.0
- Data Do Contrato Number: 6
- Data Dos Fatos Precision: 0.1333
- Data Dos Fatos Recall: 0.2222
- Data Dos Fatos F1: 0.1667
- Data Dos Fatos Number: 9
- Datas Precision: 0.4282
- Datas Recall: 0.76
- Datas F1: 0.5477
- Datas Number: 200
- Jurisprudência Precision: 0.4088
- Jurisprudência Recall: 0.7475
- Jurisprudência F1: 0.5286
- Jurisprudência Number: 99
- Normativo Precision: 0.4337
- Normativo Recall: 0.7912
- Normativo F1: 0.5603
- Normativo Number: 637
- Valor Da Causa Precision: 0.5970
- Valor Da Causa Recall: 0.9091
- Valor Da Causa F1: 0.7207
- Valor Da Causa Number: 44
- Valor Da Multa – Tutela Provisória Precision: 0.4545
- Valor Da Multa – Tutela Provisória Recall: 0.625
- Valor Da Multa – Tutela Provisória F1: 0.5263
- Valor Da Multa – Tutela Provisória Number: 8
- Valor Dano Moral Precision: 0.4
- Valor Dano Moral Recall: 0.7097
- Valor Dano Moral F1: 0.5116
- Valor Dano Moral Number: 31
- Valor Danos Materiais/restituição Em Dobro Precision: 0.32
- Valor Danos Materiais/restituição Em Dobro Recall: 0.6486
- Valor Danos Materiais/restituição Em Dobro F1: 0.4286
- Valor Danos Materiais/restituição Em Dobro Number: 37
- Valores Precision: 0.4300
- Valores Recall: 0.7607
- Valores F1: 0.5494
- Valores Number: 351
- Overall Precision: 0.4291
- Overall Recall: 0.7739
- Overall F1: 0.5521
- Overall Accuracy: 0.9704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cnpj Do Réu Precision | Cnpj Do Réu Recall | Cnpj Do Réu F1 | Cnpj Do Réu Number | Cpf Do Autor Precision | Cpf Do Autor Recall | Cpf Do Autor F1 | Cpf Do Autor Number | Data Da Petição Precision | Data Da Petição Recall | Data Da Petição F1 | Data Da Petição Number | Data Do Contrato Precision | Data Do Contrato Recall | Data Do Contrato F1 | Data Do Contrato Number | Data Dos Fatos Precision | Data Dos Fatos Recall | Data Dos Fatos F1 | Data Dos Fatos Number | Datas Precision | Datas Recall | Datas F1 | Datas Number | Jurisprudência Precision | Jurisprudência Recall | Jurisprudência F1 | Jurisprudência Number | Normativo Precision | Normativo Recall | Normativo F1 | Normativo Number | Valor Da Causa Precision | Valor Da Causa Recall | Valor Da Causa F1 | Valor Da Causa Number | Valor Da Multa – Tutela Provisória Precision | Valor Da Multa – Tutela Provisória Recall | Valor Da Multa – Tutela Provisória F1 | Valor Da Multa – Tutela Provisória Number | Valor Dano Moral Precision | Valor Dano Moral Recall | Valor Dano Moral F1 | Valor Dano Moral Number | Valor Danos Materiais/restituição Em Dobro Precision | Valor Danos Materiais/restituição Em Dobro Recall | Valor Danos Materiais/restituição Em Dobro F1 | Valor Danos Materiais/restituição Em Dobro Number | Valores Precision | Valores Recall | Valores F1 | Valores Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:-------------------------:|:----------------------:|:------------------:|:----------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:------------------------:|:---------------------:|:-----------------:|:---------------------:|:---------------:|:------------:|:--------:|:------------:|:------------------------:|:---------------------:|:-----------------:|:---------------------:|:-------------------:|:----------------:|:------------:|:----------------:|:------------------------:|:---------------------:|:-----------------:|:---------------------:|:--------------------------------------------:|:-----------------------------------------:|:-------------------------------------:|:-----------------------------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:----------------------------------------------------:|:-------------------------------------------------:|:---------------------------------------------:|:-------------------------------------------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.8463 | 1.0 | 561 | 0.0846 | 0.2183 | 0.5636 | 0.3147 | 55 | 0.0 | 0.0 | 0.0 | 43 | 0.44 | 0.8049 | 0.5690 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.0 | 0.0 | 0.0 | 9 | 0.3092 | 0.705 | 0.4299 | 200 | 0.1087 | 0.3030 | 0.16 | 99 | 0.2691 | 0.6923 | 0.3875 | 637 | 0.0 | 0.0 | 0.0 | 44 | 0.0 | 0.0 | 0.0 | 8 | 0.0 | 0.0 | 0.0 | 31 | 0.0 | 0.0 | 0.0 | 37 | 0.3630 | 0.9060 | 0.5183 | 351 | 0.2844 | 0.6368 | 0.3932 | 0.9666 |
| 0.0953 | 2.0 | 1122 | 0.1019 | 0.4087 | 0.8545 | 0.5529 | 55 | 0.33 | 0.7674 | 0.4615 | 43 | 0.4875 | 0.9512 | 0.6446 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.0 | 0.0 | 0.0 | 9 | 0.3647 | 0.93 | 0.5239 | 200 | 0.3109 | 0.7475 | 0.4392 | 99 | 0.3446 | 0.8148 | 0.4844 | 637 | 0.5538 | 0.8182 | 0.6606 | 44 | 0.0 | 0.0 | 0.0 | 8 | 0.0 | 0.0 | 0.0 | 31 | 0.0 | 0.0 | 0.0 | 37 | 0.4091 | 0.9174 | 0.5659 | 351 | 0.3693 | 0.8046 | 0.5062 | 0.9660 |
| 0.0713 | 3.0 | 1683 | 0.0842 | 0.3983 | 0.8545 | 0.5434 | 55 | 0.3689 | 0.8837 | 0.5205 | 43 | 0.4535 | 0.9512 | 0.6142 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.0 | 0.0 | 0.0 | 9 | 0.3635 | 0.925 | 0.5219 | 200 | 0.3303 | 0.7374 | 0.4562 | 99 | 0.3957 | 0.8399 | 0.5380 | 637 | 0.5467 | 0.9318 | 0.6891 | 44 | 0.0 | 0.0 | 0.0 | 8 | 0.2683 | 0.7097 | 0.3894 | 31 | 0.19 | 0.5135 | 0.2774 | 37 | 0.4435 | 0.8490 | 0.5826 | 351 | 0.3901 | 0.8309 | 0.5309 | 0.9693 |
| 0.0666 | 4.0 | 2244 | 0.0855 | 0.4052 | 0.8545 | 0.5497 | 55 | 0.3590 | 0.9767 | 0.5250 | 43 | 0.4937 | 0.9512 | 0.65 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.0 | 0.0 | 0.0 | 9 | 0.3792 | 0.91 | 0.5353 | 200 | 0.3504 | 0.8283 | 0.4925 | 99 | 0.3993 | 0.8650 | 0.5464 | 637 | 0.5882 | 0.9091 | 0.7143 | 44 | 0.0 | 0.0 | 0.0 | 8 | 0.3704 | 0.6452 | 0.4706 | 31 | 0.2429 | 0.4595 | 0.3178 | 37 | 0.4624 | 0.8946 | 0.6097 | 351 | 0.4063 | 0.8546 | 0.5508 | 0.9688 |
| 0.0578 | 5.0 | 2805 | 0.0812 | 0.4381 | 0.8364 | 0.5750 | 55 | 0.3535 | 0.8140 | 0.4930 | 43 | 0.5571 | 0.9512 | 0.7027 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.0 | 0.0 | 0.0 | 9 | 0.3827 | 0.865 | 0.5307 | 200 | 0.3689 | 0.7677 | 0.4984 | 99 | 0.4083 | 0.8493 | 0.5515 | 637 | 0.5429 | 0.8636 | 0.6667 | 44 | 0.0 | 0.0 | 0.0 | 8 | 0.3239 | 0.7419 | 0.4510 | 31 | 0.2958 | 0.5676 | 0.3889 | 37 | 0.4629 | 0.8718 | 0.6047 | 351 | 0.4130 | 0.8315 | 0.5519 | 0.9698 |
| 0.0527 | 6.0 | 3366 | 0.0892 | 0.4312 | 0.8545 | 0.5732 | 55 | 0.3448 | 0.9302 | 0.5031 | 43 | 0.5333 | 0.9756 | 0.6897 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.1053 | 0.4444 | 0.1702 | 9 | 0.3859 | 0.795 | 0.5196 | 200 | 0.3433 | 0.8081 | 0.4819 | 99 | 0.4114 | 0.8352 | 0.5513 | 637 | 0.5634 | 0.9091 | 0.6957 | 44 | 0.0 | 0.0 | 0.0 | 8 | 0.2947 | 0.9032 | 0.4444 | 31 | 0.2556 | 0.6216 | 0.3622 | 37 | 0.4526 | 0.8291 | 0.5855 | 351 | 0.4030 | 0.8225 | 0.5410 | 0.9702 |
| 0.0507 | 7.0 | 3927 | 0.0854 | 0.4167 | 0.8182 | 0.5521 | 55 | 0.3737 | 0.8605 | 0.5211 | 43 | 0.5507 | 0.9268 | 0.6909 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.0 | 0.0 | 0.0 | 9 | 0.415 | 0.83 | 0.5533 | 200 | 0.325 | 0.7879 | 0.4602 | 99 | 0.4103 | 0.8226 | 0.5475 | 637 | 0.6 | 0.9545 | 0.7368 | 44 | 0.3 | 0.375 | 0.3333 | 8 | 0.375 | 0.6774 | 0.4828 | 31 | 0.2632 | 0.5405 | 0.3540 | 37 | 0.4559 | 0.8689 | 0.5980 | 351 | 0.4151 | 0.8193 | 0.5511 | 0.9704 |
| 0.0469 | 8.0 | 4488 | 0.0896 | 0.4393 | 0.8545 | 0.5802 | 55 | 0.3776 | 0.8605 | 0.5248 | 43 | 0.5429 | 0.9268 | 0.6847 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.2353 | 0.4444 | 0.3077 | 9 | 0.4208 | 0.81 | 0.5538 | 200 | 0.3206 | 0.6768 | 0.4351 | 99 | 0.4137 | 0.8273 | 0.5515 | 637 | 0.6087 | 0.9545 | 0.7434 | 44 | 0.3846 | 0.625 | 0.4762 | 8 | 0.4082 | 0.6452 | 0.5000 | 31 | 0.2989 | 0.7027 | 0.4194 | 37 | 0.4581 | 0.8575 | 0.5972 | 351 | 0.4204 | 0.8174 | 0.5553 | 0.9704 |
| 0.0389 | 9.0 | 5049 | 0.0961 | 0.3853 | 0.7636 | 0.5122 | 55 | 0.375 | 0.8372 | 0.5180 | 43 | 0.5205 | 0.9268 | 0.6667 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.1875 | 0.3333 | 0.2400 | 9 | 0.4015 | 0.805 | 0.5358 | 200 | 0.3641 | 0.7172 | 0.4830 | 99 | 0.4279 | 0.8053 | 0.5588 | 637 | 0.5694 | 0.9318 | 0.7069 | 44 | 0.4167 | 0.625 | 0.5 | 8 | 0.375 | 0.7742 | 0.5053 | 31 | 0.2857 | 0.6486 | 0.3967 | 37 | 0.4551 | 0.8376 | 0.5898 | 351 | 0.4220 | 0.8020 | 0.5530 | 0.9708 |
| 0.037 | 10.0 | 5610 | 0.1132 | 0.4312 | 0.8545 | 0.5732 | 55 | 0.3663 | 0.8605 | 0.5139 | 43 | 0.5588 | 0.9268 | 0.6972 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.2857 | 0.4444 | 0.3478 | 9 | 0.4194 | 0.755 | 0.5393 | 200 | 0.3684 | 0.7071 | 0.4844 | 99 | 0.4339 | 0.7991 | 0.5624 | 637 | 0.5970 | 0.9091 | 0.7207 | 44 | 0.4545 | 0.625 | 0.5263 | 8 | 0.4 | 0.6452 | 0.4938 | 31 | 0.2809 | 0.6757 | 0.3968 | 37 | 0.4458 | 0.8091 | 0.5749 | 351 | 0.4287 | 0.7880 | 0.5553 | 0.9707 |
| 0.0343 | 11.0 | 6171 | 0.1247 | 0.4404 | 0.8727 | 0.5854 | 55 | 0.37 | 0.8605 | 0.5175 | 43 | 0.5672 | 0.9268 | 0.7037 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.1905 | 0.4444 | 0.2667 | 9 | 0.3980 | 0.79 | 0.5293 | 200 | 0.3491 | 0.7475 | 0.4759 | 99 | 0.4318 | 0.8100 | 0.5633 | 637 | 0.5970 | 0.9091 | 0.7207 | 44 | 0.4167 | 0.625 | 0.5 | 8 | 0.3962 | 0.6774 | 0.5 | 31 | 0.2989 | 0.7027 | 0.4194 | 37 | 0.4466 | 0.8462 | 0.5846 | 351 | 0.4235 | 0.8097 | 0.5561 | 0.9702 |
| 0.0315 | 12.0 | 6732 | 0.1284 | 0.4393 | 0.8545 | 0.5802 | 55 | 0.3627 | 0.8605 | 0.5103 | 43 | 0.5312 | 0.8293 | 0.6476 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.2857 | 0.4444 | 0.3478 | 9 | 0.4306 | 0.775 | 0.5536 | 200 | 0.3687 | 0.7374 | 0.4916 | 99 | 0.4275 | 0.8053 | 0.5585 | 637 | 0.5882 | 0.9091 | 0.7143 | 44 | 0.5455 | 0.75 | 0.6316 | 8 | 0.375 | 0.7742 | 0.5053 | 31 | 0.2718 | 0.7568 | 0.4000 | 37 | 0.4380 | 0.7550 | 0.5544 | 351 | 0.4233 | 0.7854 | 0.5501 | 0.9703 |
| 0.0294 | 13.0 | 7293 | 0.1179 | 0.4352 | 0.8545 | 0.5767 | 55 | 0.3673 | 0.8372 | 0.5106 | 43 | 0.5152 | 0.8293 | 0.6355 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.2 | 0.3333 | 0.25 | 9 | 0.4026 | 0.775 | 0.5299 | 200 | 0.3923 | 0.7172 | 0.5071 | 99 | 0.4230 | 0.7802 | 0.5486 | 637 | 0.5970 | 0.9091 | 0.7207 | 44 | 0.4545 | 0.625 | 0.5263 | 8 | 0.4182 | 0.7419 | 0.5349 | 31 | 0.2632 | 0.5405 | 0.3540 | 37 | 0.4352 | 0.8034 | 0.5646 | 351 | 0.4202 | 0.7771 | 0.5454 | 0.9705 |
| 0.0288 | 14.0 | 7854 | 0.1260 | 0.4299 | 0.8364 | 0.5679 | 55 | 0.3889 | 0.8140 | 0.5263 | 43 | 0.5303 | 0.8537 | 0.6542 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.1579 | 0.3333 | 0.2143 | 9 | 0.4131 | 0.725 | 0.5263 | 200 | 0.3791 | 0.6970 | 0.4911 | 99 | 0.4332 | 0.7692 | 0.5543 | 637 | 0.5970 | 0.9091 | 0.7207 | 44 | 0.4545 | 0.625 | 0.5263 | 8 | 0.4259 | 0.7419 | 0.5412 | 31 | 0.2903 | 0.4865 | 0.3636 | 37 | 0.4373 | 0.7949 | 0.5642 | 351 | 0.4272 | 0.7611 | 0.5472 | 0.9710 |
| 0.0272 | 15.0 | 8415 | 0.1348 | 0.4175 | 0.7818 | 0.5443 | 55 | 0.3977 | 0.8140 | 0.5344 | 43 | 0.5606 | 0.9024 | 0.6916 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.2353 | 0.4444 | 0.3077 | 9 | 0.4129 | 0.77 | 0.5375 | 200 | 0.3763 | 0.7071 | 0.4912 | 99 | 0.4282 | 0.7488 | 0.5448 | 637 | 0.5882 | 0.9091 | 0.7143 | 44 | 0.5556 | 0.625 | 0.5882 | 8 | 0.4231 | 0.7097 | 0.5301 | 31 | 0.2812 | 0.4865 | 0.3564 | 37 | 0.4281 | 0.8063 | 0.5593 | 351 | 0.4238 | 0.7611 | 0.5445 | 0.9709 |
| 0.025 | 16.0 | 8976 | 0.1656 | 0.4537 | 0.8909 | 0.6012 | 55 | 0.3854 | 0.8605 | 0.5324 | 43 | 0.5606 | 0.9024 | 0.6916 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.1765 | 0.3333 | 0.2308 | 9 | 0.4330 | 0.76 | 0.5517 | 200 | 0.3719 | 0.7475 | 0.4966 | 99 | 0.4176 | 0.8038 | 0.5497 | 637 | 0.5970 | 0.9091 | 0.7207 | 44 | 0.4545 | 0.625 | 0.5263 | 8 | 0.4444 | 0.7742 | 0.5647 | 31 | 0.2903 | 0.7297 | 0.4154 | 37 | 0.4311 | 0.7578 | 0.5496 | 351 | 0.4216 | 0.7854 | 0.5487 | 0.9701 |
| 0.0229 | 17.0 | 9537 | 0.1802 | 0.4312 | 0.8545 | 0.5732 | 55 | 0.3854 | 0.8605 | 0.5324 | 43 | 0.5606 | 0.9024 | 0.6916 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.1111 | 0.2222 | 0.1481 | 9 | 0.4097 | 0.76 | 0.5324 | 200 | 0.3756 | 0.7475 | 0.5000 | 99 | 0.4184 | 0.8006 | 0.5496 | 637 | 0.5882 | 0.9091 | 0.7143 | 44 | 0.5 | 0.625 | 0.5556 | 8 | 0.4444 | 0.7742 | 0.5647 | 31 | 0.3288 | 0.6486 | 0.4364 | 37 | 0.4361 | 0.7578 | 0.5536 | 351 | 0.4204 | 0.7803 | 0.5464 | 0.9699 |
| 0.0214 | 18.0 | 10098 | 0.1728 | 0.4393 | 0.8545 | 0.5802 | 55 | 0.3956 | 0.8372 | 0.5373 | 43 | 0.5373 | 0.8780 | 0.6667 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.125 | 0.2222 | 0.16 | 9 | 0.425 | 0.765 | 0.5464 | 200 | 0.4088 | 0.7475 | 0.5286 | 99 | 0.4330 | 0.7912 | 0.5597 | 637 | 0.5970 | 0.9091 | 0.7207 | 44 | 0.5 | 0.625 | 0.5556 | 8 | 0.4074 | 0.7097 | 0.5176 | 31 | 0.3171 | 0.7027 | 0.4370 | 37 | 0.4277 | 0.7664 | 0.5490 | 351 | 0.4282 | 0.7777 | 0.5523 | 0.9705 |
| 0.0211 | 19.0 | 10659 | 0.1710 | 0.4151 | 0.8 | 0.5466 | 55 | 0.4091 | 0.8372 | 0.5496 | 43 | 0.5294 | 0.8780 | 0.6606 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.1333 | 0.2222 | 0.1667 | 9 | 0.4298 | 0.75 | 0.5464 | 200 | 0.4111 | 0.7475 | 0.5305 | 99 | 0.4308 | 0.7771 | 0.5543 | 637 | 0.5970 | 0.9091 | 0.7207 | 44 | 0.5 | 0.625 | 0.5556 | 8 | 0.3704 | 0.6452 | 0.4706 | 31 | 0.3143 | 0.5946 | 0.4112 | 37 | 0.4299 | 0.7692 | 0.5516 | 351 | 0.4280 | 0.7649 | 0.5488 | 0.9707 |
| 0.0204 | 20.0 | 11220 | 0.1800 | 0.4312 | 0.8545 | 0.5732 | 55 | 0.3889 | 0.8140 | 0.5263 | 43 | 0.5294 | 0.8780 | 0.6606 | 41 | 0.0 | 0.0 | 0.0 | 6 | 0.1333 | 0.2222 | 0.1667 | 9 | 0.4282 | 0.76 | 0.5477 | 200 | 0.4088 | 0.7475 | 0.5286 | 99 | 0.4337 | 0.7912 | 0.5603 | 637 | 0.5970 | 0.9091 | 0.7207 | 44 | 0.4545 | 0.625 | 0.5263 | 8 | 0.4 | 0.7097 | 0.5116 | 31 | 0.32 | 0.6486 | 0.4286 | 37 | 0.4300 | 0.7607 | 0.5494 | 351 | 0.4291 | 0.7739 | 0.5521 | 0.9704 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Jeevesh8/6ep_bert_ft_cola-2 | 5b24ef49bca3d1975592ddee782b0ea246385fff | 2022-05-14T11:36:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-2 | 6 | null | transformers | 15,665 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-13 | a2eeba50a3dc0fe15b1c9d3621a96291b26fb0ae | 2022-05-14T11:54:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-13 | 6 | null | transformers | 15,666 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-49 | d4e2a1e875232d7991d2684304079b8b21622309 | 2022-05-14T13:20:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-49 | 6 | null | transformers | 15,667 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-53 | ad95b1f6bd73614775b652ced71a9e1cdf756b01 | 2022-05-14T13:27:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-53 | 6 | null | transformers | 15,668 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-59 | 6e5edd651f723459095ab762ec99e3129e9f85bd | 2022-05-14T13:37:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-59 | 6 | null | transformers | 15,669 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-66 | 7ee5da6b97703fae160c3a329cc2a5e7dd055173 | 2022-05-14T13:49:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-66 | 6 | null | transformers | 15,670 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-71 | 6d5cdd5b6ecdd25ae76246f7972f6d4aabfbb883 | 2022-05-14T13:57:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-71 | 6 | null | transformers | 15,671 | Entry not found |
prashanth/mbart-large-cc25-ge-en-to-hi | 672d5bf90254dc792c5712aeda95dab03a3cfd80 | 2022-05-15T17:11:05.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"dataset:hindi_english_machine_translation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | prashanth | null | prashanth/mbart-large-cc25-ge-en-to-hi | 6 | null | transformers | 15,672 | ---
tags:
- generated_from_trainer
datasets:
- hindi_english_machine_translation
metrics:
- bleu
model-index:
- name: mbart-large-cc25-ge-en-to-hi
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: hindi_english_machine_translation
type: hindi_english_machine_translation
args: hi-en
metrics:
- name: Bleu
type: bleu
value: 4.5974
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-ge-en-to-hi
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the hindi_english_machine_translation dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3397
- Bleu: 4.5974
- Gen Len: 66.244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:-------:|
| 1.4602 | 1.0 | 135739 | 1.3397 | 4.5974 | 66.244 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu102
- Datasets 1.18.0
- Tokenizers 0.12.1
|
huggingtweets/dclblogger-loopifyyy | e9fcb0226d5e7377448cce28ba1a5f8963cf6de9 | 2022-05-15T15:32:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/dclblogger-loopifyyy | 6 | null | transformers | 15,673 | ---
language: en
thumbnail: http://www.huggingtweets.com/dclblogger-loopifyyy/1652628765621/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1472740175130230784/L7Xcs7wJ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1480550067564163078/D90SnyUa_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Matty & Loopify 🧙♂️</div>
<div style="text-align: center; font-size: 14px;">@dclblogger-loopifyyy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Matty & Loopify 🧙♂️.
| Data | Matty | Loopify 🧙♂️ |
| --- | --- | --- |
| Tweets downloaded | 3250 | 3250 |
| Retweets | 62 | 117 |
| Short tweets | 494 | 867 |
| Tweets kept | 2694 | 2266 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1pq5pxck/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dclblogger-loopifyyy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/as5uacn5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/as5uacn5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dclblogger-loopifyyy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
aliosm/sha3bor-rhyme-detector-arabertv2-base | 444485e24d8dd5277070181d1ebcbe2ce21101d3 | 2022-05-28T09:33:47.000Z | [
"pytorch",
"bert",
"text-classification",
"ar",
"transformers",
"license:mit"
]
| text-classification | false | aliosm | null | aliosm/sha3bor-rhyme-detector-arabertv2-base | 6 | null | transformers | 15,674 | ---
language: ar
license: mit
widget:
- text: "إن العيون التي في طرفها حور [شطر] قتلننا ثم لم يحيين قتلانا"
- text: "إذا ما فعلت الخير ضوعف شرهم [شطر] وكل إناء بالذي فيه ينضح"
- text: "واحر قلباه ممن قلبه شبم [شطر] ومن بجسمي وحالي عنده سقم"
---
|
PSW/cnndm_0.1percent_randomsimins_seed42 | a4d7d40fcba428d1e6ccc9b76b317f7b318a973b | 2022-05-16T03:24:35.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_randomsimins_seed42 | 6 | null | transformers | 15,675 | Entry not found |
fancyerii/bert-finetuned-ner | c6a72c920350181f1d7d08408c545a8d9e19923b | 2022-05-16T05:35:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | fancyerii | null | fancyerii/bert-finetuned-ner | 6 | null | transformers | 15,676 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9387755102040817
- name: Recall
type: recall
value: 0.9522046449007069
- name: F1
type: f1
value: 0.9454423928481912
- name: Accuracy
type: accuracy
value: 0.9869606169423677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0592
- Precision: 0.9388
- Recall: 0.9522
- F1: 0.9454
- Accuracy: 0.9870
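As a quick illustration, a minimal inference sketch using the standard token-classification pipeline is given below (the example sentence is a hypothetical input, not taken from the training data):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the standard token-classification pipeline;
# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="fancyerii/bert-finetuned-ner",
    aggregation_strategy="simple",
)

# Hypothetical sentence; the model predicts CoNLL-2003 entity types
# (PER, ORG, LOC, MISC) with character offsets and confidence scores.
print(ner("Wolfgang lives in Berlin and works for the United Nations."))
```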
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0857 | 1.0 | 1756 | 0.0635 | 0.9121 | 0.9359 | 0.9238 | 0.9830 |
| 0.0318 | 2.0 | 3512 | 0.0586 | 0.9245 | 0.9465 | 0.9354 | 0.9857 |
| 0.0222 | 3.0 | 5268 | 0.0592 | 0.9388 | 0.9522 | 0.9454 | 0.9870 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.6
|
huggingtweets/whoisaddison | 1080e0d6287b3f04d5b6178c423f9b1c946ac57a | 2022-05-16T23:21:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/whoisaddison | 6 | null | transformers | 15,677 | ---
language: en
thumbnail: http://www.huggingtweets.com/whoisaddison/1652743310695/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506656357658812421/_MY3c0Ua_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Addison Rae</div>
<div style="text-align: center; font-size: 14px;">@whoisaddison</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Addison Rae.
| Data | Addison Rae |
| --- | --- |
| Tweets downloaded | 3204 |
| Retweets | 459 |
| Short tweets | 956 |
| Tweets kept | 1789 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qyvvw3o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @whoisaddison's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zojwhval) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zojwhval/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/whoisaddison')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Datasaur/distilbert-base-uncased-finetuned-ag-news | 0e86bbf1460575413714140816dfb0aaa56f711f | 2022-07-29T16:36:20.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:ag-news",
"transformers",
"license:apache-2.0"
]
| text-classification | false | Datasaur | null | Datasaur/distilbert-base-uncased-finetuned-ag-news | 6 | null | transformers | 15,678 | ---
language: en
license: apache-2.0
datasets:
- ag-news
--- |
CEBaB/lstm.CEBaB.absa.exclusive.seed_42 | c81b778b43b166f927172d18f525146b002c08d6 | 2022-05-17T20:08:15.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | CEBaB | null | CEBaB/lstm.CEBaB.absa.exclusive.seed_42 | 6 | null | transformers | 15,679 | Entry not found |
adalbertojunior/rpt | be6e4e1f0703dfe54e0c72436b0238949d48a451 | 2022-05-18T14:05:51.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | adalbertojunior | null | adalbertojunior/rpt | 6 | null | transformers | 15,680 | Entry not found |
nqcccccc/phobert-textclassification | ae9325e706b14d47c16001f015e2d702dc268a99 | 2022-05-18T06:50:15.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | nqcccccc | null | nqcccccc/phobert-textclassification | 6 | null | transformers | 15,681 | Entry not found |
aakorolyova/outcome_similarity | c4040d2a2f33cab84c760ac313dbe268e659a09e | 2022-05-22T15:50:25.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | aakorolyova | null | aakorolyova/outcome_similarity | 6 | null | transformers | 15,682 | <h1>Model description</h1>
This is a fine-tuned BioBERT model for text pair classification, namely for identifying pairs of clinical trial outcome mentions that refer to the same outcome (e.g. "overall survival in patients with oesophageal squamous cell carcinoma and PD-L1 combined positive score (CPS) of 10 or more" and "overall survival" can be considered to refer to the same outcome, while "overall survival" and "progression-free survival" refer to different outcomes).
This is the second version of the model; the original model development was reported in:
Anna Koroleva, Patrick Paroubek. Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations. Journal of Biomedical Informatics – X, 2019 https://www.sciencedirect.com/science/article/pii/S2590177X19300575
The original work was conducted within the scope of the "Assisted authoring for avoiding inadequate claims in scientific reporting" PhD project of the Methods for Research on Research (MiRoR, http://miror-ejd.eu/) program.
Model creator: Anna Koroleva
<h1>Intended uses & limitations</h1>
The model was originally intended to be used as part of a spin (unjustified presentation of trial results) detection pipeline for articles reporting randomised controlled trials (see Anna Koroleva, Sanjay Kamath, Patrick MM Bossuyt, Patrick Paroubek. DeSpin: a prototype system for detecting spin in biomedical publications. Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing. https://aclanthology.org/2020.bionlp-1.5/). It can be used for any task requiring identification of pairs of outcome mentions that refer to the same outcome.
The main limitation is that the model was trained on a fairly small sample of data annotated by a single annotator. Annotating more data or involving more annotators was not possible within the PhD project.
<h1>How to use</h1>
The model should be used with the BioBERT tokeniser. Sample code for getting model predictions is below:
```
from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
import numpy as np

tokenizer = AutoTokenizer.from_pretrained('dmis-lab/biobert-v1.1')
model = AutoModelForSequenceClassification.from_pretrained('aakorolyova/outcome_similarity')

out1 = 'overall survival'
out2 = 'overall survival in patients with oesophageal squamous cell carcinoma and PD-L1 combined positive score (CPS) of 10 or more'

# Encode the two outcome mentions as a sentence pair and run the classifier
tokenized_input = tokenizer(out1, out2, padding="max_length", truncation=True, return_tensors='pt')
output = model(**tokenized_input)['logits']

# Argmax over the two classes gives the predicted label for the pair
output = np.argmax(output.detach().numpy(), axis=1)
print(output)
```
Some more useful functions can be found in our GitHub repository: https://github.com/aakorolyova/DeSpin-2.0
<h1>Training data</h1>
Training data can be found in https://github.com/aakorolyova/DeSpin-2.0/tree/main/data/Outcome_similarity
<h1>Training procedure</h1>
The model was fine-tuned using the Hugging Face Trainer API. Training scripts can be found in https://github.com/aakorolyova/DeSpin-2.0
<h1>Evaluation</h1>
Precision: 86.67%
Recall: 92.86%
F1: 89.66%
|
Jeevesh8/512seq_len_6ep_bert_ft_cola-71 | 361d5df3d7ea826d644f90df1de2b9f71aa8c1e6 | 2022-05-18T18:55:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-71 | 6 | null | transformers | 15,683 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-75 | 5ccff53798827acd151b24702f985880ea041e65 | 2022-05-18T19:02:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-75 | 6 | null | transformers | 15,684 | Entry not found |
charsiu/g2p_multilingual_byT5_tiny_12_layers | 94b9bb017a9683bae47050850c39f740f386e9f2 | 2022-05-19T05:03:07.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | charsiu | null | charsiu/g2p_multilingual_byT5_tiny_12_layers | 6 | null | transformers | 15,685 | Entry not found |
84rry/84rry-xlsr-53-arabic | e338b0a5cb21c932b987beccbf3a6f361a23e365 | 2022-05-19T16:53:41.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | 84rry | null | 84rry/84rry-xlsr-53-arabic | 6 | null | transformers | 15,686 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: 84rry-xlsr-53-arabic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 84rry-xlsr-53-arabic
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0025
- Wer: 0.4977
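As a quick illustration, a minimal transcription sketch using the standard ASR pipeline is given below (the audio path is a placeholder; the input is assumed to be 16 kHz speech, which the pipeline resamples if needed):

```python
from transformers import pipeline

# Standard automatic-speech-recognition pipeline over the fine-tuned CTC checkpoint.
asr = pipeline("automatic-speech-recognition", model="84rry/84rry-xlsr-53-arabic")

# Placeholder file path; returns the decoded Arabic transcription.
print(asr("sample_arabic_utterance.wav")["text"])
```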
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.4906 | 2.25 | 500 | 1.3179 | 0.8390 |
| 0.8851 | 4.5 | 1000 | 0.7385 | 0.6221 |
| 0.6884 | 6.76 | 1500 | 0.7005 | 0.5765 |
| 0.5525 | 9.01 | 2000 | 0.6931 | 0.5610 |
| 0.474 | 11.26 | 2500 | 0.7977 | 0.5560 |
| 0.3976 | 13.51 | 3000 | 0.7750 | 0.5375 |
| 0.343 | 15.76 | 3500 | 0.7553 | 0.5206 |
| 0.2838 | 18.02 | 4000 | 0.8162 | 0.5099 |
| 0.2369 | 20.27 | 4500 | 0.8574 | 0.5124 |
| 0.2298 | 22.52 | 5000 | 0.8848 | 0.5057 |
| 0.1727 | 24.77 | 5500 | 0.9193 | 0.5070 |
| 0.1675 | 27.03 | 6000 | 0.9959 | 0.4988 |
| 0.1457 | 29.28 | 6500 | 1.0025 | 0.4977 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
d4riushbahrami/distilbert-base-uncased-finetuned-emotion | e2991eab772990989c9b9d972b03636473f118d7 | 2022-05-19T20:45:02.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | d4riushbahrami | null | d4riushbahrami/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,687 | Entry not found |
mateocolina/bert-finetuned-ner | 025ea24470606b7d48591a304a0a7143c88e0d0f | 2022-05-30T17:44:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | mateocolina | null | mateocolina/bert-finetuned-ner | 6 | null | transformers | 15,688 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9348582794629537
- name: Recall
type: recall
value: 0.9491753618310333
- name: F1
type: f1
value: 0.9419624217118998
- name: Accuracy
type: accuracy
value: 0.9854889032789781
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0619
- Precision: 0.9349
- Recall: 0.9492
- F1: 0.9420
- Accuracy: 0.9855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.088 | 1.0 | 1756 | 0.0654 | 0.9144 | 0.9403 | 0.9271 | 0.9831 |
| 0.0395 | 2.0 | 3512 | 0.0605 | 0.9274 | 0.9482 | 0.9377 | 0.9851 |
| 0.0213 | 3.0 | 5268 | 0.0619 | 0.9349 | 0.9492 | 0.9420 | 0.9855 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
aware-ai/wav2vec2-base-german | 8b1cd7b5ca65314d057da8e51234f8b713026ca2 | 2022-05-30T05:45:45.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"transformers",
"mozilla-foundation/common_voice_9_0",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | aware-ai | null | aware-ai/wav2vec2-base-german | 6 | null | transformers | 15,689 | ---
language:
- de
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_9_0
- generated_from_trainer
model-index:
- name: wav2vec2-base-german
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-german
This model is a fine-tuned version of [wav2vec2-base-german](https://huggingface.co/wav2vec2-base-german) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3190
- Wer: 0.2659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
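Expressed through the Hugging Face Trainer API, the settings above correspond roughly to the `TrainingArguments` sketch below (an illustrative mapping only; the output directory is a placeholder and omitted arguments keep their defaults):

```python
from transformers import TrainingArguments

# Rough translation of the hyperparameters listed above; the effective train
# batch size is 64 * 16 = 1024 via gradient accumulation, with fp16 enabled.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-german",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,
)
```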
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3527 | 1.0 | 887 | 0.3176 | 0.2658 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.0
- Tokenizers 0.12.1
|
connectivity/feather_berts_25 | d856aeebfb533744881f546ed311a9ea99c8c3d6 | 2022-05-21T14:28:17.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/feather_berts_25 | 6 | null | transformers | 15,690 | Entry not found |
connectivity/feather_berts_32 | 6fd8c4ff80058cd5689dfc6aba681483aa0f0e4a | 2022-05-21T14:28:29.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/feather_berts_32 | 6 | null | transformers | 15,691 | Entry not found |
connectivity/feather_berts_40 | 9abd9d988d7e9b761c73359a27075582459599a1 | 2022-05-21T14:28:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/feather_berts_40 | 6 | null | transformers | 15,692 | Entry not found |
connectivity/feather_berts_43 | 74281c7b48b0dcb5bec05d056b276e55093fe76a | 2022-05-21T14:28:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/feather_berts_43 | 6 | null | transformers | 15,693 | Entry not found |
connectivity/feather_berts_87 | 32481e81a97014a8270aad3838f50e7b7fdd076b | 2022-05-21T14:30:39.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/feather_berts_87 | 6 | null | transformers | 15,694 | Entry not found |
connectivity/bert_ft_qqp-5 | 819f3390f8ec71288314500645eabbb5dd1b43fb | 2022-05-21T16:31:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/bert_ft_qqp-5 | 6 | null | transformers | 15,695 | Entry not found |
connectivity/bert_ft_qqp-9 | 976f3f5993a09ea8bed5f4e001827e4e595f941f | 2022-05-21T16:31:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/bert_ft_qqp-9 | 6 | null | transformers | 15,696 | Entry not found |
connectivity/cola_6ep_ft-38 | 965ed15415a5c08195f240296d2c6c81f409417c | 2022-05-21T16:43:54.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/cola_6ep_ft-38 | 6 | null | transformers | 15,697 | Entry not found |
sanjay-m1/grammar-corrector | bb3fb962e4384f78e3ab70305bf3870718a21b59 | 2022-05-22T09:49:54.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | sanjay-m1 | null | sanjay-m1/grammar-corrector | 6 | null | transformers | 15,698 | ## Model description
T5 model trained for grammar correction. This model corrects grammatical mistakes in input sentences.
### Dataset Description
The T5-base model was trained on the C4_200M dataset.
### Model in Action 🚀
```
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'deep-learning-analytics/GrammarCorrector'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).to(torch_device)
def correct_grammar(input_text, num_return_sequences, num_beams=10):
    # Tokenize the input sentence and move it to the same device as the model
    batch = tokenizer([input_text], truncation=True, padding='max_length', max_length=64, return_tensors="pt").to(torch_device)
    # Beam search decoding; num_beams must be >= num_return_sequences
    translated = model.generate(**batch, max_length=64, num_beams=num_beams, num_return_sequences=num_return_sequences, temperature=1.5)
    tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
    return tgt_text
```
### Example Usage
```
text = 'He are moving here.'
print(correct_grammar(text, num_return_sequences=2))
['He is moving here.', 'He is moving here now.']
```
Another example:
```
text = 'Cat drinked milk'
print(correct_grammar(text, num_return_sequences=2))
['Cat drank milk.', 'Cat drink milk.']
```
Model Developed by [Priya-Dwivedi](https://www.linkedin.com/in/priyanka-dwivedi-6864362) |
viviastaari/finetuning-sentiment-analysis-en-id | 7f2e6727c23238d5b8eb513ebc32e45af87ff735 | 2022-05-23T02:35:55.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | viviastaari | null | viviastaari/finetuning-sentiment-analysis-en-id | 6 | null | transformers | 15,699 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuning-sentiment-analysis-en-id
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-analysis-en-id
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1654
- Accuracy: 0.9527
- F1: 0.9646
- Precision: 0.9641
- Recall: 0.9652
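As a quick illustration, a minimal inference sketch using the standard text-classification pipeline is given below (the example sentences are hypothetical, and the returned label names depend on the label mapping stored in the model config):

```python
from transformers import pipeline

# Standard text-classification pipeline over the fine-tuned multilingual checkpoint.
classifier = pipeline(
    "text-classification",
    model="viviastaari/finetuning-sentiment-analysis-en-id",
)

# Hypothetical English and Indonesian inputs; each result carries a label
# (as defined in the model config) and a confidence score.
print(classifier(["The service was excellent!", "Filmnya sangat membosankan."]))
```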
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.4566 | 1.0 | 1602 | 0.3666 | 0.8473 | 0.8909 | 0.8530 | 0.9323 |
| 0.3458 | 2.0 | 3204 | 0.2193 | 0.9238 | 0.9432 | 0.9410 | 0.9454 |
| 0.2362 | 3.0 | 4806 | 0.1654 | 0.9527 | 0.9646 | 0.9641 | 0.9652 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|