modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
karthid/distilbert-base-uncased-finetuned-emotion | 3acd64528467d5b8ad01d403d955bead16b5f02b | 2022-06-28T14:25:17.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | karthid | null | karthid/distilbert-base-uncased-finetuned-emotion | 8 | null | transformers | 13,600 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9239800027803069
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2270
- Accuracy: 0.924
- F1: 0.9240
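A minimal inference sketch (not part of the original card; it assumes the standard `transformers` text-classification pipeline, and the example sentence is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier directly from the Hub.
classifier = pipeline("text-classification", model="karthid/distilbert-base-uncased-finetuned-emotion")

# Returns the predicted emotion label and its score.
print(classifier("I am so happy the experiment finally worked!"))
```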
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8568 | 1.0 | 250 | 0.3402 | 0.901 | 0.8970 |
| 0.2612 | 2.0 | 500 | 0.2270 | 0.924 | 0.9240 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
philschmid/gpu-xlm-roberta-large-amazon-massive | cf755e436ccb75ec2773c3c4af4bcb7e5b134495 | 2022-06-30T19:57:39.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | philschmid | null | philschmid/gpu-xlm-roberta-large-amazon-massive | 8 | null | transformers | 13,601 | Entry not found |
annahaz/distilbert-base-multilingual-cased-finetuned-misogyny-sexism-multilingual | 1268d3a381451bf45f3b61e0f536be5ed5880250 | 2022-06-29T01:29:54.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | annahaz | null | annahaz/distilbert-base-multilingual-cased-finetuned-misogyny-sexism-multilingual | 8 | null | transformers | 13,602 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-base-multilingual-cased-finetuned-misogyny-sexism-multilingual
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-misogyny-sexism-multilingual
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2382
- Accuracy: 0.8435
- F1: 0.7857
- Precision: 0.7689
- Recall: 0.8031
- Mae: 0.1565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.3663 | 1.0 | 2062 | 0.3696 | 0.8363 | 0.7605 | 0.7967 | 0.7274 | 0.1637 |
| 0.2937 | 2.0 | 4124 | 0.3592 | 0.8504 | 0.7891 | 0.7948 | 0.7834 | 0.1496 |
| 0.2189 | 3.0 | 6186 | 0.4189 | 0.8442 | 0.7855 | 0.7727 | 0.7987 | 0.1558 |
| 0.1418 | 4.0 | 8248 | 0.6393 | 0.8409 | 0.7863 | 0.7558 | 0.8194 | 0.1591 |
| 0.1091 | 5.0 | 10310 | 0.7583 | 0.8284 | 0.7794 | 0.7207 | 0.8486 | 0.1716 |
| 0.0901 | 6.0 | 12372 | 0.8695 | 0.8410 | 0.7836 | 0.7628 | 0.8055 | 0.1590 |
| 0.0562 | 7.0 | 14434 | 1.0722 | 0.8405 | 0.7838 | 0.7600 | 0.8092 | 0.1595 |
| 0.0444 | 8.0 | 16496 | 1.0797 | 0.8433 | 0.7804 | 0.7815 | 0.7794 | 0.1567 |
| 0.0227 | 9.0 | 18558 | 1.1605 | 0.8429 | 0.7823 | 0.7743 | 0.7906 | 0.1571 |
| 0.0131 | 10.0 | 20620 | 1.2382 | 0.8435 | 0.7857 | 0.7689 | 0.8031 | 0.1565 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Smith123/tiny-bert-sst2-distilled_L6_H128 | 53b38cdfc2eef1784127720f35ec69439098e960 | 2022-06-29T11:09:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Smith123 | null | Smith123/tiny-bert-sst2-distilled_L6_H128 | 8 | null | transformers | 13,603 | Entry not found |
Jeevesh8/goog_bert_ft_cola-30 | 4f6928567682aa230ecab16d68c05a02bb3f0d32 | 2022-06-29T17:33:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-30 | 8 | null | transformers | 13,604 | Entry not found |
Jeevesh8/goog_bert_ft_cola-32 | 2ffe1553378cea690b7b23bb7cafd066edd5d7fb | 2022-06-29T17:33:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-32 | 8 | null | transformers | 13,605 | Entry not found |
Jeevesh8/goog_bert_ft_cola-28 | 8fdf5a09d785469101a0c20290d41ca93d5cd31a | 2022-06-29T17:33:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-28 | 8 | null | transformers | 13,606 | Entry not found |
Jeevesh8/goog_bert_ft_cola-34 | d50dcf789eb54a39a28fa04f9472f5992e3638a6 | 2022-06-29T17:34:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-34 | 8 | null | transformers | 13,607 | Entry not found |
Jeevesh8/goog_bert_ft_cola-37 | a78091cbe50c85c0d8d3a36ff7580d1039b15f9e | 2022-06-29T17:34:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-37 | 8 | null | transformers | 13,608 | Entry not found |
Jeevesh8/goog_bert_ft_cola-36 | bbedfe4ab9261a9f0841b470c32f39638ee43100 | 2022-06-29T17:33:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-36 | 8 | null | transformers | 13,609 | Entry not found |
Jeevesh8/goog_bert_ft_cola-41 | 781a8079d4294d0887e24efd38b62ba13ce208fb | 2022-06-29T17:34:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-41 | 8 | null | transformers | 13,610 | Entry not found |
Jeevesh8/goog_bert_ft_cola-43 | fb1944221861e8be26750e40ac3106d9007c8087 | 2022-06-29T17:34:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-43 | 8 | null | transformers | 13,611 | Entry not found |
Jeevesh8/goog_bert_ft_cola-39 | e6ff630bdd84689b001cbeb17acf80c411c28e9e | 2022-06-29T17:34:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-39 | 8 | null | transformers | 13,612 | Entry not found |
Jeevesh8/goog_bert_ft_cola-42 | 06ee6ac14a1ce29b419dda360ab1eeb58e3523f4 | 2022-06-29T17:34:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-42 | 8 | null | transformers | 13,613 | Entry not found |
Jeevesh8/goog_bert_ft_cola-47 | 2bc4d527ab8e73f9767b0a7531df53300f672d6a | 2022-06-29T17:34:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-47 | 8 | null | transformers | 13,614 | Entry not found |
Jeevesh8/goog_bert_ft_cola-38 | af542e6fbe686b54e6008652714730863a5d8d80 | 2022-06-29T17:34:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-38 | 8 | null | transformers | 13,615 | Entry not found |
Jeevesh8/goog_bert_ft_cola-40 | 00472cb34ecfdf72bf470ff749358de2fedb1076 | 2022-06-29T17:34:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-40 | 8 | null | transformers | 13,616 | Entry not found |
Jeevesh8/goog_bert_ft_cola-71 | f483ff420e4813f602e37bd29b27a2cb2f6ffb66 | 2022-06-29T17:32:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-71 | 8 | null | transformers | 13,617 | Entry not found |
Jeevesh8/goog_bert_ft_cola-75 | 7bdfcd8a4e79993ad683c78d52c9d9f16e4f6844 | 2022-06-29T17:33:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-75 | 8 | null | transformers | 13,618 | Entry not found |
Jeevesh8/goog_bert_ft_cola-69 | d2bc1428f94ec11a22630056f58615f85630d9b6 | 2022-06-29T17:33:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-69 | 8 | null | transformers | 13,619 | Entry not found |
Jeevesh8/goog_bert_ft_cola-63 | b1426c338665cd06f4b1fc0c33e02d36dcd0abfd | 2022-06-29T17:33:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-63 | 8 | null | transformers | 13,620 | Entry not found |
Jeevesh8/goog_bert_ft_cola-53 | 399d9d58887cfb4ce0f5f7d91d06c18213b5e6e9 | 2022-06-29T17:34:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-53 | 8 | null | transformers | 13,621 | Entry not found |
Jeevesh8/goog_bert_ft_cola-57 | a1e20fe380ecd27749966e6b78d93991ebd333c5 | 2022-06-29T17:34:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-57 | 8 | null | transformers | 13,622 | Entry not found |
Jeevesh8/goog_bert_ft_cola-73 | da6ca0b0ebdd9b454d8545bc627a41f41cf51979 | 2022-06-29T17:33:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-73 | 8 | null | transformers | 13,623 | Entry not found |
Jeevesh8/goog_bert_ft_cola-54 | 573b79b3cec326573ffa46a1e9fca18db367ce7e | 2022-06-29T17:34:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-54 | 8 | null | transformers | 13,624 | Entry not found |
Jeevesh8/goog_bert_ft_cola-72 | c3d946dc71b026fe54911272fb96a409a91c2c13 | 2022-06-29T17:33:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-72 | 8 | null | transformers | 13,625 | Entry not found |
Jeevesh8/goog_bert_ft_cola-50 | f1d1dc6563dc8c2c872b98c31fbbee23740c6091 | 2022-06-29T17:34:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-50 | 8 | null | transformers | 13,626 | Entry not found |
Jeevesh8/goog_bert_ft_cola-70 | 61b3f82e773785c91df7f62180dab065023a8f60 | 2022-06-29T17:36:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-70 | 8 | null | transformers | 13,627 | Entry not found |
Jeevesh8/goog_bert_ft_cola-67 | 0823fb30b595cce40022e3ff998d262bd392c9e8 | 2022-06-29T17:32:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-67 | 8 | null | transformers | 13,628 | Entry not found |
Jeevesh8/goog_bert_ft_cola-59 | ab0e8b6f434399edfce6c074c8a54c6b03077c56 | 2022-06-29T17:33:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-59 | 8 | null | transformers | 13,629 | Entry not found |
Jeevesh8/goog_bert_ft_cola-62 | f5911e68f0faefc7f09810f5dd974d9982064833 | 2022-06-29T17:33:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-62 | 8 | null | transformers | 13,630 | Entry not found |
Jeevesh8/goog_bert_ft_cola-66 | 6d2169245852eac0d501984683745067152cef76 | 2022-06-29T17:35:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-66 | 8 | null | transformers | 13,631 | Entry not found |
Jeevesh8/goog_bert_ft_cola-76 | c65a9ebbea0026372768e01fb3fa7108978a8f84 | 2022-06-29T17:34:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-76 | 8 | null | transformers | 13,632 | Entry not found |
Jeevesh8/goog_bert_ft_cola-86 | 74917213a0db55137fb1dccb9f951760f5dcb85c | 2022-06-29T17:35:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-86 | 8 | null | transformers | 13,633 | Entry not found |
Jeevesh8/goog_bert_ft_cola-87 | 7b8b0ecf47df25cac54d0e9edcdc8d5dcc7ec5dd | 2022-06-29T17:34:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-87 | 8 | null | transformers | 13,634 | Entry not found |
Jeevesh8/goog_bert_ft_cola-84 | e765f9d090cb301226aa756d11028887c73f1507 | 2022-06-29T17:34:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-84 | 8 | null | transformers | 13,635 | Entry not found |
Jeevesh8/goog_bert_ft_cola-79 | 6b45ea6ae8a91750c51758780f7a34313cf9dda8 | 2022-06-29T17:34:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-79 | 8 | null | transformers | 13,636 | Entry not found |
Jeevesh8/goog_bert_ft_cola-80 | b14bea6ed189c58dc07aede97a956e973e1a61d5 | 2022-06-29T17:34:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-80 | 8 | null | transformers | 13,637 | Entry not found |
ardauzunoglu/opus-mt-en-trk-finetuned-en-to-tr | 4fa4f189d7f917a4b3482e1ab49c25d5300b506f | 2022-06-30T13:02:40.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | ardauzunoglu | null | ardauzunoglu/opus-mt-en-trk-finetuned-en-to-tr | 8 | null | transformers | 13,638 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-trk-finetuned-en-to-tr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: tr-en
metrics:
- name: Bleu
type: bleu
value: 11.8334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-trk-finetuned-en-to-tr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-trk](https://huggingface.co/Helsinki-NLP/opus-mt-en-trk) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9617
- Bleu: 11.8334
- Gen Len: 33.4745
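A minimal translation sketch (not part of the original card; it assumes the standard `transformers` pipeline API, and the `>>tur<<` target-language prefix is an assumption carried over from the multi-target `opus-mt-en-trk` base model):
```python
from transformers import pipeline

# English-to-Turkish translation with the fine-tuned Marian checkpoint.
translator = pipeline("translation", model="ardauzunoglu/opus-mt-en-trk-finetuned-en-to-tr")

# The base opus-mt-en-trk model covers several Turkic targets and expects a >>lang<< prefix;
# drop the prefix if this fine-tuned checkpoint no longer requires it.
print(translator(">>tur<< The committee will meet again next week.", max_length=64))
```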
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.3129 | 1.0 | 12860 | 2.0276 | 11.1299 | 33.7083 |
| 1.1484 | 2.0 | 25720 | 1.9789 | 11.4466 | 33.3876 |
| 1.0854 | 3.0 | 38580 | 1.9617 | 11.8334 | 33.4745 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Akihiro2/bert-finetuned-squad | aa805cf9e1600825d2ff8362cfd8bd066869400a | 2022-06-30T07:20:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Akihiro2 | null | Akihiro2/bert-finetuned-squad | 8 | null | transformers | 13,639 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
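A minimal extractive-QA sketch (not part of the original card; it assumes the standard `transformers` question-answering pipeline, with a placeholder question and context):
```python
from transformers import pipeline

# Extractive question answering with the SQuAD-fine-tuned BERT checkpoint.
qa = pipeline("question-answering", model="Akihiro2/bert-finetuned-squad")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```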
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
luffycodes/t5_small_v1 | 749fe99e24d6c9f10c34799808b3617b06731796 | 2022-07-01T06:18:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | luffycodes | null | luffycodes/t5_small_v1 | 8 | null | transformers | 13,640 | Entry not found |
dminiotas05/distilbert-base-uncased-finetuned-ft500 | 65bf0259cdf4566e0e2d9307b547b0eca1458c60 | 2022-06-30T16:57:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dminiotas05 | null | dminiotas05/distilbert-base-uncased-finetuned-ft500 | 8 | null | transformers | 13,641 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-ft500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft500
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1340
- Accuracy: 0.5433
- F1: 0.5118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.16 | 1.0 | 188 | 1.0855 | 0.5493 | 0.4985 |
| 1.0291 | 2.0 | 376 | 1.0792 | 0.5587 | 0.5114 |
| 0.9661 | 3.0 | 564 | 1.0798 | 0.558 | 0.5267 |
| 0.9104 | 4.0 | 752 | 1.0935 | 0.5447 | 0.5136 |
| 0.8611 | 5.0 | 940 | 1.1340 | 0.5433 | 0.5118 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Sayan01/tiny-bert-mnli-m-distilled | 69a372c41db00242bb858d9a306bbb2251ccd679 | 2022-07-02T23:44:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Sayan01 | null | Sayan01/tiny-bert-mnli-m-distilled | 8 | null | transformers | 13,642 | Entry not found |
Hyeongdon/t5-large-qgen-SciQ | bb93eb047c3af33de3aebf396cfeae30bf585af8 | 2022-07-03T11:22:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | Hyeongdon | null | Hyeongdon/t5-large-qgen-SciQ | 8 | null | transformers | 13,643 | ---
license: apache-2.0
---
T5-large distractor generation model fine-tuned on the SciQ dataset.
Input format:
```
{correct_answer} <sep> {context}
```
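A minimal generation sketch (not part of the original card; it assumes the standard `transformers` seq2seq API and simply follows the input format above with a placeholder answer/context pair):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Hyeongdon/t5-large-qgen-SciQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Input format: {correct_answer} <sep> {context}
text = "mitochondria <sep> The mitochondria are the organelles that produce most of the cell's ATP."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, num_return_sequences=3)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```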
The paper is not published yet.
|
svalabs/german-gpl-adapted-covid | 1d77df9e895e8516e451e5506b45b4ebd0124751 | 2022-07-01T08:05:54.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | svalabs | null | svalabs/german-gpl-adapted-covid | 8 | null | sentence-transformers | 13,644 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# svalabs/german-gpl-adapted-covid
This is a German, COVID-adapted [sentence-transformers](https://www.SBERT.net) model.
It was adapted to COVID-related documents using the [GPL](https://github.com/UKPLab/gpl) integration of [Haystack](https://github.com/deepset-ai/haystack). We used [svalabs/cross-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/cross-electra-ms-marco-german-uncased) as the CrossEncoder and [svalabs/mt5-large-german-query-gen-v1](https://huggingface.co/svalabs/mt5-large-german-query-gen-v1) for query generation.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
from transformers import AutoTokenizer, AutoModel
org_model = SentenceTransformer("sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch")
org_model.max_seq_length = 200  # assumed value; the original snippet used an undefined variable here
model = SentenceTransformer('svalabs/german-gpl-adapted-covid')

def show_examples(model):
    query = "Wie wird Covid-19 übermittelt"
    docs = [
        "Corona ist sehr ansteckend",
        "Corona wird über die Luft verbreitet",
        "Ebola wird durch direkten Kontakt mit Blut übertragen",
        "HIV wird durch Sex oder den Austausch von Nadeln übertragen",
        "Polio wird durch kontaminiertes Wasser oder Lebensmittel übertragen",
    ]
    query_emb = model.encode(query)
    docs_emb = model.encode(docs)
    scores = util.dot_score(query_emb, docs_emb)[0]
    doc_scores = sorted(zip(docs, scores), key=lambda x: x[1], reverse=True)
    print("Query:", query)
    for doc, score in doc_scores:
        print(f"{score:0.02f}\t{doc}")
print("Original Model")
show_examples(org_model)
print("\n\nAdapted Model")
show_examples(model)
```
## Evaluation Results
```
Original Model
Query: Wie wird Covid-19 übermittelt
33.01 HIV wird durch Sex oder den Austausch von Nadeln übertragen
32.78 Polio wird durch kontaminiertes Wasser oder Lebensmittel übertragen
29.10 Corona wird über die Luft verbreitet
24.41 Ebola wird durch direkten Kontakt mit Blut übertragen
10.85 Corona ist sehr ansteckend
Adapted Model
Query: Wie wird Covid-19 übermittelt
29.82 Corona wird über die Luft verbreitet
27.44 Polio wird durch kontaminiertes Wasser oder Lebensmittel übertragen
24.89 Ebola wird durch direkten Kontakt mit Blut übertragen
23.81 HIV wird durch Sex oder den Austausch von Nadeln übertragen
20.03 Corona ist sehr ansteckend
```
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 125 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 12,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 200, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
dminiotas05/distilbert-base-uncased-finetuned-ft500_4class | eae38ef0ff7b892c08d9a37843ba58895fa7075e | 2022-07-01T12:43:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dminiotas05 | null | dminiotas05/distilbert-base-uncased-finetuned-ft500_4class | 8 | null | transformers | 13,645 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-ft500_4class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft500_4class
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1343
- Accuracy: 0.4853
- F1: 0.4777
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1837 | 1.0 | 188 | 1.1606 | 0.4313 | 0.4104 |
| 1.0972 | 2.0 | 376 | 1.0929 | 0.488 | 0.4697 |
| 1.0343 | 3.0 | 564 | 1.1017 | 0.4893 | 0.4651 |
| 0.9781 | 4.0 | 752 | 1.1065 | 0.4993 | 0.4900 |
| 0.9346 | 5.0 | 940 | 1.1343 | 0.4853 | 0.4777 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
WalidLak/Testmodel | e6013edb8e0be33651e8a6d771b79a650f6cf3b6 | 2022-07-01T19:33:02.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
]
| sentence-similarity | false | WalidLak | null | WalidLak/Testmodel | 8 | null | sentence-transformers | 13,646 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 207 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 7,
"evaluation_steps": 500,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 145,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Aktsvigun/bart-base-aeslc-705525 | e5fcb1b87d2b87e029463a884235dc19277a8003 | 2022-07-01T15:27:53.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base-aeslc-705525 | 8 | null | transformers | 13,647 | Entry not found |
Eleven/distilbert-base-uncased-finetuned-news | 65996b819248b2e61f053f05ad454a53bf8a877f | 2022-07-02T17:44:43.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Eleven | null | Eleven/distilbert-base-uncased-finetuned-news | 8 | null | transformers | 13,648 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-news
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1667
- Accuracy: 0.9447
- F1: 0.9448
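A minimal inference sketch (not part of the original card; it assumes standard `transformers` usage, with label names read from the checkpoint's own config rather than guessed):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Eleven/distilbert-base-uncased-finetuned-news"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Stocks rallied after the central bank's announcement.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]

# Label names come from the checkpoint's id2label mapping.
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```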
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2355 | 1.0 | 1875 | 0.1790 | 0.94 | 0.9401 |
| 0.1406 | 2.0 | 3750 | 0.1667 | 0.9447 | 0.9448 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
Kayvane/distilbert-complaints-wandb | 7be5890e6062da85250558168f9eb0255984ff3e | 2022-07-03T21:51:10.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:consumer-finance-complaints",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Kayvane | null | Kayvane/distilbert-complaints-wandb | 8 | null | transformers | 13,649 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- consumer-finance-complaints
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: distilbert-complaints-wandb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: consumer-finance-complaints
type: consumer-finance-complaints
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.868877906608376
- name: F1
type: f1
value: 0.8630522401242867
- name: Recall
type: recall
value: 0.868877906608376
- name: Precision
type: precision
value: 0.8616053523512515
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-complaints-wandb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the consumer-finance-complaints dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4448
- Accuracy: 0.8689
- F1: 0.8631
- Recall: 0.8689
- Precision: 0.8616
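A minimal inference sketch (not part of the original card; it assumes the standard `transformers` pipeline API, and the complaint text is a placeholder):
```python
from transformers import pipeline

# Classify a consumer-finance complaint into the product categories learned from the dataset.
clf = pipeline("text-classification", model="Kayvane/distilbert-complaints-wandb")
print(clf("I was charged a late fee even though my credit card payment posted on time."))
```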
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.571 | 0.51 | 2000 | 0.5150 | 0.8469 | 0.8349 | 0.8469 | 0.8249 |
| 0.4765 | 1.01 | 4000 | 0.4676 | 0.8561 | 0.8451 | 0.8561 | 0.8376 |
| 0.3376 | 1.52 | 6000 | 0.4560 | 0.8609 | 0.8546 | 0.8609 | 0.8547 |
| 0.268 | 2.03 | 8000 | 0.4399 | 0.8684 | 0.8611 | 0.8684 | 0.8607 |
| 0.2654 | 2.53 | 10000 | 0.4448 | 0.8689 | 0.8631 | 0.8689 | 0.8616 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
plncmm/mdeberta-cowese-base-es | 1f13ae1282c79d7c6e8b46a36cdfdb8fd046e7e5 | 2022-07-04T02:37:23.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | plncmm | null | plncmm/mdeberta-cowese-base-es | 8 | null | transformers | 13,650 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: mdeberta-cowese-base-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-cowese-base-es
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on an unknown dataset.
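A minimal masked-language-modeling sketch (not part of the original card; it assumes the standard `transformers` fill-mask pipeline, and the Spanish example sentence is a placeholder):
```python
from transformers import pipeline

# Fill-mask with the mDeBERTa checkpoint adapted to Spanish (CoWeSe) text.
fill = pipeline("fill-mask", model="plncmm/mdeberta-cowese-base-es")
for pred in fill("El paciente presenta dolor de [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```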
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
juridics/bert-base-multilingual-sts | cd718bae8b2eeb7bd79c3d55294c9da81f07b4cd | 2022-07-04T16:01:29.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | juridics | null | juridics/bert-base-multilingual-sts | 8 | null | sentence-transformers | 13,651 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# juridics/bert-base-multilingual-sts-scale
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('juridics/bert-base-multilingual-sts-scale')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('juridics/bert-base-multilingual-sts-scale')
model = AutoModel.from_pretrained('juridics/bert-base-multilingual-sts-scale')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=juridics/bert-base-multilingual-sts-scale)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4985 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 4985,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1496,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-384-8-10 | 5e2188d7d96328690d7811316a84e2606abf1a97 | 2022-07-04T17:10:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-384-8-10 | 8 | null | transformers | 13,652 | Entry not found |
juridics/jurisbert-base-portuguese-sts | a5125569169b3001e6d060cdebfcbb5bea69de8c | 2022-07-04T18:30:34.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | juridics | null | juridics/jurisbert-base-portuguese-sts | 8 | null | sentence-transformers | 13,653 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# juridics/bertlaw-base-portuguese-sts-scale
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('juridics/bertlaw-base-portuguese-sts-scale')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('juridics/bertlaw-base-portuguese-sts-scale')
model = AutoModel.from_pretrained('juridics/bertlaw-base-portuguese-sts-scale')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=juridics/bertlaw-base-portuguese-sts-scale)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2492 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 2492,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 748,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
teven/all_bs160_allneg | 2c9546b2623a308bf559c53f3619df8dd25c6c9c | 2022-07-05T00:14:56.000Z | [
"pytorch",
"mpnet",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
]
| sentence-similarity | false | teven | null | teven/all_bs160_allneg | 8 | null | sentence-transformers | 13,654 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# teven/all_bs160_allneg
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/all_bs160_allneg')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/all_bs160_allneg)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 780828 with parameters:
```
{'batch_size': 20, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 315504 with parameters:
```
{'batch_size': 20, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 300017 with parameters:
```
{'batch_size': 20, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 2000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
NimaBoscarino/albert-nima | d1f5ba38bc27444d377bc5f96acfb2edf833a609 | 2022-07-05T02:51:22.000Z | [
"pytorch",
"albert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
]
| sentence-similarity | false | NimaBoscarino | null | NimaBoscarino/albert-nima | 8 | null | sentence-transformers | 13,655 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/paraphrase-albert-small-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-albert-small-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-albert-small-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
sepidmnorozy/sentiment-10Epochs | 5a618f73828122e25eaf401c6568de1370a6e62f | 2022-07-05T21:33:17.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | sepidmnorozy | null | sepidmnorozy/sentiment-10Epochs | 8 | null | transformers | 13,656 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: sentiment-10Epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-10Epochs
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7030
- Accuracy: 0.8603
- F1: 0.8585
- Precision: 0.8699
- Recall: 0.8473
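A minimal inference sketch (not part of the original card; it assumes the standard `transformers` pipeline API, and the sentiment label names come from the checkpoint config, not from this card):
```python
from transformers import pipeline

# Sentiment classification with the fine-tuned XLM-RoBERTa checkpoint.
sentiment = pipeline("text-classification", model="sepidmnorozy/sentiment-10Epochs")
print(sentiment("The service was quick and the staff were friendly."))
```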
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.3645 | 1.0 | 7088 | 0.4315 | 0.8603 | 0.8466 | 0.9386 | 0.7711 |
| 0.374 | 2.0 | 14176 | 0.4015 | 0.8713 | 0.8648 | 0.9105 | 0.8235 |
| 0.3363 | 3.0 | 21264 | 0.4772 | 0.8705 | 0.8615 | 0.9256 | 0.8057 |
| 0.3131 | 4.0 | 28352 | 0.4579 | 0.8702 | 0.8650 | 0.9007 | 0.8321 |
| 0.3097 | 5.0 | 35440 | 0.4160 | 0.8721 | 0.8663 | 0.9069 | 0.8292 |
| 0.2921 | 6.0 | 42528 | 0.4638 | 0.8673 | 0.8630 | 0.8917 | 0.8362 |
| 0.2725 | 7.0 | 49616 | 0.5183 | 0.8654 | 0.8602 | 0.8947 | 0.8283 |
| 0.2481 | 8.0 | 56704 | 0.5846 | 0.8649 | 0.8624 | 0.8787 | 0.8467 |
| 0.192 | 9.0 | 63792 | 0.6481 | 0.8610 | 0.8596 | 0.8680 | 0.8514 |
| 0.1945 | 10.0 | 70880 | 0.7030 | 0.8603 | 0.8585 | 0.8699 | 0.8473 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
akhisreelibra/xlmR-finetuned-pos | ec69b8ebd608c6b7351caf9697a1710319f9e543 | 2022-07-05T14:06:46.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | akhisreelibra | null | akhisreelibra/xlmR-finetuned-pos | 8 | null | transformers | 13,657 | |
ricardo-filho/bert_base_tcm_teste | 1a37cacc22aeee6dc1fd6414d83716c02ac9acb9 | 2022-07-06T23:23:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | ricardo-filho | null | ricardo-filho/bert_base_tcm_teste | 8 | null | transformers | 13,658 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert_base_tcm_teste
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_tcm_teste
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0192
- Criterio Julgamento Precision: 0.7209
- Criterio Julgamento Recall: 0.8942
- Criterio Julgamento F1: 0.7983
- Criterio Julgamento Number: 104
- Data Sessao Precision: 0.6351
- Data Sessao Recall: 0.8545
- Data Sessao F1: 0.7287
- Data Sessao Number: 55
- Modalidade Licitacao Precision: 0.9224
- Modalidade Licitacao Recall: 0.9596
- Modalidade Licitacao F1: 0.9406
- Modalidade Licitacao Number: 421
- Numero Exercicio Precision: 0.8872
- Numero Exercicio Recall: 0.9351
- Numero Exercicio F1: 0.9105
- Numero Exercicio Number: 185
- Objeto Licitacao Precision: 0.2348
- Objeto Licitacao Recall: 0.4576
- Objeto Licitacao F1: 0.3103
- Objeto Licitacao Number: 59
- Valor Objeto Precision: 0.5424
- Valor Objeto Recall: 0.7805
- Valor Objeto F1: 0.64
- Valor Objeto Number: 41
- Overall Precision: 0.7683
- Overall Recall: 0.8971
- Overall F1: 0.8277
- Overall Accuracy: 0.9948
## Model description
More information needed
## Intended uses & limitations
More information needed
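That said, a minimal inference sketch (the example sentence is illustrative; the entity types are the TCM procurement fields listed in the metrics above):

```python
from transformers import pipeline

# Token classification over Portuguese procurement text, grouping sub-tokens into entity spans
ner = pipeline(
    "token-classification",
    model="ricardo-filho/bert_base_tcm_teste",
    aggregation_strategy="simple",
)
print(ner("Pregão Presencial nº 01/2020, exercício de 2020, critério de julgamento: menor preço."))
```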
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Criterio Julgamento Precision | Criterio Julgamento Recall | Criterio Julgamento F1 | Criterio Julgamento Number | Data Sessao Precision | Data Sessao Recall | Data Sessao F1 | Data Sessao Number | Modalidade Licitacao Precision | Modalidade Licitacao Recall | Modalidade Licitacao F1 | Modalidade Licitacao Number | Numero Exercicio Precision | Numero Exercicio Recall | Numero Exercicio F1 | Numero Exercicio Number | Objeto Licitacao Precision | Objeto Licitacao Recall | Objeto Licitacao F1 | Objeto Licitacao Number | Valor Objeto Precision | Valor Objeto Recall | Valor Objeto F1 | Valor Objeto Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:---------------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0346 | 0.96 | 2750 | 0.0329 | 0.6154 | 0.8462 | 0.7126 | 104 | 0.5495 | 0.9091 | 0.6849 | 55 | 0.8482 | 0.9287 | 0.8866 | 421 | 0.7438 | 0.9730 | 0.8431 | 185 | 0.0525 | 0.3220 | 0.0903 | 59 | 0.4762 | 0.7317 | 0.5769 | 41 | 0.5565 | 0.8763 | 0.6807 | 0.9880 |
| 0.0309 | 1.92 | 5500 | 0.0322 | 0.6694 | 0.7788 | 0.72 | 104 | 0.5976 | 0.8909 | 0.7153 | 55 | 0.9178 | 0.9549 | 0.9360 | 421 | 0.8211 | 0.8432 | 0.8320 | 185 | 0.15 | 0.2034 | 0.1727 | 59 | 0.2203 | 0.3171 | 0.26 | 41 | 0.7351 | 0.8243 | 0.7771 | 0.9934 |
| 0.0179 | 2.88 | 8250 | 0.0192 | 0.7209 | 0.8942 | 0.7983 | 104 | 0.6351 | 0.8545 | 0.7287 | 55 | 0.9224 | 0.9596 | 0.9406 | 421 | 0.8872 | 0.9351 | 0.9105 | 185 | 0.2348 | 0.4576 | 0.3103 | 59 | 0.5424 | 0.7805 | 0.64 | 41 | 0.7683 | 0.8971 | 0.8277 | 0.9948 |
| 0.0174 | 3.84 | 11000 | 0.0320 | 0.7522 | 0.8173 | 0.7834 | 104 | 0.5741 | 0.5636 | 0.5688 | 55 | 0.8881 | 0.9430 | 0.9147 | 421 | 0.8490 | 0.8811 | 0.8647 | 185 | 0.2436 | 0.3220 | 0.2774 | 59 | 0.5370 | 0.7073 | 0.6105 | 41 | 0.7719 | 0.8370 | 0.8031 | 0.9946 |
| 0.0192 | 4.8 | 13750 | 0.0261 | 0.6744 | 0.8365 | 0.7468 | 104 | 0.6190 | 0.7091 | 0.6610 | 55 | 0.9169 | 0.9430 | 0.9297 | 421 | 0.8404 | 0.8541 | 0.8472 | 185 | 0.2059 | 0.3559 | 0.2609 | 59 | 0.5088 | 0.7073 | 0.5918 | 41 | 0.7521 | 0.8451 | 0.7959 | 0.9949 |
| 0.0158 | 5.76 | 16500 | 0.0250 | 0.6641 | 0.8173 | 0.7328 | 104 | 0.5610 | 0.8364 | 0.6715 | 55 | 0.9199 | 0.9549 | 0.9371 | 421 | 0.9167 | 0.9514 | 0.9337 | 185 | 0.1912 | 0.4407 | 0.2667 | 59 | 0.4828 | 0.6829 | 0.5657 | 41 | 0.7386 | 0.8821 | 0.8040 | 0.9948 |
| 0.0126 | 6.72 | 19250 | 0.0267 | 0.6694 | 0.7981 | 0.7281 | 104 | 0.6386 | 0.9636 | 0.7681 | 55 | 0.8723 | 0.9572 | 0.9128 | 421 | 0.8812 | 0.9622 | 0.9199 | 185 | 0.2180 | 0.4915 | 0.3021 | 59 | 0.5323 | 0.8049 | 0.6408 | 41 | 0.7308 | 0.9006 | 0.8068 | 0.9945 |
| 0.0162 | 7.68 | 22000 | 0.0328 | 0.675 | 0.7788 | 0.7232 | 104 | 0.6604 | 0.6364 | 0.6481 | 55 | 0.9263 | 0.9549 | 0.9404 | 421 | 0.8535 | 0.9135 | 0.8825 | 185 | 0.2471 | 0.3559 | 0.2917 | 59 | 0.5091 | 0.6829 | 0.5833 | 41 | 0.7788 | 0.8509 | 0.8133 | 0.9948 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Aktsvigun/bart-base_aeslc_919213 | c27929bf03c02b16eacbda400ae5c18b8c4f92e7 | 2022-07-07T15:08:52.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_aeslc_919213 | 8 | null | transformers | 13,659 | Entry not found |
Aktsvigun/bart-base_aeslc_2930982 | 0d903ed01de870c741f610b5007ce266b2c3bab7 | 2022-07-07T15:10:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_aeslc_2930982 | 8 | null | transformers | 13,660 | Entry not found |
Aktsvigun/bart-base_aeslc_3449378 | 97ba13d534d0d3e70a3262360391adf96ef18ac9 | 2022-07-07T15:12:50.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_aeslc_3449378 | 8 | null | transformers | 13,661 | Entry not found |
Mascariddu8/bert-finetuned-ner-accelerate | a41cadd9411915fe57cc677083349bade672d405 | 2022-07-07T14:58:23.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Mascariddu8 | null | Mascariddu8/bert-finetuned-ner-accelerate | 8 | null | transformers | 13,662 | Entry not found |
swtx/simcse-chinese-roberta-www-ext | 9c669aab5c0a5b2547fc9df9ac6f75bff1fa0397 | 2022-07-08T12:12:38.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.08821",
"transformers"
]
| feature-extraction | false | swtx | null | swtx/simcse-chinese-roberta-www-ext | 8 | null | transformers | 13,663 | ## swtx SIMCSE RoBERTa WWM Ext Chinese
This model provides simplified Chinese sentence embeddings based on [SimCSE (Simple Contrastive Learning of Sentence Embeddings)](https://arxiv.org/abs/2104.08821).
The pretrained Chinese RoBERTa WWM Ext model is used for token encoding.
## How to use
```Python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("swtx/simcse-chinese-roberta-wwm-ext")
model = AutoModel.from_pretrained("swtx/simcse-chinese-roberta-wwm-ext")
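
# --- a minimal sketch of computing sentence embeddings (the sentences are illustrative) ---
import torch

sentences = ["今天天气真不错", "今天天气很好"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# SimCSE commonly uses the [CLS] token representation as the sentence embedding
embeddings = outputs.last_hidden_state[:, 0]
# cosine similarity between the two example sentences
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0))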
``` |
jonatasgrosman/exp_w2v2t_fr_vp-it_s924 | 7b1762364372c411ebe24749d3d2fcc60a40c2e2 | 2022-07-09T02:07:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
]
| automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fr_vp-it_s924 | 8 | null | transformers | 13,664 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_vp-it_s924
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
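A minimal transcription sketch with HuggingSound (the audio path is a placeholder; files should be 16 kHz mono):

```python
from huggingsound import SpeechRecognitionModel

# Load the fine-tuned checkpoint and transcribe a list of French audio files
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fr_vp-it_s924")
transcriptions = model.transcribe(["path/to/audio_16khz.wav"])
print(transcriptions)
```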
|
huggingtweets/bobdylan-elonmusk-moogmusic | 9af1645b2412e807b78d3eb5c42942d60274c3ef | 2022-07-09T05:09:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/bobdylan-elonmusk-moogmusic | 8 | null | transformers | 13,665 | ---
language: en
thumbnail: http://www.huggingtweets.com/bobdylan-elonmusk-moogmusic/1657343271423/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529956155937759233/Nyn1HZWF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442355893589401600/22Q1iPAj_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/86771494/Satisfied_Moog_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Bob Dylan & DrT</div>
<div style="text-align: center; font-size: 14px;">@bobdylan-elonmusk-moogmusic</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Bob Dylan & DrT.
| Data | Elon Musk | Bob Dylan | DrT |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 678 | 2721 |
| Retweets | 144 | 43 | 1183 |
| Short tweets | 981 | 9 | 243 |
| Tweets kept | 2125 | 626 | 1295 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/334mchd1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bobdylan-elonmusk-moogmusic's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3iruorvp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3iruorvp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bobdylan-elonmusk-moogmusic')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
adamlin/trash_mail_cls_2022 | 73f11dc066aab1a28beabe453d0dea376236a866 | 2022-07-11T04:25:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | adamlin | null | adamlin/trash_mail_cls_2022 | 8 | null | transformers | 13,666 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: trash_mail_cls_2022
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trash_mail_cls_2022
This model is a fine-tuned version of [hfl/chinese-macbert-base](https://huggingface.co/hfl/chinese-macbert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0382
- Accuracy: 0.9937
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 80 | 0.1528 | 0.9438 |
| No log | 2.0 | 160 | 0.0808 | 0.9812 |
| No log | 3.0 | 240 | 0.1004 | 0.9563 |
| No log | 4.0 | 320 | 0.0456 | 0.9812 |
| No log | 5.0 | 400 | 0.0541 | 0.9875 |
| No log | 6.0 | 480 | 0.0382 | 0.9937 |
| 0.0949 | 7.0 | 560 | 0.0501 | 0.9937 |
| 0.0949 | 8.0 | 640 | 0.0384 | 0.9937 |
| 0.0949 | 9.0 | 720 | 0.0384 | 0.9812 |
| 0.0949 | 10.0 | 800 | 0.0391 | 0.9875 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu102
- Datasets 2.3.1
- Tokenizers 0.11.6
|
wooihen/distilbert-base-uncased-finetuned-emotion | ed0e99b98024a4b31b7b3307c1fd044ebe79c40a | 2022-07-11T10:28:32.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | wooihen | null | wooihen/distilbert-base-uncased-finetuned-emotion | 8 | null | transformers | 13,667 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.922771245052197
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2146
- Accuracy: 0.9225
- F1: 0.9228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8233 | 1.0 | 250 | 0.3068 | 0.9025 | 0.8995 |
| 0.2394 | 2.0 | 500 | 0.2146 | 0.9225 | 0.9228 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tner/roberta-large-tweetner-random | f12441ef53b5adc04906c685e8b577086ea67a1c | 2022-07-11T11:25:12.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | tner | null | tner/roberta-large-tweetner-random | 8 | null | transformers | 13,668 | Entry not found |
skr1125/distilbert-base-uncased-finetuned-emotion | db6887965defcdae4930262f4d452e6baea403f7 | 2022-07-11T20:35:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | skr1125 | null | skr1125/distilbert-base-uncased-finetuned-emotion | 8 | null | transformers | 13,669 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9267721491352747
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2253
- Accuracy: 0.927
- F1: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8507 | 1.0 | 250 | 0.3406 | 0.899 | 0.8954 |
| 0.2546 | 2.0 | 500 | 0.2253 | 0.927 | 0.9268 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tner/bertweet-large-tweetner-random | 20153ecbe4716d48b04baa7eac989c849af09459 | 2022-07-11T22:51:23.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | tner | null | tner/bertweet-large-tweetner-random | 8 | null | transformers | 13,670 | Entry not found |
Evelyn18/legalectra-small-spanish-becasv3-2 | 317652ebcf571ef6f1a39a096f499fe817200d52 | 2022-07-12T04:24:24.000Z | [
"pytorch",
"tensorboard",
"electra",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Evelyn18 | null | Evelyn18/legalectra-small-spanish-becasv3-2 | 8 | null | transformers | 13,671 | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: legalectra-small-spanish-becasv3-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalectra-small-spanish-becasv3-2
This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7145
## Model description
More information needed
## Intended uses & limitations
More information needed
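That said, a minimal extractive-QA sketch (question and context are illustrative; given the high validation loss reported below, answers should be treated with caution):

```python
from transformers import pipeline

# Spanish extractive question answering with the fine-tuned ELECTRA checkpoint
qa = pipeline("question-answering", model="Evelyn18/legalectra-small-spanish-becasv3-2")
print(qa(
    question="¿Qué porcentaje de la matrícula cubre la beca?",
    context="La beca cubre el 50% de la matrícula durante todo el programa.",
))
```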
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 5.7994 |
| No log | 2.0 | 10 | 5.6445 |
| No log | 3.0 | 15 | 5.5595 |
| No log | 4.0 | 20 | 5.4933 |
| No log | 5.0 | 25 | 5.4248 |
| No log | 6.0 | 30 | 5.3547 |
| No log | 7.0 | 35 | 5.2872 |
| No log | 8.0 | 40 | 5.2187 |
| No log | 9.0 | 45 | 5.1585 |
| No log | 10.0 | 50 | 5.1038 |
| No log | 11.0 | 55 | 5.0451 |
| No log | 12.0 | 60 | 5.0015 |
| No log | 13.0 | 65 | 4.9638 |
| No log | 14.0 | 70 | 4.9350 |
| No log | 15.0 | 75 | 4.9034 |
| No log | 16.0 | 80 | 4.8741 |
| No log | 17.0 | 85 | 4.8496 |
| No log | 18.0 | 90 | 4.8275 |
| No log | 19.0 | 95 | 4.8139 |
| No log | 20.0 | 100 | 4.7878 |
| No log | 21.0 | 105 | 4.7672 |
| No log | 22.0 | 110 | 4.7671 |
| No log | 23.0 | 115 | 4.7611 |
| No log | 24.0 | 120 | 4.7412 |
| No log | 25.0 | 125 | 4.7307 |
| No log | 26.0 | 130 | 4.7232 |
| No log | 27.0 | 135 | 4.7208 |
| No log | 28.0 | 140 | 4.7186 |
| No log | 29.0 | 145 | 4.7158 |
| No log | 30.0 | 150 | 4.7145 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
lysandre/test-dynamic-pipeline | 9df94906999007831601eb36f26c5d77df437484 | 2022-07-12T14:19:49.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | lysandre | null | lysandre/test-dynamic-pipeline | 8 | null | transformers | 13,672 | Entry not found |
jimacasaet/SalamaThanksFIL2ENv3 | 09ac958a85ba194de22890814eb1805df8df6a8c | 2022-07-13T11:10:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | jimacasaet | null | jimacasaet/SalamaThanksFIL2ENv3 | 8 | null | transformers | 13,673 | ---
license: apache-2.0
---
|
Evelyn18/distilbert-base-uncased-prueba | 11a4f3faf7a5342d32bb2a31295a222180e74691 | 2022-07-13T19:48:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:becasv3",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Evelyn18 | null | Evelyn18/distilbert-base-uncased-prueba | 8 | null | transformers | 13,674 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv3
model-index:
- name: distilbert-base-uncased-prueba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-prueba
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 8 | 3.3077 |
| No log | 2.0 | 16 | 3.3077 |
| No log | 3.0 | 24 | 3.3077 |
| No log | 4.0 | 32 | 3.3077 |
| No log | 5.0 | 40 | 3.3077 |
| No log | 6.0 | 48 | 3.3077 |
| No log | 7.0 | 56 | 3.3077 |
| No log | 8.0 | 64 | 3.3077 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
pthpth/ViTune | b14a48b012ae64633d225f8f94181d89e8ab3eae | 2022-07-15T07:10:08.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers"
]
| image-classification | false | pthpth | null | pthpth/ViTune | 8 | null | transformers | 13,675 | Entry not found |
pthpth/ViTFineTuned | 8036da39244f11393d2868e7085c1be8e376bfc1 | 2022-07-15T09:43:28.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:imagefolder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| image-classification | false | pthpth | null | pthpth/ViTFineTuned | 8 | null | transformers | 13,676 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: ViTFineTuned
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: KTH-TIPS2-b
type: images
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViTFineTuned
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the KTH-TIPS2-b dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0075
- Accuracy: 1.0
## Model description
Transfer learning by fine-tuning the Vision Transformer by Google on the KTH-TIPS2-b texture dataset.
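A minimal inference sketch (the image path is a placeholder; the predicted labels are the KTH-TIPS2-b material classes used for fine-tuning):

```python
from transformers import pipeline

# Texture/material classification with the fine-tuned ViT checkpoint
classifier = pipeline("image-classification", model="pthpth/ViTFineTuned")
print(classifier("path/to/texture_sample.png"))
```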
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2859 | 0.99 | 67 | 0.2180 | 0.9784 |
| 0.293 | 1.99 | 134 | 0.3308 | 0.9185 |
| 0.1444 | 2.99 | 201 | 0.1532 | 0.9568 |
| 0.0833 | 3.99 | 268 | 0.0515 | 0.9856 |
| 0.1007 | 4.99 | 335 | 0.0295 | 0.9904 |
| 0.0372 | 5.99 | 402 | 0.0574 | 0.9808 |
| 0.0919 | 6.99 | 469 | 0.0537 | 0.9880 |
| 0.0135 | 7.99 | 536 | 0.0117 | 0.9952 |
| 0.0472 | 8.99 | 603 | 0.0075 | 1.0 |
| 0.0151 | 9.99 | 670 | 0.0048 | 1.0 |
| 0.0052 | 10.99 | 737 | 0.0073 | 0.9976 |
| 0.0109 | 11.99 | 804 | 0.0198 | 0.9952 |
| 0.0033 | 12.99 | 871 | 0.0066 | 0.9976 |
| 0.011 | 13.99 | 938 | 0.0067 | 0.9976 |
| 0.0032 | 14.99 | 1005 | 0.0060 | 0.9976 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jinwooChoi/hjw_small_1 | 9d433e3c7bd391092dc1db64192312ad8b7f4d75 | 2022-07-15T08:09:11.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/hjw_small_1 | 8 | null | transformers | 13,677 | Entry not found |
AlexWortega/T5_potter | 54bd8f1e82e781ddfa714ccd1708e343782f3254 | 2022-07-15T10:31:04.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | AlexWortega | null | AlexWortega/T5_potter | 8 | null | transformers | 13,678 | Entry not found |
Jinchen/Optimum-Graphcore-Demo | 23b68874945609b45c1effc7c9c7ec10a33171a6 | 2022-07-15T14:48:08.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | Jinchen | null | Jinchen/Optimum-Graphcore-Demo | 8 | null | transformers | 13,679 | Entry not found |
Hamzaaa/wav2vec2-base-finetuned-3-eng-greek | d23283fd77a50c3eef37652d20d742228aa0277a | 2022-07-16T10:30:23.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
]
| audio-classification | false | Hamzaaa | null | Hamzaaa/wav2vec2-base-finetuned-3-eng-greek | 8 | null | transformers | 13,680 | Entry not found |
mrm8488/bloom-1b3-8bit | 03aef4bf1067599dd45b76c4b1188ad724d92178 | 2022-07-17T11:58:29.000Z | [
"pytorch",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zu",
"arxiv:2106.09685",
"transformers",
"license:bigscience-bloom-rail-1.0"
]
| text-generation | false | mrm8488 | null | mrm8488/bloom-1b3-8bit | 8 | null | transformers | 13,681 | ---
inference: false
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
pipeline_tag: text-generation
---
### Quantized bigscience/bloom 1B3 with 8-bit weights
Heavily inspired by [Hivemind's GPT-J-6B with 8-bit weights](https://huggingface.co/hivemind/gpt-j-6B-8bit), this is a version of [bigscience/bloom-1b3](https://huggingface.co/bigscience/bloom-1b3), a ~1 billion parameter language model that you can run and fine-tune with less memory.
Here, we also apply [LoRA (Low Rank Adaptation)](https://arxiv.org/abs/2106.09685) to reduce model size.
### How to fine-tune
TBA
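Until the full recipe is published, a minimal sketch of one possible approach, borrowing the adapter trick from the GPT-J-6B-8bit notebook: attach small trainable adapters to the frozen 8-bit layers (the `FrozenBNBLinear`/`FrozenBNBEmbedding` classes are defined in the snippet below) and train only those parameters.

```python
import torch.nn as nn

def add_adapters(model, adapter_dim=16):
    """Attach small trainable adapters to every frozen 8-bit linear/embedding module."""
    for module in model.modules():
        if isinstance(module, FrozenBNBLinear):
            module.adapter = nn.Sequential(
                nn.Linear(module.in_features, adapter_dim, bias=False),
                nn.Linear(adapter_dim, module.out_features, bias=False),
            )
            nn.init.zeros_(module.adapter[1].weight)  # start as a no-op
        elif isinstance(module, FrozenBNBEmbedding):
            module.adapter = nn.Sequential(
                nn.Embedding(module.num_embeddings, adapter_dim),
                nn.Linear(adapter_dim, module.embedding_dim, bias=False),
            )
            nn.init.zeros_(module.adapter[1].weight)

# add_adapters(model); then optimize only the adapter parameters with a standard training loop
```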
### How to use
This model can be used by adapting BLOOM's original implementation. The snippet below is adapted from [Hivemind's GPT-J 8-bit](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb):
```python
import transformers
import torch
import torch.nn as nn
import torch.nn.functional as F
from bitsandbytes.functional import quantize_blockwise, dequantize_blockwise
from typing import Tuple
from torch.cuda.amp import custom_fwd, custom_bwd
class FrozenBNBLinear(nn.Module):
def __init__(self, weight, absmax, code, bias=None):
assert isinstance(bias, nn.Parameter) or bias is None
super().__init__()
self.out_features, self.in_features = weight.shape
self.register_buffer("weight", weight.requires_grad_(False))
self.register_buffer("absmax", absmax.requires_grad_(False))
self.register_buffer("code", code.requires_grad_(False))
self.adapter = None
self.bias = bias
def forward(self, input):
output = DequantizeAndLinear.apply(input, self.weight, self.absmax, self.code, self.bias)
if self.adapter:
output += self.adapter(input)
return output
@classmethod
def from_linear(cls, linear: nn.Linear) -> "FrozenBNBLinear":
weights_int8, state = quantize_blockise_lowmemory(linear.weight)
return cls(weights_int8, *state, linear.bias)
def __repr__(self):
return f"{self.__class__.__name__}({self.in_features}, {self.out_features})"
class DequantizeAndLinear(torch.autograd.Function):
@staticmethod
@custom_fwd
def forward(ctx, input: torch.Tensor, weights_quantized: torch.ByteTensor,
absmax: torch.FloatTensor, code: torch.FloatTensor, bias: torch.FloatTensor):
weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
ctx.save_for_backward(input, weights_quantized, absmax, code)
ctx._has_bias = bias is not None
return F.linear(input, weights_deq, bias)
@staticmethod
@custom_bwd
def backward(ctx, grad_output: torch.Tensor):
assert not ctx.needs_input_grad[1] and not ctx.needs_input_grad[2] and not ctx.needs_input_grad[3]
input, weights_quantized, absmax, code = ctx.saved_tensors
# grad_output: [*batch, out_features]
weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
grad_input = grad_output @ weights_deq
grad_bias = grad_output.flatten(0, -2).sum(dim=0) if ctx._has_bias else None
return grad_input, None, None, None, grad_bias
class FrozenBNBEmbedding(nn.Module):
def __init__(self, weight, absmax, code):
super().__init__()
self.num_embeddings, self.embedding_dim = weight.shape
self.register_buffer("weight", weight.requires_grad_(False))
self.register_buffer("absmax", absmax.requires_grad_(False))
self.register_buffer("code", code.requires_grad_(False))
self.adapter = None
def forward(self, input, **kwargs):
with torch.no_grad():
            # note: both quantized weights and input indices are *not* differentiable
weight_deq = dequantize_blockwise(self.weight, absmax=self.absmax, code=self.code)
output = F.embedding(input, weight_deq, **kwargs)
if self.adapter:
output += self.adapter(input)
return output
@classmethod
def from_embedding(cls, embedding: nn.Embedding) -> "FrozenBNBEmbedding":
weights_int8, state = quantize_blockise_lowmemory(embedding.weight)
return cls(weights_int8, *state)
def __repr__(self):
return f"{self.__class__.__name__}({self.num_embeddings}, {self.embedding_dim})"
def quantize_blockise_lowmemory(matrix: torch.Tensor, chunk_size: int = 2 ** 20):
assert chunk_size % 4096 == 0
code = None
chunks = []
absmaxes = []
flat_tensor = matrix.view(-1)
for i in range((matrix.numel() - 1) // chunk_size + 1):
input_chunk = flat_tensor[i * chunk_size: (i + 1) * chunk_size].clone()
quantized_chunk, (absmax_chunk, code) = quantize_blockwise(input_chunk, code=code)
chunks.append(quantized_chunk)
absmaxes.append(absmax_chunk)
matrix_i8 = torch.cat(chunks).reshape_as(matrix)
absmax = torch.cat(absmaxes)
return matrix_i8, (absmax, code)
def convert_to_int8(model):
"""Convert linear and embedding modules to 8-bit with optional adapters"""
for module in list(model.modules()):
for name, child in module.named_children():
if isinstance(child, nn.Linear):
print(name, child)
setattr(
module,
name,
FrozenBNBLinear(
weight=torch.zeros(child.out_features, child.in_features, dtype=torch.uint8),
absmax=torch.zeros((child.weight.numel() - 1) // 4096 + 1),
code=torch.zeros(256),
bias=child.bias,
),
)
elif isinstance(child, nn.Embedding):
setattr(
module,
name,
FrozenBNBEmbedding(
weight=torch.zeros(child.num_embeddings, child.embedding_dim, dtype=torch.uint8),
absmax=torch.zeros((child.weight.numel() - 1) // 4096 + 1),
code=torch.zeros(256),
)
)
class BloomBlock(transformers.models.bloom.modeling_bloom.BloomBlock):
def __init__(self, config, layer_number=None):
super().__init__(config, layer_number)
convert_to_int8(self.self_attention)
convert_to_int8(self.mlp)
class BloomModel(transformers.models.bloom.modeling_bloom.BloomModel):
def __init__(self, config):
super().__init__(config)
convert_to_int8(self)
class BloomForCausalLM(transformers.models.bloom.modeling_bloom.BloomForCausalLM):
def __init__(self, config):
super().__init__(config)
convert_to_int8(self)
transformers.models.bloom.modeling_bloom.BloomBlock = BloomBlock
model_name = 'mrm8488/bloom-1b3-8bit'
model = BloomForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True)
tokenizer = BloomTokenizerFast.from_pretrained(model_name)
prompt = tokenizer("Given a table named salaries and columns id, created_at, salary, age. Creates a SQL to answer What is the average salary for 22 years old:", return_tensors='pt')
out = model.generate(**prompt, min_length=10, do_sample=True)
tokenizer.decode(out[0])
``` |
jinwooChoi/hjw_small2 | 4e187bca58cb3a398770279133e0c05ce729d8d4 | 2022-07-18T08:44:46.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/hjw_small2 | 8 | null | transformers | 13,682 | Entry not found |
huggingtweets/repmtg | 97c49e5b2e4ae92578f2a89e7d3039599ff1d98e | 2022-07-18T23:59:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/repmtg | 8 | null | transformers | 13,683 | ---
language: en
thumbnail: http://www.huggingtweets.com/repmtg/1658188604932/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1522919169599184896/CVPC3b3M_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rep. Marjorie Taylor Greene🇺🇸</div>
<div style="text-align: center; font-size: 14px;">@repmtg</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rep. Marjorie Taylor Greene🇺🇸.
| Data | Rep. Marjorie Taylor Greene🇺🇸 |
| --- | --- |
| Tweets downloaded | 1806 |
| Retweets | 230 |
| Short tweets | 114 |
| Tweets kept | 1462 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1shyu2gl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @repmtg's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ald5krkg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ald5krkg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/repmtg')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jinwooChoi/hjw_small3 | 556d464d1528511c19f506665044927ff5688ec6 | 2022-07-20T06:18:30.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/hjw_small3 | 8 | null | transformers | 13,684 | Entry not found |
furrutiav/beto_question_type | 8c6b84aa370aeca89f194668ea292970f4288365 | 2022-07-21T18:58:03.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | furrutiav | null | furrutiav/beto_question_type | 8 | 1 | transformers | 13,685 | Entry not found |
steven123/Check_Aligned_Teeth | 2d01acc8d67f432f93eae967c028e5cc88c8cada | 2022-07-20T00:59:05.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
]
| image-classification | false | steven123 | null | steven123/Check_Aligned_Teeth | 8 | null | transformers | 13,686 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Check_Aligned_Teeth
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9473684430122375
---
# Check_Aligned_Teeth
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Aligned Teeth

#### Crooked Teeth
 |
jinwooChoi/SKKU_AP_SA_KES_trained2 | ae99901a9b43da8942799c03f36fe36a38d31019 | 2022-07-21T04:11:50.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_AP_SA_KES_trained2 | 8 | null | transformers | 13,687 | Entry not found |
ar2rpapian/autotrain-Flexport_Classification_Desc-1155542601 | e22acac10f854c77b4f2386a77330021d0fcc7f5 | 2022-07-20T10:12:11.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:ar2rpapian/autotrain-data-Flexport_Classification_Desc",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | ar2rpapian | null | ar2rpapian/autotrain-Flexport_Classification_Desc-1155542601 | 8 | null | transformers | 13,688 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ar2rpapian/autotrain-data-Flexport_Classification_Desc
co2_eq_emissions: 206.60369255723003
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1155542601
- CO2 Emissions (in grams): 206.60369255723003
## Validation Metrics
- Loss: 0.22105568647384644
- Accuracy: 0.9578838092484789
- Macro F1: 0.9360695960738429
- Micro F1: 0.9578838092484788
- Weighted F1: 0.957863360811612
- Macro Precision: 0.9415730549729362
- Micro Precision: 0.9578838092484789
- Weighted Precision: 0.9586754512711492
- Macro Recall: 0.9329742157218464
- Micro Recall: 0.9578838092484789
- Weighted Recall: 0.9578838092484789
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ar2rpapian/autotrain-Flexport_Classification_Desc-1155542601
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ar2rpapian/autotrain-Flexport_Classification_Desc-1155542601", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ar2rpapian/autotrain-Flexport_Classification_Desc-1155542601", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
jordyvl/biobert-base-cased-v1.2_ncbi_disease-lowC-CRF-first-ner | e16228d628f07f4bdb0a569ed29c9452e9de6b2b | 2022-07-20T09:06:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"transformers"
]
| null | false | jordyvl | null | jordyvl/biobert-base-cased-v1.2_ncbi_disease-lowC-CRF-first-ner | 8 | null | transformers | 13,689 | Entry not found |
ardauzunoglu/ConvBERTurk-NLI | c75e15a9bc9118df415b17b835bf0e3697c2226c | 2022-07-20T19:55:47.000Z | [
"pytorch",
"convbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
]
| sentence-similarity | false | ardauzunoglu | null | ardauzunoglu/ConvBERTurk-NLI | 8 | null | sentence-transformers | 13,690 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# ardauzunoglu/ConvBERTurk-NLI
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ardauzunoglu/ConvBERTurk-NLI')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ardauzunoglu/ConvBERTurk-NLI)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 34385 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: ConvBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Billwzl/20split_dataset_version1 | 7025069f365f1e0ae2764811b25516544679e99c | 2022-07-24T20:49:31.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | Billwzl | null | Billwzl/20split_dataset_version1 | 8 | null | transformers | 13,691 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 20split_dataset_version1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20split_dataset_version1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1942
## Model description
More information needed
## Intended uses & limitations
More information needed
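That said, a minimal masked-language-modelling sketch (the sentence is illustrative):

```python
from transformers import pipeline

# Predict the masked token with the further-pretrained DistilBERT checkpoint
fill = pipeline("fill-mask", model="Billwzl/20split_dataset_version1")
print(fill("The weather today is really [MASK]."))
```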
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.7475 | 1.0 | 11851 | 2.5194 |
| 2.5528 | 2.0 | 23702 | 2.4191 |
| 2.4649 | 3.0 | 35553 | 2.3646 |
| 2.4038 | 4.0 | 47404 | 2.3289 |
| 2.3632 | 5.0 | 59255 | 2.2922 |
| 2.3273 | 6.0 | 71106 | 2.2739 |
| 2.2964 | 7.0 | 82957 | 2.2494 |
| 2.2732 | 8.0 | 94808 | 2.2217 |
| 2.2526 | 9.0 | 106659 | 2.2149 |
| 2.2369 | 10.0 | 118510 | 2.2029 |
| 2.222 | 11.0 | 130361 | 2.2020 |
| 2.2135 | 12.0 | 142212 | 2.1942 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ASCCCCCCCC/PENGMENGJIE-finetuned-mix_info | d2f65fbb5853ab8e632a93388b56808674fd0bcd | 2022-07-22T05:50:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | ASCCCCCCCC | null | ASCCCCCCCC/PENGMENGJIE-finetuned-mix_info | 8 | null | transformers | 13,692 | Entry not found |
jegormeister/mmarco-mMiniLMv2-L12-H384-v1-pruned | 3f21480bc346689730a97451d0d8bd7e698fd572 | 2022-07-22T08:21:26.000Z | [
"pytorch"
]
| null | false | jegormeister | null | jegormeister/mmarco-mMiniLMv2-L12-H384-v1-pruned | 8 | null | null | 13,693 | Entry not found |
abdulmatinomotoso/combined_headline_generator | 0545bcb2a70cf06f021bd5fdbdbc2eadace8abba | 2022-07-22T21:39:39.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | abdulmatinomotoso | null | abdulmatinomotoso/combined_headline_generator | 8 | null | transformers | 13,694 | ---
tags:
- generated_from_trainer
model-index:
- name: combined_headline_generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# combined_headline_generator
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2719
## Model description
More information needed
## Intended uses & limitations
More information needed
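That said, a minimal headline-generation sketch (the article text is illustrative):

```python
from transformers import pipeline

# Headline generation framed as summarization with the fine-tuned Pegasus checkpoint
headline = pipeline("summarization", model="abdulmatinomotoso/combined_headline_generator")
article = (
    "City officials confirmed on Tuesday that the new transit line will open next spring, "
    "two years behind schedule and well over its original budget."
)
print(headline(article, max_length=32, min_length=5, do_sample=False))
```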
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.5723 | 0.96 | 300 | 3.2719 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Evelyn18/roberta-base-spanish-squades-modelo1 | ccbb55eed032d11be0b18d0805bda6b8cbd91577 | 2022-07-22T23:02:37.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Evelyn18 | null | Evelyn18/roberta-base-spanish-squades-modelo1 | 8 | null | transformers | 13,695 | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-modelo1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-modelo1
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 11
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 2.7892 |
| No log | 2.0 | 12 | 3.7037 |
| No log | 3.0 | 18 | 5.1221 |
| No log | 4.0 | 24 | 4.5988 |
| No log | 5.0 | 30 | 5.9202 |
| No log | 6.0 | 36 | 5.0345 |
| No log | 7.0 | 42 | 4.4421 |
| No log | 8.0 | 48 | 4.6969 |
| No log | 9.0 | 54 | 5.2084 |
| No log | 10.0 | 60 | 5.7001 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Ahmed007/bart-large-cnn-ibn-Shaddad-v1 | 117f1187c37cef7526839cb7b0a9c0b75820e390 | 2022-07-23T10:01:13.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"Poet",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Ahmed007 | null | Ahmed007/bart-large-cnn-ibn-Shaddad-v1 | 8 | null | transformers | 13,696 | ---
license: mit
tags:
- Poet
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-ibn-Shaddad-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-ibn-Shaddad-v1
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9162
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.0752 | 1.0 | 569 | 1.3579 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8769 | 2.0 | 1138 | 1.3172 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.7833 | 3.0 | 1707 | 0.9982 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.707 | 4.0 | 2276 | 0.9162 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
domenicrosati/deberta-v3-large-finetuned-synthetic-multi-class | a9b9dabd745824dcffbdac02a1237f69f553f72e | 2022-07-24T02:51:13.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | domenicrosati | null | domenicrosati/deberta-v3-large-finetuned-synthetic-multi-class | 8 | null | transformers | 13,697 | ---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- f1
- precision
- recall
model-index:
- name: deberta-v3-large-finetuned-synthetic-multi-class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-finetuned-synthetic-multi-class
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0223
- F1: 0.9961
- Precision: 0.9961
- Recall: 0.9961
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:---------:|:------:|
| 0.0278 | 1.0 | 10953 | 0.0352 | 0.9936 | 0.9935 | 0.9936 |
| 0.0143 | 2.0 | 21906 | 0.0252 | 0.9952 | 0.9952 | 0.9953 |
| 0.0014 | 3.0 | 32859 | 0.0267 | 0.9955 | 0.9955 | 0.9955 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ydmeira/segformer-b0-finetuned-pokemon | 0a6e016902e777a11ba729f38d32ae03fb837a0d | 2022-07-25T13:53:55.000Z | [
"pytorch",
"segformer",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| null | false | ydmeira | null | ydmeira/segformer-b0-finetuned-pokemon | 8 | null | transformers | 13,698 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-pokemon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-pokemon
This model is a fine-tuned version of [ydmeira/segformer-b0-finetuned-pokemon](https://huggingface.co/ydmeira/segformer-b0-finetuned-pokemon) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0157
- Mean Iou: 0.4970
- Mean Accuracy: 0.9940
- Overall Accuracy: 0.9940
- Per Category Iou: [0.0, 0.9940101727137823]
- Per Category Accuracy: [nan, 0.9940101727137823]
## Model description
More information needed
## Intended uses & limitations
More information needed
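That said, a minimal segmentation sketch (the image path is a placeholder; the mask uses the labels from fine-tuning):

```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

# Run semantic segmentation and recover a per-pixel class mask (at 1/4 of the input resolution)
extractor = SegformerFeatureExtractor.from_pretrained("ydmeira/segformer-b0-finetuned-pokemon")
model = SegformerForSemanticSegmentation.from_pretrained("ydmeira/segformer-b0-finetuned-pokemon")

image = Image.open("pokemon.png").convert("RGB")
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, height/4, width/4)
mask = logits.argmax(dim=1)[0]       # per-pixel predicted class ids
```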
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------:|:-------------------------:|
| 0.0175 | 45.0 | 1305 | 0.0157 | 0.4971 | 0.9943 | 0.9943 | [0.0, 0.9942906494536522] | [nan, 0.9942906494536522] |
| 0.018 | 46.0 | 1334 | 0.0157 | 0.4968 | 0.9936 | 0.9936 | [0.0, 0.9936369941650801] | [nan, 0.9936369941650801] |
| 0.0185 | 47.0 | 1363 | 0.0157 | 0.4971 | 0.9943 | 0.9943 | [0.0, 0.9942791789145462] | [nan, 0.9942791789145462] |
| 0.018 | 48.0 | 1392 | 0.0157 | 0.4969 | 0.9937 | 0.9937 | [0.0, 0.9937245121725857] | [nan, 0.9937245121725857] |
| 0.0183 | 49.0 | 1421 | 0.0157 | 0.4969 | 0.9939 | 0.9939 | [0.0, 0.9938530594161242] | [nan, 0.9938530594161242] |
| 0.0196 | 50.0 | 1450 | 0.0157 | 0.4970 | 0.9940 | 0.9940 | [0.0, 0.9940101727137823] | [nan, 0.9940101727137823] |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
akshatpandeyme/DialoGPT-small-parthiv | e69693357691e448cd97baeb812d1f04de19f995 | 2022-07-25T10:43:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | akshatpandeyme | null | akshatpandeyme/DialoGPT-small-parthiv | 8 | null | transformers | 13,699 | ---
tags:
- conversational
---
# parthiv DialoGPT Model |