modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Jeevesh8/6ep_bert_ft_cola-47 | 869c7411aadd8fe9bc8fb4a51da1411d2aa85da7 | 2022-05-14T13:17:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-47 | 4 | null | transformers | 19,700 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-48 | 3e60972407f958dc6b34ffa95325adfb6c62d926 | 2022-05-14T13:19:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-48 | 4 | null | transformers | 19,701 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-51 | 2f1f287c71c7b34c2c0669d7241f7c84d6868b55 | 2022-05-14T13:24:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-51 | 4 | null | transformers | 19,702 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-56 | e602531c2e9642a2495397ee1e14f939f95054bd | 2022-05-14T13:32:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-56 | 4 | null | transformers | 19,703 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-57 | dd3abf5ef577d6e6d1cbd7023831669f715cceba | 2022-05-14T13:34:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-57 | 4 | null | transformers | 19,704 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-58 | efdbb057230a52256c8de9985aaece44b1574892 | 2022-05-14T13:35:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-58 | 4 | null | transformers | 19,705 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-60 | d5df941b78163340e3ccc1f7ca28cd25aaef456b | 2022-05-14T13:39:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-60 | 4 | null | transformers | 19,706 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-64 | 040242743e7c8a9acd8e4beaa6610078286304e0 | 2022-05-14T13:45:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-64 | 4 | null | transformers | 19,707 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-65 | 60634ccb69ccdb6d81ab9951a0b7707ee3542d5f | 2022-05-14T13:47:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-65 | 4 | null | transformers | 19,708 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-67 | a4fecd367aa56a2ba91c452da970795b6b8e989b | 2022-05-14T13:50:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-67 | 4 | null | transformers | 19,709 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-68 | 879631772ff9032d795969669a5ab744b7b70435 | 2022-05-14T13:52:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-68 | 4 | null | transformers | 19,710 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-70 | a158b10f60ba33207eb018758a09015ea0cf94cc | 2022-05-14T13:55:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-70 | 4 | null | transformers | 19,711 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-72 | 692fa7fc8fd5bc5232042a87816d7b6f8ca643fe | 2022-05-14T13:59:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-72 | 4 | null | transformers | 19,712 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-74 | e8ce7d6cac38103fac7794c167e6ac0bb3c64525 | 2022-05-14T14:02:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-74 | 4 | null | transformers | 19,713 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-78 | e0a9a1c42ab16a2dfddc073e2a36d33d90fca7b9 | 2022-05-14T14:09:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-78 | 4 | null | transformers | 19,714 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-79 | 2d667b5b656efc6d89d52439292140e611fd06df | 2022-05-14T14:10:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-79 | 4 | null | transformers | 19,715 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-82 | 64d99c3dbe1686024c4a28e9a9ba6c3b09fe7d58 | 2022-05-14T14:15:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-82 | 4 | null | transformers | 19,716 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-83 | 018970d277c7421ad7041aa8ec3c15a1a285eaa1 | 2022-05-14T14:17:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-83 | 4 | null | transformers | 19,717 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-84 | 78b9c423221758a7c84f64b4f5d418520358c1c9 | 2022-05-14T14:19:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-84 | 4 | null | transformers | 19,718 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-85 | 830a699d328c462bd1e0187e39b519ebead955ac | 2022-05-14T14:20:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-85 | 4 | null | transformers | 19,719 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-86 | 473a5fcea1f60446302667ab323f96741676c362 | 2022-05-14T14:22:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-86 | 4 | null | transformers | 19,720 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-88 | 8bc46cd193597113f16dd2303b86ffff9106b9ff | 2022-05-14T14:25:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-88 | 4 | null | transformers | 19,721 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-89 | 71928fd8c402abf713da3886b1456d3d204c79c7 | 2022-05-14T14:27:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-89 | 4 | null | transformers | 19,722 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-90 | 82cd62bf28b076f9e5576fb3305c521150e59d68 | 2022-05-14T14:29:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-90 | 4 | null | transformers | 19,723 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-91 | ca9b7e906c0320a7ba64cd9ce91ebc9939cf7478 | 2022-05-14T14:30:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-91 | 4 | null | transformers | 19,724 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-92 | d4469bfcade5f576fe5622224e8e0d70957ff820 | 2022-05-14T14:32:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-92 | 4 | null | transformers | 19,725 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-93 | dd424199cae1abc6ea1db56115d659350307103f | 2022-05-14T14:34:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-93 | 4 | null | transformers | 19,726 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-94 | c62027a1b728546a3fca04e1ea096328a2a57726 | 2022-05-14T14:36:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-94 | 4 | null | transformers | 19,727 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-95 | ceb1374f8002ac4b515a62d6e3be65d21b386f04 | 2022-05-14T14:37:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-95 | 4 | null | transformers | 19,728 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-96 | 0625e319b6eeeae39dd3885a92dc201ba5a8410e | 2022-05-14T14:39:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-96 | 4 | null | transformers | 19,729 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-97 | 63d334bb1acc00724d4a76f69ace8be7de8c217a | 2022-05-14T14:41:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-97 | 4 | null | transformers | 19,730 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-98 | f121a01774268dbb39699131fc11ac7430e623eb | 2022-05-14T14:42:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-98 | 4 | null | transformers | 19,731 | Entry not found |
Jeevesh8/6ep_bert_ft_cola-99 | 7dc2fcaca04913056c2132770ae5f5192d26dbb4 | 2022-05-14T14:44:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-99 | 4 | null | transformers | 19,732 | Entry not found |
PrajwalS/wav2vec2_train_large | 26c2e1c9394b73ca080aa1e1bf51285e02973ac1 | 2022-05-15T11:20:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | PrajwalS | null | PrajwalS/wav2vec2_train_large | 4 | 0 | transformers | 19,733 | Entry not found |
jtang9001/skynet_gpt2_1 | 5e2439f54c80fa2400e76107509df8a3872a6510 | 2022-05-15T00:33:31.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | jtang9001 | null | jtang9001/skynet_gpt2_1 | 4 | null | transformers | 19,734 | Entry not found |
jtang9001/skynet_gpt2_2 | 23b69a201d7e322d5f3ffd7231bc2af697252470 | 2022-05-15T01:43:00.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | jtang9001 | null | jtang9001/skynet_gpt2_2 | 4 | null | transformers | 19,735 | Entry not found |
Barik/testvata | 511a5734b423b0a8a4b005cb3549e0f5ad92c800 | 2022-05-15T09:21:30.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Barik | null | Barik/testvata | 4 | null | transformers | 19,736 | Entry not found |
Zohar/distilgpt2-finetuned-negative-restaurant-reviews-clean | 7fe2d63456526af2d67ef74e4bd4cb264eae851d | 2022-05-15T14:12:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | Zohar | null | Zohar/distilgpt2-finetuned-negative-restaurant-reviews-clean | 4 | null | transformers | 19,737 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-negative-restaurant-reviews-clean
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-negative-restaurant-reviews-clean
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6841 | 1.0 | 3105 | 3.5793 |
| 3.6184 | 2.0 | 6210 | 3.5313 |
| 3.5943 | 3.0 | 9315 | 3.5187 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.11.0
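## How to use
A minimal generation sketch, assuming the standard `transformers` text-generation pipeline works with this checkpoint; the prompt and generation settings are illustrative:
```python
from transformers import pipeline

# Checkpoint id as published above; prompt and settings are illustrative.
generator = pipeline("text-generation", model="Zohar/distilgpt2-finetuned-negative-restaurant-reviews-clean")
print(generator("The food was", max_length=50, num_return_sequences=1)[0]["generated_text"])
```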
|
ali-issa/FYP_ARABIC | bf791128ff70203d72011188a69431da73796e28 | 2022-05-15T19:44:11.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ali-issa | null | ali-issa/FYP_ARABIC | 4 | null | transformers | 19,738 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-arabic-gpu-colab-similar-to-german-bigger-warm-up
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-arabic-gpu-colab-similar-to-german-bigger-warm-up
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6370
- Wer: 0.4146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.4958 | 2.83 | 400 | 3.4822 | 1.0 |
| 3.2281 | 5.67 | 800 | 2.9404 | 1.0 |
| 2.942 | 8.51 | 1200 | 2.8690 | 1.0 |
| 2.6346 | 11.35 | 1600 | 1.5452 | 0.9994 |
| 1.3472 | 14.18 | 2000 | 0.8261 | 0.6853 |
| 0.8972 | 17.02 | 2400 | 0.6812 | 0.5737 |
| 0.6924 | 19.85 | 2800 | 0.6552 | 0.5291 |
| 0.5687 | 22.69 | 3200 | 0.6108 | 0.4909 |
| 0.4734 | 25.53 | 3600 | 0.5877 | 0.4674 |
| 0.4029 | 28.37 | 4000 | 0.6204 | 0.4662 |
| 0.3483 | 31.2 | 4400 | 0.5932 | 0.4451 |
| 0.307 | 34.04 | 4800 | 0.6445 | 0.4392 |
| 0.2722 | 36.88 | 5200 | 0.6126 | 0.4292 |
| 0.2247 | 39.71 | 5600 | 0.6370 | 0.4146 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
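## How to use
A minimal transcription sketch, assuming the standard `transformers` automatic-speech-recognition pipeline and 16 kHz audio; the file path is a placeholder:
```python
from transformers import pipeline

# Checkpoint id as published above; the audio path is a placeholder for a 16 kHz recording.
asr = pipeline("automatic-speech-recognition", model="ali-issa/FYP_ARABIC")
print(asr("arabic_sample.wav")["text"])
```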
|
prashanth/mbart-large-cc25-ge-hi-to-en | f086bbfc6b5dab16c158663a7c51afe3b63de4c7 | 2022-05-16T13:47:51.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"dataset:hindi_english_machine_translation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | prashanth | null | prashanth/mbart-large-cc25-ge-hi-to-en | 4 | null | transformers | 19,739 | ---
tags:
- generated_from_trainer
datasets:
- hindi_english_machine_translation
metrics:
- bleu
model-index:
- name: mbart-large-cc25-ge-hi-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: hindi_english_machine_translation
type: hindi_english_machine_translation
args: hi-en
metrics:
- name: Bleu
type: bleu
value: 0.1823
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-ge-hi-to-en
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the hindi_english_machine_translation dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1000
- Bleu: 0.1823
- Gen Len: 1023.383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:--------:|
| 1.4078 | 1.0 | 135739 | 1.1000 | 0.1823 | 1023.383 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu102
- Datasets 1.18.0
- Tokenizers 0.12.1
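## How to use
A minimal translation sketch, assuming the saved tokenizer already carries the mBART source/target language settings; depending on how the checkpoint was exported, the mBART language codes (e.g. `hi_IN`/`en_XX`) may need to be set explicitly. The Hindi sentence is illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "prashanth/mbart-large-cc25-ge-hi-to-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Encode an illustrative Hindi sentence and generate its English translation.
inputs = tokenizer("नमस्ते, आप कैसे हैं?", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```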
|
PrajwalS/wav2vec2_train_large_on_untrained | 6bf4b1cdc5af0c4f89ea9e7cfa42aa18ababc442 | 2022-05-16T04:33:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | PrajwalS | null | PrajwalS/wav2vec2_train_large_on_untrained | 4 | null | transformers | 19,740 | Entry not found |
Gnosky/distilgpt2-finetuned-wikitext2 | 7466ba6462e3bcba60eccaa7b861ce9ec0d8fecf | 2022-05-16T04:48:55.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | Gnosky | null | Gnosky/distilgpt2-finetuned-wikitext2 | 4 | null | transformers | 19,741 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
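## Evaluation perplexity
The card reports only the cross-entropy loss on the evaluation set; for a causal LM the corresponding perplexity is simply `exp(loss)`, which can be checked with a few lines of Python:
```python
import math

eval_loss = 3.6421  # final validation loss from the table above
print(f"perplexity ≈ {math.exp(eval_loss):.1f}")  # ≈ 38.2
```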
|
yogeshchandrasekharuni/bart-paraphrase-finetuned-xsum-v2 | cf0620b6bef71726dea0de46b6722963488544cf | 2022-05-16T05:52:18.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | yogeshchandrasekharuni | null | yogeshchandrasekharuni/bart-paraphrase-finetuned-xsum-v2 | 4 | null | transformers | 19,742 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-paraphrase-finetuned-xsum-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-finetuned-xsum-v2
This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2329
- Rouge1: 100.0
- Rouge2: 100.0
- Rougel: 100.0
- Rougelsum: 100.0
- Gen Len: 9.2619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 21 | 1.2954 | 66.7012 | 60.8612 | 66.5163 | 66.4352 | 13.2857 |
| No log | 2.0 | 42 | 0.6866 | 86.8284 | 82.7835 | 86.7208 | 86.784 | 9.5238 |
| No log | 3.0 | 63 | 0.4652 | 95.1892 | 93.5619 | 95.2567 | 95.1657 | 10.3095 |
| No log | 4.0 | 84 | 0.4280 | 97.7463 | 97.1782 | 97.8708 | 97.718 | 9.5 |
| No log | 5.0 | 105 | 0.3712 | 99.6435 | 99.5767 | 99.6435 | 99.6435 | 9.3571 |
| No log | 6.0 | 126 | 0.4451 | 99.2695 | 98.9418 | 99.1883 | 99.3506 | 9.3095 |
| No log | 7.0 | 147 | 0.3169 | 99.246 | 99.0232 | 99.246 | 99.4048 | 9.619 |
| No log | 8.0 | 168 | 0.2942 | 100.0 | 100.0 | 100.0 | 100.0 | 9.4048 |
| No log | 9.0 | 189 | 0.3105 | 100.0 | 100.0 | 100.0 | 100.0 | 9.1667 |
| No log | 10.0 | 210 | 0.3035 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2619 |
| No log | 11.0 | 231 | 0.2983 | 100.0 | 100.0 | 100.0 | 100.0 | 10.5714 |
| No log | 12.0 | 252 | 0.2497 | 100.0 | 100.0 | 100.0 | 100.0 | 9.4286 |
| No log | 13.0 | 273 | 0.2911 | 100.0 | 100.0 | 100.0 | 100.0 | 9.1667 |
| No log | 14.0 | 294 | 0.2619 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2143 |
| No log | 15.0 | 315 | 0.2510 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2381 |
| No log | 16.0 | 336 | 0.2647 | 100.0 | 100.0 | 100.0 | 100.0 | 9.9048 |
| No log | 17.0 | 357 | 0.2438 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2143 |
| No log | 18.0 | 378 | 0.2324 | 100.0 | 100.0 | 100.0 | 100.0 | 9.3095 |
| No log | 19.0 | 399 | 0.2296 | 100.0 | 100.0 | 100.0 | 100.0 | 9.3095 |
| No log | 20.0 | 420 | 0.2329 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2619 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
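## How to use
A minimal paraphrasing sketch, assuming the standard `transformers` text2text-generation pipeline works with this checkpoint; the input sentence is illustrative:
```python
from transformers import pipeline

# Checkpoint id as published above.
paraphraser = pipeline("text2text-generation", model="yogeshchandrasekharuni/bart-paraphrase-finetuned-xsum-v2")
print(paraphraser("The weather today is exceptionally pleasant.", max_length=32)[0]["generated_text"])
```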
|
Yarn007/autotrain-Napkin-872827783 | 75576b61d6b26a9a51d7990bc2fbae5da444f5e2 | 2022-05-16T13:01:19.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:Yarn007/autotrain-data-Napkin",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | Yarn007 | null | Yarn007/autotrain-Napkin-872827783 | 4 | null | transformers | 19,743 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Yarn007/autotrain-data-Napkin
co2_eq_emissions: 0.020162211418903533
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 872827783
- CO2 Emissions (in grams): 0.020162211418903533
## Validation Metrics
- Loss: 0.25198695063591003
- Accuracy: 0.9325714285714286
- Macro F1: 0.9254931094274171
- Micro F1: 0.9325714285714286
- Weighted F1: 0.9323540959391766
- Macro Precision: 0.9286720054236212
- Micro Precision: 0.9325714285714286
- Weighted Precision: 0.9324375609546055
- Macro Recall: 0.9227549386201338
- Micro Recall: 0.9325714285714286
- Weighted Recall: 0.9325714285714286
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Yarn007/autotrain-Napkin-872827783
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Yarn007/autotrain-Napkin-872827783", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Yarn007/autotrain-Napkin-872827783", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
drGOD/rubert-tiny-finetuned-cola | 06be372f3059a15f38cadf14ceee037938c695c1 | 2022-05-17T14:44:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | drGOD | null | drGOD/rubert-tiny-finetuned-cola | 4 | null | transformers | 19,744 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: rubert-tiny-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-tiny-finetuned-cola
This model is a fine-tuned version of [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0013
- Matthews Correlation: 0.9994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.0640317288646484e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.0326 | 1.0 | 2667 | 0.0180 | 0.9907 |
| 0.0143 | 2.0 | 5334 | 0.0075 | 0.9957 |
| 0.0102 | 3.0 | 8001 | 0.0049 | 0.9979 |
| 0.0026 | 4.0 | 10668 | 0.0019 | 0.9993 |
| 0.0018 | 5.0 | 13335 | 0.0013 | 0.9994 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
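## How to use
A minimal classification sketch, assuming the standard `transformers` text-classification pipeline; the returned label names depend on how the fine-tuning config was saved, and the Russian example sentence is illustrative:
```python
from transformers import pipeline

# Checkpoint id as published above; label names come from the saved config.
classifier = pipeline("text-classification", model="drGOD/rubert-tiny-finetuned-cola")
print(classifier("Это предложение звучит вполне естественно."))
```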
|
Danni/distilbert-base-uncased-finetuned-dbpedia-label | 8cfb18fa125a59bfa7c05e1fb2bcd5da4619ffed | 2022-05-16T15:16:22.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | Danni | null | Danni/distilbert-base-uncased-finetuned-dbpedia-label | 4 | null | transformers | 19,745 | Entry not found |
Caesarcc/bertimbau-finetune-br-news | dfdd54b38d617e996c9691d3accaf65e0708f5be | 2022-05-17T02:30:05.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | Caesarcc | null | Caesarcc/bertimbau-finetune-br-news | 4 | null | transformers | 19,746 | ---
license: mit
---
|
anuj55/roberta-base-squad2-finetuned-polifact | b7d75e3ac265463cbedd378c836fa4930174c8f3 | 2022-05-17T09:52:11.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | anuj55 | null | anuj55/roberta-base-squad2-finetuned-polifact | 4 | null | transformers | 19,747 | Entry not found |
dog/resnet50 | 27831fb05939a2c5e6c80b27c0cdfac60ebc45ba | 2022-05-17T08:58:48.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | dog | null | dog/resnet50 | 4 | null | timm | 19,748 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for dog/resnet50 |
huggingtweets/gduvivier-guilhermeboulos-ptbrasil | 42d6dfa8f37b7a303ad015b71ef702e784375b3c | 2022-05-17T17:55:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/gduvivier-guilhermeboulos-ptbrasil | 4 | null | transformers | 19,749 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1410721079383969795/28HNul1J_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/936390568946651136/mFZ9oOfR_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1221967496640704512/3lOox3Kt_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">PT Brasil & Gregorio Duvivier & Guilherme Boulos</div>
<div style="text-align: center; font-size: 14px;">@gduvivier-guilhermeboulos-ptbrasil</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from PT Brasil & Gregorio Duvivier & Guilherme Boulos.
| Data | PT Brasil | Gregorio Duvivier | Guilherme Boulos |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3223 | 3248 |
| Retweets | 535 | 1358 | 657 |
| Short tweets | 116 | 450 | 122 |
| Tweets kept | 2599 | 1415 | 2469 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1dcswedc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gduvivier-guilhermeboulos-ptbrasil's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/202hdnnd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/202hdnnd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gduvivier-guilhermeboulos-ptbrasil')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
CEBaB/roberta-base.CEBaB.absa.exclusive.seed_42 | c37d95e3d8ee1e156de64c4922c58071e3321024 | 2022-05-17T19:57:04.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.absa.exclusive.seed_42 | 4 | null | transformers | 19,750 | Entry not found |
CEBaB/roberta-base.CEBaB.absa.exclusive.seed_66 | dfc3afb9b67309860269b478d161c421ff8ed6c6 | 2022-05-17T20:09:04.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.absa.exclusive.seed_66 | 4 | null | transformers | 19,751 | Entry not found |
CEBaB/lstm.CEBaB.absa.exclusive.seed_66 | d57081daa1cd3ef4ee516f28914aed05bfce784e | 2022-05-17T20:19:56.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/lstm.CEBaB.absa.exclusive.seed_66 | 4 | null | transformers | 19,752 | Entry not found |
CEBaB/roberta-base.CEBaB.absa.exclusive.seed_77 | a227c773ace3e2c4a0bdb5158236d1a03cfba386 | 2022-05-17T20:20:51.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.absa.exclusive.seed_77 | 4 | null | transformers | 19,753 | Entry not found |
CEBaB/lstm.CEBaB.absa.exclusive.seed_77 | 389e967794e6a0e6bb40adaa965c8ab91563d665 | 2022-05-17T20:31:55.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/lstm.CEBaB.absa.exclusive.seed_77 | 4 | null | transformers | 19,754 | Entry not found |
CEBaB/roberta-base.CEBaB.absa.exclusive.seed_88 | 15db1c406b9e4f974c48560cdfda990a9869fd30 | 2022-05-17T20:32:43.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.absa.exclusive.seed_88 | 4 | null | transformers | 19,755 | Entry not found |
CEBaB/lstm.CEBaB.absa.exclusive.seed_88 | c96d9b66e5a655de86408a19b02b622701d0ad91 | 2022-05-17T20:43:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/lstm.CEBaB.absa.exclusive.seed_88 | 4 | null | transformers | 19,756 | Entry not found |
CEBaB/roberta-base.CEBaB.absa.exclusive.seed_99 | 284b401e6bb5d9129b95aed0ab35fcd15ad8bbeb | 2022-05-17T20:44:19.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.absa.exclusive.seed_99 | 4 | null | transformers | 19,757 | Entry not found |
CEBaB/lstm.CEBaB.absa.exclusive.seed_99 | 206ef31889240b4323fcd6540a1860681dd9ce17 | 2022-05-17T20:55:11.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/lstm.CEBaB.absa.exclusive.seed_99 | 4 | null | transformers | 19,758 | Entry not found |
CEBaB/lstm.CEBaB.absa.inclusive.seed_42 | e7f2524747bc4027bc89b733bf39edfea9bd9d80 | 2022-05-17T23:52:34.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/lstm.CEBaB.absa.inclusive.seed_42 | 4 | null | transformers | 19,759 | Entry not found |
CEBaB/lstm.CEBaB.absa.inclusive.seed_66 | 2c178ccca493b40e5410548843987780a2842c48 | 2022-05-18T00:09:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/lstm.CEBaB.absa.inclusive.seed_66 | 4 | null | transformers | 19,760 | Entry not found |
CEBaB/roberta-base.CEBaB.absa.inclusive.seed_77 | 46a5f71f1caf4ee9de8564148254b2cfb64c8696 | 2022-05-18T00:14:56.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.absa.inclusive.seed_77 | 4 | null | transformers | 19,761 | Entry not found |
CEBaB/lstm.CEBaB.absa.inclusive.seed_77 | 0f06e15c2288653af00bd2f0f7db92dea9d44800 | 2022-05-18T00:26:05.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/lstm.CEBaB.absa.inclusive.seed_77 | 4 | null | transformers | 19,762 | Entry not found |
CEBaB/roberta-base.CEBaB.absa.inclusive.seed_99 | c6b143fde76c91393ef51bb9be90587ee7dc4cf4 | 2022-05-18T00:49:41.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.absa.inclusive.seed_99 | 4 | null | transformers | 19,763 | Entry not found |
birgermoell/wav2vec2-liepa-1-percent | 7f5d5e3192afdc4876e1e41c8b814887550947d1 | 2022-05-18T10:54:55.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"lt",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/wav2vec2-liepa-1-percent | 4 | null | transformers | 19,764 | ---
language:
- lt
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
model-index:
- name: wav2vec2-liepa-1-percent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-liepa-1-percent
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - LT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5774
- Wer: 0.5079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.23 | 100 | 3.3596 | 1.0 |
| No log | 0.46 | 200 | 2.9280 | 1.0 |
| No log | 0.69 | 300 | 1.5091 | 0.9650 |
| No log | 0.93 | 400 | 0.9943 | 0.9177 |
| 3.1184 | 1.16 | 500 | 0.7590 | 0.7793 |
| 3.1184 | 1.39 | 600 | 0.7336 | 0.7408 |
| 3.1184 | 1.62 | 700 | 0.7040 | 0.7618 |
| 3.1184 | 1.85 | 800 | 0.6815 | 0.7233 |
| 3.1184 | 2.08 | 900 | 0.6457 | 0.6865 |
| 0.7917 | 2.31 | 1000 | 0.5705 | 0.6813 |
| 0.7917 | 2.55 | 1100 | 0.5708 | 0.6620 |
| 0.7917 | 2.78 | 1200 | 0.5888 | 0.6462 |
| 0.7917 | 3.01 | 1300 | 0.6509 | 0.6970 |
| 0.7917 | 3.24 | 1400 | 0.5871 | 0.6462 |
| 0.5909 | 3.47 | 1500 | 0.6199 | 0.6813 |
| 0.5909 | 3.7 | 1600 | 0.6230 | 0.5919 |
| 0.5909 | 3.94 | 1700 | 0.5721 | 0.6427 |
| 0.5909 | 4.17 | 1800 | 0.5331 | 0.5867 |
| 0.5909 | 4.4 | 1900 | 0.5561 | 0.6007 |
| 0.4607 | 4.63 | 2000 | 0.5414 | 0.5849 |
| 0.4607 | 4.86 | 2100 | 0.5390 | 0.5587 |
| 0.4607 | 5.09 | 2200 | 0.5313 | 0.5569 |
| 0.4607 | 5.32 | 2300 | 0.5893 | 0.5797 |
| 0.4607 | 5.56 | 2400 | 0.5507 | 0.5954 |
| 0.3933 | 5.79 | 2500 | 0.5521 | 0.6025 |
| 0.3933 | 6.02 | 2600 | 0.5663 | 0.5989 |
| 0.3933 | 6.25 | 2700 | 0.5636 | 0.5832 |
| 0.3933 | 6.48 | 2800 | 0.5464 | 0.5919 |
| 0.3933 | 6.71 | 2900 | 0.5623 | 0.5832 |
| 0.3367 | 6.94 | 3000 | 0.5324 | 0.5692 |
| 0.3367 | 7.18 | 3100 | 0.5907 | 0.5394 |
| 0.3367 | 7.41 | 3200 | 0.5653 | 0.5814 |
| 0.3367 | 7.64 | 3300 | 0.5707 | 0.5814 |
| 0.3367 | 7.87 | 3400 | 0.5754 | 0.5429 |
| 0.2856 | 8.1 | 3500 | 0.5953 | 0.5569 |
| 0.2856 | 8.33 | 3600 | 0.6275 | 0.5394 |
| 0.2856 | 8.56 | 3700 | 0.6253 | 0.5569 |
| 0.2856 | 8.8 | 3800 | 0.5930 | 0.5429 |
| 0.2856 | 9.03 | 3900 | 0.6082 | 0.5219 |
| 0.2522 | 9.26 | 4000 | 0.6026 | 0.5447 |
| 0.2522 | 9.49 | 4100 | 0.6052 | 0.5271 |
| 0.2522 | 9.72 | 4200 | 0.5871 | 0.5219 |
| 0.2522 | 9.95 | 4300 | 0.5870 | 0.5236 |
| 0.2522 | 10.19 | 4400 | 0.5881 | 0.5131 |
| 0.2167 | 10.42 | 4500 | 0.6122 | 0.5289 |
| 0.2167 | 10.65 | 4600 | 0.6128 | 0.5166 |
| 0.2167 | 10.88 | 4700 | 0.6135 | 0.5377 |
| 0.2167 | 11.11 | 4800 | 0.6055 | 0.5184 |
| 0.2167 | 11.34 | 4900 | 0.6725 | 0.5569 |
| 0.1965 | 11.57 | 5000 | 0.6482 | 0.5429 |
| 0.1965 | 11.81 | 5100 | 0.6037 | 0.5096 |
| 0.1965 | 12.04 | 5200 | 0.5931 | 0.5131 |
| 0.1965 | 12.27 | 5300 | 0.5853 | 0.5114 |
| 0.1965 | 12.5 | 5400 | 0.5798 | 0.5219 |
| 0.172 | 12.73 | 5500 | 0.5775 | 0.5009 |
| 0.172 | 12.96 | 5600 | 0.5782 | 0.5044 |
| 0.172 | 13.19 | 5700 | 0.5804 | 0.5184 |
| 0.172 | 13.43 | 5800 | 0.5977 | 0.5219 |
| 0.172 | 13.66 | 5900 | 0.6069 | 0.5236 |
| 0.1622 | 13.89 | 6000 | 0.5850 | 0.5131 |
| 0.1622 | 14.12 | 6100 | 0.5758 | 0.5096 |
| 0.1622 | 14.35 | 6200 | 0.5752 | 0.5009 |
| 0.1622 | 14.58 | 6300 | 0.5727 | 0.5184 |
| 0.1622 | 14.81 | 6400 | 0.5795 | 0.5044 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
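## How to use
A minimal CTC inference sketch, assuming 16 kHz mono audio and the standard `Wav2Vec2Processor`/`Wav2Vec2ForCTC` classes; the file path is a placeholder:
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "birgermoell/wav2vec2-liepa-1-percent"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a 16 kHz mono recording (placeholder path) and run greedy CTC decoding.
speech, _ = torchaudio.load("lithuanian_sample.wav")
inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```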
|
FrGes/xlm-roberta-large-finetuned-EUJAV-datasetAB | c71556f3e846301a6346f5d6ca0873910657e631 | 2022-05-18T11:30:34.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | FrGes | null | FrGes/xlm-roberta-large-finetuned-EUJAV-datasetAB | 4 | null | transformers | 19,765 | Fine-tuned model based on
# XLM-RoBERTa (large-sized model)
Data for fine-tuning:
Italian vaccine stance data: 1042 training tweets and 348 evaluation tweets

# BibTeX entry and citation info
To be added. |
ruselkomp/deep-pavlov-full | 5c3422a14785faf2ce3f5cb508d0f4a8c9e969e9 | 2022-05-18T17:16:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ruselkomp | null | ruselkomp/deep-pavlov-full | 4 | null | transformers | 19,766 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-0 | 92477fd3b87e6b7ba138993fce0afca63a3a9f81 | 2022-05-18T18:18:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-0 | 4 | null | transformers | 19,767 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-1 | f768383e64e031040768ff1f07e2c53f443e65bd | 2022-05-18T18:19:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-1 | 4 | null | transformers | 19,768 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-2 | e659b6177f0b6f5c535ed327743ef3b636ebf72d | 2022-05-18T18:21:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-2 | 4 | null | transformers | 19,769 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-54 | c82a34ce5945b0e7d31e98dfa4b76fe660e89ac5 | 2022-05-18T18:36:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-54 | 4 | null | transformers | 19,770 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-55 | c802bd6c006711c3f42d71507bf5405054b9ec29 | 2022-05-18T18:38:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-55 | 4 | null | transformers | 19,771 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-64 | 141d7b2fa3ebbf6adab02f5fd47c404616fd60ff | 2022-05-18T18:42:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-64 | 4 | null | transformers | 19,772 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-65 | 4e069de87a60fcbdc5dc39dfd4d6bfcf78b5dd50 | 2022-05-18T18:43:52.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-65 | 4 | null | transformers | 19,773 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-66 | caa3ea919089e31dd0a1cdc3e9ecfac22ce19c27 | 2022-05-18T18:45:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-66 | 4 | null | transformers | 19,774 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-67 | 41087a889a4653ae1e18ed14b9dab13265463ac7 | 2022-05-18T18:47:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-67 | 4 | null | transformers | 19,775 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-68 | 4149ace1c3790a4dc76eb34d0f993d6586242398 | 2022-05-18T18:49:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-68 | 4 | null | transformers | 19,776 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-69 | d6a1e492e6902e2bb05b959228cd6faaa4d430c1 | 2022-05-18T18:51:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-69 | 4 | null | transformers | 19,777 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-70 | 06b3cddc30a7e06ae63e7ceaae994b0ab2e99096 | 2022-05-18T18:53:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-70 | 4 | null | transformers | 19,778 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-77 | 70619ad7a0c59e902720f7eb41fa77b645495033 | 2022-05-18T19:05:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-77 | 4 | null | transformers | 19,779 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-81 | 0b8ff79a77bbeea5cba6ae6b08d69d76523319c4 | 2022-05-18T19:13:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-81 | 4 | null | transformers | 19,780 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-82 | 8e1bcbafca79c5b7240e9685330c50c0262ff44d | 2022-05-18T19:15:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-82 | 4 | null | transformers | 19,781 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-83 | 210d6c1451557ef94baa6925fd5f578a1680b687 | 2022-05-18T19:16:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-83 | 4 | null | transformers | 19,782 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-84 | 2b6cf61d9327ff91b2e8cb3ca2bd806f819067b5 | 2022-05-18T19:18:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-84 | 4 | null | transformers | 19,783 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-87 | 353d81b3c74bd6313dd2bb18be5f583ce6a87b86 | 2022-05-18T19:24:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-87 | 4 | null | transformers | 19,784 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-88 | 88f9e05be89b1d0156dc7b04067a37e128e4b043 | 2022-05-18T19:25:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-88 | 4 | null | transformers | 19,785 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-89 | 09a6c35e572dfebac6cd3cd2237337a81fb90c93 | 2022-05-18T19:27:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-89 | 4 | null | transformers | 19,786 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-90 | bee3002fa5ad9eb9d257b493e7c8760858caf256 | 2022-05-18T19:29:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-90 | 4 | null | transformers | 19,787 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-91 | 728c2a372298f1d9691e09ce905250b7e2af776c | 2022-05-18T19:31:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-91 | 4 | null | transformers | 19,788 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-96 | 7e9e133e28c63b3ef9c7123a3ca68b57bbe02653 | 2022-05-18T19:34:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-96 | 4 | null | transformers | 19,789 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-97 | adf2326998806ff0f8ce45c57e14b8f7314bbf4a | 2022-05-18T19:35:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-97 | 4 | null | transformers | 19,790 | Entry not found |
Jeevesh8/512seq_len_6ep_bert_ft_cola-98 | e320f68e7c27526eec33fa729c58baf761f6cd1f | 2022-05-18T19:37:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/512seq_len_6ep_bert_ft_cola-98 | 4 | null | transformers | 19,791 | Entry not found |
Suhong/distilbert-base-uncased-emoji_mask_wearing | 4349919374e121a536315f4a7c2822a4ec086d30 | 2022-05-19T12:50:04.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | Suhong | null | Suhong/distilbert-base-uncased-emoji_mask_wearing | 4 | null | transformers | 19,792 | Entry not found |
calcworks/distilbert-base-uncased-finetuned-clinc | f710dd9bfa22a4aa4d7b50dc4b5bd50bc13f48a0 | 2022-05-19T16:55:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | calcworks | null | calcworks/distilbert-base-uncased-finetuned-clinc | 4 | null | transformers | 19,793 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9161290322580645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7755
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2893 | 1.0 | 318 | 3.2831 | 0.7403 |
| 2.629 | 2.0 | 636 | 1.8731 | 0.8348 |
| 1.5481 | 3.0 | 954 | 1.1581 | 0.8906 |
| 1.0137 | 4.0 | 1272 | 0.8585 | 0.9077 |
| 0.797 | 5.0 | 1590 | 0.7755 | 0.9161 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
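## How to use
A minimal intent-classification sketch, assuming the standard `transformers` text-classification pipeline; label names follow however the clinc_oos intents were stored in the fine-tuning config, and the query is illustrative:
```python
from transformers import pipeline

# Checkpoint id as published above; labels come from the saved clinc_oos config.
intent_classifier = pipeline("text-classification", model="calcworks/distilbert-base-uncased-finetuned-clinc")
print(intent_classifier("Can you transfer 100 dollars to my savings account?"))
```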
|
rsd16/wav2vec2-large-xlsr-53-fine-tuned-farsi | 4710ee40db817eae20f4f25be017dd243d9f188f | 2022-05-20T10:18:43.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | rsd16 | null | rsd16/wav2vec2-large-xlsr-53-fine-tuned-farsi | 4 | null | transformers | 19,794 | Entry not found |
papsebestyen/hubert-base-cc-finance-filter | 05818d13501250c39f28443c254834c184924a6b | 2022-05-19T19:31:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | papsebestyen | null | papsebestyen/hubert-base-cc-finance-filter | 4 | null | transformers | 19,795 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: hubert-base-cc-finance-filter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-cc-finance-filter
This model is a fine-tuned version of [papsebestyen/hubert-base-cc-finetuned-forum](https://huggingface.co/papsebestyen/hubert-base-cc-finetuned-forum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5388
- F1: 0.7671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.887995089067299e-05
- train_batch_size: 60
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 160.18013334673049
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5717 | 1.0 | 54 | 0.6918 | 0.624 |
| 0.4104 | 2.0 | 108 | 0.4236 | 0.7119 |
| 0.3124 | 3.0 | 162 | 0.6001 | 0.7451 |
| 0.1404 | 4.0 | 216 | 0.5388 | 0.7671 |
| 0.1305 | 5.0 | 270 | 0.5388 | 0.7671 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0a0+17540c5
- Datasets 2.2.1
- Tokenizers 0.12.1
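## How to use
A minimal classification sketch with explicit tokenizer and model classes; the Hungarian sentence is illustrative and the meaning of each output label depends on the fine-tuning config:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "papsebestyen/hubert-base-cc-finance-filter"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score an illustrative Hungarian sentence and print class probabilities.
inputs = tokenizer("A forint árfolyama ma erősödött az euróval szemben.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)
```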
|
jonfrank/mt5-small-finetuned-amazon-en-es | 9f5c5188a71724a2a7f3607778bc9f7eb628de19 | 2022-05-19T17:49:31.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | jonfrank | null | jonfrank/mt5-small-finetuned-amazon-en-es | 4 | null | transformers | 19,796 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It was created by following the [huggingface tutorial](https://huggingface.co/course/chapter7/5?fw=pt).
It achieves the following results on the evaluation set:
- Loss: 3.0173
- Rouge1: 16.7977
- Rouge2: 8.6849
- Rougel: 16.4822
- Rougelsum: 16.4975
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.4693 | 1.0 | 1209 | 3.1215 | 17.5363 | 8.3875 | 17.0229 | 16.9653 |
| 3.4231 | 2.0 | 2418 | 3.0474 | 16.7927 | 8.3533 | 16.2748 | 16.2379 |
| 3.271 | 3.0 | 3627 | 3.0440 | 16.7233 | 7.9129 | 16.2385 | 16.1915 |
| 3.1885 | 4.0 | 4836 | 3.0264 | 16.3078 | 7.5751 | 15.844 | 15.889 |
| 3.1216 | 5.0 | 6045 | 3.0277 | 17.259 | 8.7504 | 16.8293 | 16.8543 |
| 3.0739 | 6.0 | 7254 | 3.0188 | 16.8374 | 8.6457 | 16.4407 | 16.4743 |
| 3.0393 | 7.0 | 8463 | 3.0161 | 17.3064 | 8.7822 | 16.9423 | 16.9543 |
| 3.0202 | 8.0 | 9672 | 3.0173 | 16.7977 | 8.6849 | 16.4822 | 16.4975 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Ukhushn/distilbert-base-uncased-finetuned-homedepot | 86f5dede53642b5e5f8f3318f227bc9501a801a3 | 2022-05-19T22:15:06.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Ukhushn | null | Ukhushn/distilbert-base-uncased-finetuned-homedepot | 4 | null | transformers | 19,797 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-homedepot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-homedepot
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2826
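A minimal usage sketch with the `fill-mask` pipeline; the prompt is illustrative (the card does not document the training corpus, though the model name suggests Home Depot product text):
```python
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="Ukhushn/distilbert-base-uncased-finetuned-homedepot",
)

# distilbert-base-uncased uses the [MASK] token.
for prediction in unmasker("I need a new [MASK] for the bathroom renovation."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```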
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.9909 | 1.0 | 4688 | 2.5285 |
| 2.5495 | 2.0 | 9376 | 2.3476 |
| 2.4198 | 3.0 | 14064 | 2.2841 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
pszemraj/opt-peter-1.3B | bff665b33970d05a9eddab6c6fdae2a232d1a74a | 2022-06-24T14:06:12.000Z | [
"pytorch",
"tensorboard",
"opt",
"text-generation",
"transformers",
"generated_from_trainer",
"non-commercial",
"dialogue",
"chatbot",
"license:apache-2.0"
] | text-generation | false | pszemraj | null | pszemraj/opt-peter-1.3B | 4 | null | transformers | 19,798 | ---
license: apache-2.0
tags:
- generated_from_trainer
- text-generation
- opt
- non-commercial
- dialogue
- chatbot
widget:
- text: "If you could live anywhere, where would it be? peter szemraj:"
example_title: "live anywhere"
- text: "What would you sing at Karaoke night? peter szemraj:"
example_title: "Karaoke"
- text: "If you could hire someone to help you, would it be with cleaning, cooking, or yard work? peter szemraj:"
example_title: "help"
- text: "What form of public transportation do you prefer? (air, boat, train, bus, car, etc.) peter szemraj:"
example_title: "transportation"
- text: "What's your favorite zoo animal? peter szemraj:"
example_title: "animal"
- text: "Do you like or dislike surprises? Why or why not? peter szemraj:"
example_title: "surprises"
- text: "What celebrity would you like to meet at Starbucks for a cup of coffee? peter szemraj:"
example_title: "celebrity "
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.7
temperature: 0.3
no_repeat_ngram_size: 2
top_k: 20
do_sample: True
repetition_penalty: 4.5
---
# pszemraj/opt-peter-1.3B
This model is a fine-tuned version of [pszemraj/opt-peter-1.3B-1E](https://huggingface.co/pszemraj/opt-peter-1.3B-1E) on 80k WhatsApp/iMessage messages (mine).
It achieves the following results on the evaluation set, after training for 1 epoch (_on top of the 1E checkpoint linked above_):
- eval_loss: 3.4220
- eval_runtime: 954.9678
- eval_samples_per_second: 9.114
- eval_steps_per_second: 2.279
- epoch: 1.0
- step: 1235
## Model description
- Exploring how OPT does in dialogue/conversational applications :)
- Seems to do a lot better than GPT-Neo with similar training parameters
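A minimal generation sketch (a usage illustration, not the author's official recipe), mirroring the inference parameters from the widget configuration above; the prompt format follows the widget examples:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "pszemraj/opt-peter-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "What's your favorite zoo animal? peter szemraj:"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=64,        # the widget uses max_length=64
    do_sample=True,
    temperature=0.3,
    top_k=20,
    no_repeat_ngram_size=2,
    repetition_penalty=4.5,
    length_penalty=0.7,       # kept for parity with the widget; only affects beam search
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```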
## Intended uses & limitations
- OPT has a license that does not allow commercial use; see the original model for details
- **any statements or claims made by this model do not reflect actual claims/statements by me**
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 2
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
allenai/tk-instruct-small-def-pos | 4436f5351392fbe3c3e6718386d6feaeda9eaf6b | 2022-05-27T06:28:10.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:natural instructions v2.0",
"arxiv:1910.10683",
"arxiv:2204.07705",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/tk-instruct-small-def-pos | 4 | null | transformers | 19,799 | ---
language: en
license: apache-2.0
datasets:
- natural instructions v2.0
---
# Model description
Tk-Instruct is a series of encoder-decoder Transformer models that are trained to solve various NLP tasks by following in-context instructions (plain language task definitions, k-shot examples, explanations, etc). Built upon the pre-trained [T5 models](https://arxiv.org/abs/1910.10683), they are fine-tuned on a large number of tasks & instructions that are collected in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. This enables the model not only to process the training tasks, but also to generalize to many unseen tasks without further parameter updates.
More resources for using the model:
- **Paper**: [link](https://arxiv.org/abs/2204.07705)
- **Code repository**: [Tk-Instruct](https://github.com/yizhongw/Tk-Instruct)
- **Official Website**: [Natural Instructions](https://instructions.apps.allenai.org/)
- **All released models**: [allenai/tk-instruct](https://huggingface.co/models?search=allenai/tk-instruct)
## Intended uses & limitations
Tk-Instruct can be used to do many NLP tasks by following instructions.
### How to use
When instructing the model, the task definition, demonstration examples, or explanations should be prepended to the original input and fed into the model. You can easily try Tk-Instruct models as follows:
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("allenai/tk-instruct-3b-def")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/tk-instruct-3b-def")
>>> input_ids = tokenizer.encode(
...     "Definition: return the currency of the given country. Now complete the following example - Input: India. Output:",
...     return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True)  # model should output 'Indian Rupee'
>>> input_ids = tokenizer.encode(
...     "Definition: negate the following sentence. Input: John went to school. Output:",
...     return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True)  # model should output 'John did not go to school.'
```
### Limitations
We are still working on understanding the behaviors of these models, but here are several issues we have found:
- Models are generally sensitive to the instruction. Sometimes rewording the instruction can lead to very different output.
- Models are not always compliant with the instruction. Sometimes the model doesn't follow your instruction (e.g., when you ask the model to generate one sentence, it might still generate one word or a long story).
- Models might totally fail on some tasks.
If you find serious issues or any interesting results, you are welcome to share them with us!
## Training data
Tk-Instruct is trained using the tasks & instructions in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. We follow the official train/test split. The Tk-Instruct model series was trained using 757 tasks, and the mTk-Instruct series was trained using 1271 tasks (including some non-English tasks).
The training tasks are in 64 broad categories, such as text categorization / question answering / sentiment analysis / summarization / grammar error detection / dialogue generation / etc. The other 12 categories are selected for evaluation.
## Training procedure
All our models are initialized from either T5 models or mT5 models. Because generating the output can be regarded as a form of language modeling, we used their [LM adapted version](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k). All data is converted into a text-to-text format, and models are fine-tuned to maximize the likelihood of the output sequence.
Our [released models](https://huggingface.co/models?search=allenai/tk-instruct) are in different sizes, and each of them was trained with a specific type of instruction encoding. For instance, `tk-instruct-3b-def-pos` was initialized from [t5-xl-lm-adapt](https://huggingface.co/google/t5-xl-lm-adapt), and it saw task definition & 2 positive examples as the instruction during training time.
Although they are trained with only one type of instruction encoding, we found they can usually work with other types of encodings at test time (see more in our paper).
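To illustrate the "definition + 2 positive examples" encoding this particular checkpoint was trained with, a prompt can be assembled roughly as follows; the exact field wording is an approximation of the Natural Instructions templates, not the canonical format:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("allenai/tk-instruct-small-def-pos")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/tk-instruct-small-def-pos")

# Task definition, two in-context positive examples, then the instance to solve.
prompt = (
    "Definition: return the currency of the given country. "
    "Positive Example 1 - Input: Japan. Output: Japanese Yen. "
    "Positive Example 2 - Input: Brazil. Output: Brazilian Real. "
    "Now complete the following example - Input: India. Output:"
)

input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_length=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))  # expected: 'Indian Rupee'
```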
### BibTeX entry and citation info
```bibtex
@article{wang2022benchmarking,
title={Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks},
author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and A. Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and M. Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddharth Deepak Mishra and Sujan C. Reddy and Sumanta Patro and Tanay Dixit and Xu-dong Shen and Chitta Baral and Yejin Choi and Hannaneh Hajishirzi and Noah A. Smith and Daniel Khashabi},
year={2022},
archivePrefix={arXiv},
eprint={2204.07705},
primaryClass={cs.CL},
}
``` |