modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
negfir/bert_uncased_L-2_H-256_A-4_wiki103 | 9e818067d1e3b3a0fbc28815deda8a14b5ae1b29 | 2022-05-17T04:55:37.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-2_H-256_A-4_wiki103 | 2 | null | transformers | 26,000 | Entry not found |
PSW/cnndm_0.5percent_minsimins_seed27 | a5f5c9feb595a04a2d869ba59bb3d2d964bbb7bc | 2022-05-17T05:46:23.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_minsimins_seed27 | 2 | null | transformers | 26,001 | Entry not found |
PontifexMaximus/opus-mt-en-ro-finetuned-en-to-ro | 9887bdb130408b5f57a2823709aed940c6463eea | 2022-05-17T08:52:59.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | PontifexMaximus | null | PontifexMaximus/opus-mt-en-ro-finetuned-en-to-ro | 2 | null | transformers | 26,002 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
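The card does not include a usage example; below is a minimal inference sketch (not part of the original card), assuming the checkpoint loads with the generic `transformers` seq2seq classes and that a tokenizer was saved alongside the model. The input sentence is an arbitrary placeholder.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Checkpoint name taken from this card; loading assumes the repository also contains a tokenizer.
model_name = "PontifexMaximus/opus-mt-en-ro-finetuned-en-to-ro"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Translate an arbitrary English sentence into Romanian.
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```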
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
PSW/cnndm_0.5percent_maxsimins_seed27 | a80e2fe375b68c05963b4fa8803bd036139e76f6 | 2022-05-17T09:18:21.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_maxsimins_seed27 | 2 | null | transformers | 26,003 | Entry not found |
PSW/cnndm_0.5percent_maxsimins_seed42 | 8d1e76c9c8535c722884401bba5dec081350eb4b | 2022-05-17T10:30:47.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_maxsimins_seed42 | 2 | null | transformers | 26,004 | Entry not found |
lilitket/20220517-150219 | bb9f2770858c222cde571abea77b6a1ed8fd17db | 2022-05-17T14:25:48.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220517-150219 | 2 | null | transformers | 26,005 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 20220517-150219
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20220517-150219
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2426
- Wer: 0.2344
- Cer: 0.0434
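The card gives no usage example; a minimal transcription sketch (not part of the original card), assuming the repository also contains a processor/tokenizer (this is not stated above). `speech.wav` is a placeholder path.
```python
from transformers import pipeline

# Checkpoint name taken from this card.
asr = pipeline("automatic-speech-recognition", model="lilitket/20220517-150219")

# "speech.wav" is a placeholder path to an audio recording.
print(asr("speech.wav")["text"])
```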
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 1339
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 5.3867 | 0.02 | 200 | 3.2171 | 1.0 | 1.0 |
| 3.1288 | 0.04 | 400 | 2.9394 | 1.0 | 1.0 |
| 1.8298 | 0.06 | 600 | 0.9138 | 0.8416 | 0.2039 |
| 0.9751 | 0.07 | 800 | 0.6568 | 0.6928 | 0.1566 |
| 0.7934 | 0.09 | 1000 | 0.5314 | 0.6225 | 0.1277 |
| 0.663 | 0.11 | 1200 | 0.4759 | 0.5730 | 0.1174 |
| 0.617 | 0.13 | 1400 | 0.4515 | 0.5578 | 0.1118 |
| 0.5473 | 0.15 | 1600 | 0.4017 | 0.5157 | 0.1004 |
| 0.5283 | 0.17 | 1800 | 0.3872 | 0.5094 | 0.0982 |
| 0.4893 | 0.18 | 2000 | 0.3725 | 0.4860 | 0.0932 |
| 0.495 | 0.2 | 2200 | 0.3580 | 0.4542 | 0.0878 |
| 0.4438 | 0.22 | 2400 | 0.3443 | 0.4366 | 0.0858 |
| 0.4425 | 0.24 | 2600 | 0.3428 | 0.4284 | 0.0865 |
| 0.4293 | 0.26 | 2800 | 0.3329 | 0.4221 | 0.0819 |
| 0.3779 | 0.28 | 3000 | 0.3278 | 0.4146 | 0.0794 |
| 0.4116 | 0.29 | 3200 | 0.3242 | 0.4107 | 0.0757 |
| 0.3912 | 0.31 | 3400 | 0.3217 | 0.4040 | 0.0776 |
| 0.391 | 0.33 | 3600 | 0.3127 | 0.3955 | 0.0764 |
| 0.3696 | 0.35 | 3800 | 0.3153 | 0.3892 | 0.0748 |
| 0.3576 | 0.37 | 4000 | 0.3156 | 0.3846 | 0.0737 |
| 0.3553 | 0.39 | 4200 | 0.3024 | 0.3814 | 0.0726 |
| 0.3394 | 0.4 | 4400 | 0.3022 | 0.3637 | 0.0685 |
| 0.3345 | 0.42 | 4600 | 0.3130 | 0.3641 | 0.0698 |
| 0.3357 | 0.44 | 4800 | 0.2913 | 0.3602 | 0.0701 |
| 0.3411 | 0.46 | 5000 | 0.2941 | 0.3514 | 0.0674 |
| 0.3031 | 0.48 | 5200 | 0.3043 | 0.3613 | 0.0685 |
| 0.3305 | 0.5 | 5400 | 0.2967 | 0.3468 | 0.0657 |
| 0.3004 | 0.51 | 5600 | 0.2723 | 0.3309 | 0.0616 |
| 0.31 | 0.53 | 5800 | 0.2835 | 0.3404 | 0.0648 |
| 0.3224 | 0.55 | 6000 | 0.2743 | 0.3358 | 0.0622 |
| 0.3261 | 0.57 | 6200 | 0.2803 | 0.3358 | 0.0620 |
| 0.305 | 0.59 | 6400 | 0.2835 | 0.3397 | 0.0629 |
| 0.3025 | 0.61 | 6600 | 0.2684 | 0.3340 | 0.0639 |
| 0.2952 | 0.62 | 6800 | 0.2654 | 0.3256 | 0.0617 |
| 0.2903 | 0.64 | 7000 | 0.2588 | 0.3174 | 0.0596 |
| 0.2907 | 0.66 | 7200 | 0.2789 | 0.3256 | 0.0623 |
| 0.2887 | 0.68 | 7400 | 0.2634 | 0.3142 | 0.0605 |
| 0.291 | 0.7 | 7600 | 0.2644 | 0.3097 | 0.0582 |
| 0.2646 | 0.72 | 7800 | 0.2753 | 0.3089 | 0.0582 |
| 0.2683 | 0.73 | 8000 | 0.2703 | 0.3036 | 0.0574 |
| 0.2808 | 0.75 | 8200 | 0.2544 | 0.2994 | 0.0561 |
| 0.2724 | 0.77 | 8400 | 0.2584 | 0.3051 | 0.0592 |
| 0.2516 | 0.79 | 8600 | 0.2575 | 0.2959 | 0.0557 |
| 0.2561 | 0.81 | 8800 | 0.2594 | 0.2945 | 0.0552 |
| 0.264 | 0.83 | 9000 | 0.2607 | 0.2987 | 0.0552 |
| 0.2383 | 0.84 | 9200 | 0.2641 | 0.2983 | 0.0546 |
| 0.2548 | 0.86 | 9400 | 0.2714 | 0.2930 | 0.0538 |
| 0.2284 | 0.88 | 9600 | 0.2542 | 0.2945 | 0.0555 |
| 0.2354 | 0.9 | 9800 | 0.2564 | 0.2937 | 0.0551 |
| 0.2624 | 0.92 | 10000 | 0.2466 | 0.2891 | 0.0542 |
| 0.24 | 0.94 | 10200 | 0.2404 | 0.2895 | 0.0528 |
| 0.2372 | 0.95 | 10400 | 0.2590 | 0.2782 | 0.0518 |
| 0.2357 | 0.97 | 10600 | 0.2629 | 0.2867 | 0.0531 |
| 0.2439 | 0.99 | 10800 | 0.2722 | 0.2902 | 0.0556 |
| 0.2204 | 1.01 | 11000 | 0.2618 | 0.2856 | 0.0535 |
| 0.2043 | 1.03 | 11200 | 0.2662 | 0.2789 | 0.0520 |
| 0.2081 | 1.05 | 11400 | 0.2744 | 0.2831 | 0.0532 |
| 0.199 | 1.06 | 11600 | 0.2586 | 0.2800 | 0.0519 |
| 0.2063 | 1.08 | 11800 | 0.2711 | 0.2842 | 0.0531 |
| 0.2116 | 1.1 | 12000 | 0.2463 | 0.2782 | 0.0529 |
| 0.2095 | 1.12 | 12200 | 0.2371 | 0.2757 | 0.0510 |
| 0.1786 | 1.14 | 12400 | 0.2693 | 0.2768 | 0.0520 |
| 0.1999 | 1.16 | 12600 | 0.2625 | 0.2793 | 0.0513 |
| 0.1985 | 1.17 | 12800 | 0.2734 | 0.2796 | 0.0532 |
| 0.187 | 1.19 | 13000 | 0.2654 | 0.2676 | 0.0514 |
| 0.188 | 1.21 | 13200 | 0.2548 | 0.2648 | 0.0489 |
| 0.1853 | 1.23 | 13400 | 0.2684 | 0.2641 | 0.0509 |
| 0.197 | 1.25 | 13600 | 0.2589 | 0.2662 | 0.0507 |
| 0.1873 | 1.27 | 13800 | 0.2633 | 0.2686 | 0.0516 |
| 0.179 | 1.28 | 14000 | 0.2682 | 0.2598 | 0.0508 |
| 0.2008 | 1.3 | 14200 | 0.2505 | 0.2609 | 0.0493 |
| 0.1802 | 1.32 | 14400 | 0.2470 | 0.2598 | 0.0493 |
| 0.1903 | 1.34 | 14600 | 0.2572 | 0.2672 | 0.0500 |
| 0.1852 | 1.36 | 14800 | 0.2576 | 0.2633 | 0.0491 |
| 0.1933 | 1.38 | 15000 | 0.2649 | 0.2602 | 0.0493 |
| 0.191 | 1.4 | 15200 | 0.2578 | 0.2612 | 0.0484 |
| 0.1863 | 1.41 | 15400 | 0.2572 | 0.2566 | 0.0488 |
| 0.1785 | 1.43 | 15600 | 0.2661 | 0.2520 | 0.0478 |
| 0.1755 | 1.45 | 15800 | 0.2637 | 0.2605 | 0.0485 |
| 0.1677 | 1.47 | 16000 | 0.2481 | 0.2559 | 0.0478 |
| 0.1633 | 1.49 | 16200 | 0.2584 | 0.2531 | 0.0476 |
| 0.166 | 1.51 | 16400 | 0.2576 | 0.2595 | 0.0487 |
| 0.1798 | 1.52 | 16600 | 0.2517 | 0.2570 | 0.0488 |
| 0.1879 | 1.54 | 16800 | 0.2555 | 0.2531 | 0.0479 |
| 0.1636 | 1.56 | 17000 | 0.2419 | 0.2467 | 0.0464 |
| 0.1706 | 1.58 | 17200 | 0.2426 | 0.2457 | 0.0463 |
| 0.1763 | 1.6 | 17400 | 0.2427 | 0.2496 | 0.0467 |
| 0.1687 | 1.62 | 17600 | 0.2507 | 0.2496 | 0.0467 |
| 0.1662 | 1.63 | 17800 | 0.2553 | 0.2474 | 0.0466 |
| 0.1637 | 1.65 | 18000 | 0.2576 | 0.2450 | 0.0461 |
| 0.1744 | 1.67 | 18200 | 0.2394 | 0.2414 | 0.0454 |
| 0.1597 | 1.69 | 18400 | 0.2442 | 0.2443 | 0.0452 |
| 0.1606 | 1.71 | 18600 | 0.2488 | 0.2435 | 0.0453 |
| 0.1558 | 1.73 | 18800 | 0.2563 | 0.2464 | 0.0464 |
| 0.172 | 1.74 | 19000 | 0.2501 | 0.2411 | 0.0452 |
| 0.1594 | 1.76 | 19200 | 0.2481 | 0.2460 | 0.0458 |
| 0.1732 | 1.78 | 19400 | 0.2427 | 0.2414 | 0.0443 |
| 0.1706 | 1.8 | 19600 | 0.2367 | 0.2418 | 0.0446 |
| 0.1724 | 1.82 | 19800 | 0.2376 | 0.2390 | 0.0444 |
| 0.1621 | 1.84 | 20000 | 0.2430 | 0.2382 | 0.0438 |
| 0.1501 | 1.85 | 20200 | 0.2445 | 0.2404 | 0.0438 |
| 0.1526 | 1.87 | 20400 | 0.2472 | 0.2361 | 0.0436 |
| 0.1756 | 1.89 | 20600 | 0.2431 | 0.2400 | 0.0437 |
| 0.1598 | 1.91 | 20800 | 0.2472 | 0.2368 | 0.0439 |
| 0.1554 | 1.93 | 21000 | 0.2431 | 0.2347 | 0.0435 |
| 0.1354 | 1.95 | 21200 | 0.2427 | 0.2354 | 0.0438 |
| 0.1587 | 1.96 | 21400 | 0.2427 | 0.2347 | 0.0435 |
| 0.1541 | 1.98 | 21600 | 0.2426 | 0.2344 | 0.0434 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
|
PSW/cnndm_0.5percent_randomsimins_seed27 | c0c31b0cd8c90c1b99c767c4cf48e9ec0772617b | 2022-05-17T12:59:42.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_randomsimins_seed27 | 2 | null | transformers | 26,006 | Entry not found |
PSW/cnndm_0.5percent_randomsimins_seed42 | 2c7422479b37d02f1e5bcee17e53f66b1c5db196 | 2022-05-17T14:13:38.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_randomsimins_seed42 | 2 | null | transformers | 26,007 | Entry not found |
negfir/bert_uncased_L-10_H-768_A-12_wiki103 | 684e4a056cbefa2e5d51d6538ca32683e4b6112d | 2022-05-17T14:09:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-10_H-768_A-12_wiki103 | 2 | null | transformers | 26,008 | Entry not found |
MeshalAlamr/wav2vec2-xls-r-300m-ar-7 | 0c962e64f5db617c98759909ef43018eda6eaff0 | 2022-05-18T01:13:10.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | MeshalAlamr | null | MeshalAlamr/wav2vec2-xls-r-300m-ar-7 | 2 | null | transformers | 26,009 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-ar-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-ar-7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 61.6652
- Wer: 0.2222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6306.7719 | 4.71 | 400 | 617.7255 | 1.0 |
| 1222.8073 | 9.41 | 800 | 81.7446 | 0.3820 |
| 326.9842 | 14.12 | 1200 | 67.3986 | 0.2859 |
| 223.859 | 18.82 | 1600 | 60.8896 | 0.2492 |
| 175.5662 | 23.53 | 2000 | 59.2339 | 0.2256 |
| 146.3602 | 28.24 | 2400 | 61.6652 | 0.2222 |
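For context (not part of the original card): the Wer column above is a word error rate on the evaluation split, expressed as a fraction in [0, 1]. A minimal sketch of how such a score can be computed with the `jiwer` package, using toy strings rather than the card's actual data:
```python
from jiwer import wer

# Toy example only; the card's scores come from the common_voice evaluation split.
reference = "the weather is nice today"
hypothesis = "the weather is nice to day"
print(wer(reference, hypothesis))  # 2 word-level edits over 5 reference words -> 0.4
```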
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Harsit/mt5-small-finetuned-multilingual-xlsum-new | 384c280319ffe67ebe63a46a956a1f0c7f35af6e | 2022-05-18T01:58:59.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"multilingual model",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Harsit | null | Harsit/mt5-small-finetuned-multilingual-xlsum-new | 2 | null | transformers | 26,010 | ---
license: apache-2.0
tags:
- multilingual model
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-multilingual-xlsum-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-multilingual-xlsum-new
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the Xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7436
- Rouge1: 9.3908
- Rouge2: 2.5077
- RougeL: 7.8615
- Rougelsum: 7.8745
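The card has no usage example; a minimal summarization sketch (not part of the original card). Feeding the raw article text without a task prefix is an assumption, and `article` is a placeholder.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Harsit/mt5-small-finetuned-multilingual-xlsum-new"  # from this card
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "..."  # placeholder: a news article in one of the fine-tuning languages
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=84, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```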
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.8301 | 1.0 | 3375 | 2.8828 | 8.1957 | 1.9439 | 6.8031 | 6.8206 |
| 3.4032 | 2.0 | 6750 | 2.8049 | 8.9533 | 2.2919 | 7.4137 | 7.4244 |
| 3.3697 | 3.0 | 10125 | 2.7743 | 9.3366 | 2.4531 | 7.8129 | 7.8276 |
| 3.3862 | 4.0 | 13500 | 2.7500 | 9.4377 | 2.542 | 7.9123 | 7.9268 |
| 3.1704 | 5.0 | 16875 | 2.7436 | 9.3908 | 2.5077 | 7.8615 | 7.8745 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
lilitket/20220517-184412 | 21df154c093eaec93b422ff80f7454dfc0f2d319 | 2022-05-17T15:26:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220517-184412 | 2 | null | transformers | 26,011 | Entry not found |
Mathilda/T5-para-Quora | 127ecfbabd1c4df12df38cd67231747af5f5dbc5 | 2022-05-17T20:43:36.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | Mathilda | null | Mathilda/T5-para-Quora | 2 | null | transformers | 26,012 | ---
license: afl-3.0
---
|
PSW/cnndm_0.5percent_min2swap_seed27 | c66f906c641f5d9b5bb3f30388abb0781d508e9d | 2022-05-17T20:15:43.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_min2swap_seed27 | 2 | null | transformers | 26,013 | Entry not found |
CEBaB/gpt2.CEBaB.absa.exclusive.seed_88 | ca189b8a71c8994e61c805999c82d02b2369d1c6 | 2022-05-17T20:38:07.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.absa.exclusive.seed_88 | 2 | null | transformers | 26,014 | Entry not found |
PSW/cnndm_0.5percent_min2swap_seed42 | 09dace0a1dd03f8d3effa7c068435e117e17753e | 2022-05-17T21:28:05.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_min2swap_seed42 | 2 | null | transformers | 26,015 | Entry not found |
PSW/cnndm_0.5percent_max2swap_seed27 | 6a8ca6cc8020daa38c1496aab9a394ee74c5b5e2 | 2022-05-17T23:47:29.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_max2swap_seed27 | 2 | null | transformers | 26,016 | Entry not found |
CEBaB/gpt2.CEBaB.absa.inclusive.seed_42 | 5283b39ce497165543a7c7ed93b99c23286fee0b | 2022-05-17T23:47:03.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.absa.inclusive.seed_42 | 2 | null | transformers | 26,017 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.absa.inclusive.seed_42 | e2e0cf056aa654c4aaebb4f818e3bce5891e9d16 | 2022-05-17T23:53:21.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.absa.inclusive.seed_42 | 2 | null | transformers | 26,018 | Entry not found |
CEBaB/gpt2.CEBaB.absa.inclusive.seed_66 | fbf158b104086b4753b317625144976dd2b94191 | 2022-05-18T00:03:36.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.absa.inclusive.seed_66 | 2 | null | transformers | 26,019 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.absa.inclusive.seed_66 | a2534382aad385b52fa6c50078f8b9873815030e | 2022-05-18T00:10:01.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.absa.inclusive.seed_66 | 2 | null | transformers | 26,020 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.absa.inclusive.seed_77 | 27f327d5ed847075bb00953921f5a89b41b87c73 | 2022-05-18T00:26:53.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.absa.inclusive.seed_77 | 2 | null | transformers | 26,021 | Entry not found |
CEBaB/gpt2.CEBaB.absa.inclusive.seed_88 | a0d2a17c068b289a47063be0b3c7f183cf393433 | 2022-05-18T00:38:25.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.absa.inclusive.seed_88 | 2 | null | transformers | 26,022 | Entry not found |
PSW/cnndm_0.5percent_max2swap_seed42 | b61881ed1c6304588a45efb212ffe927b712c4a3 | 2022-05-18T00:59:38.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_max2swap_seed42 | 2 | null | transformers | 26,023 | Entry not found |
CEBaB/gpt2.CEBaB.absa.inclusive.seed_99 | 9f26f445fb2cab22364df92db100a14a0e64b787 | 2022-05-18T00:55:13.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.absa.inclusive.seed_99 | 2 | null | transformers | 26,024 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.absa.inclusive.seed_99 | eddbeaacc2b81d0d64ea531e6eb5cab54bdd704f | 2022-05-18T01:01:30.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.absa.inclusive.seed_99 | 2 | null | transformers | 26,025 | Entry not found |
PSW/cnndm_0.5percent_randomswap_seed27 | d234f18a52549957eeb09718ff4429f33f7502b5 | 2022-05-18T03:18:39.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_randomswap_seed27 | 2 | null | transformers | 26,026 | Entry not found |
PSW/cnndm_0.5percent_randomswap_seed42 | 93592e4debec0f8e4d940e053c78110079d62ff5 | 2022-05-18T04:30:37.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_randomswap_seed42 | 2 | null | transformers | 26,027 | Entry not found |
negfir/bert_uncased_L-10_H-512_A-8_wiki103 | f0026fcaaff29ad9c0c62e4431c86f93d8cc2403 | 2022-05-18T06:22:09.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-10_H-512_A-8_wiki103 | 2 | null | transformers | 26,028 | Entry not found |
rickySaka/eng-med | ce7172189f69f441f4f17be77121d92fb2f18d42 | 2022-05-18T07:38:22.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | rickySaka | null | rickySaka/eng-med | 2 | null | transformers | 26,029 | Entry not found |
nthanhha26/autotrain-test-project-879428192 | ed55014f89a81af3c63877f5c9a3da460b7bd4f5 | 2022-05-18T08:28:19.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:nthanhha26/autotrain-data-test-project",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | nthanhha26 | null | nthanhha26/autotrain-test-project-879428192 | 2 | null | transformers | 26,030 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- nthanhha26/autotrain-data-test-project
co2_eq_emissions: 13.170344687762716
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 879428192
- CO2 Emissions (in grams): 13.170344687762716
## Validation Metrics
- Loss: 0.06465228646993637
- Accuracy: 0.9796652588768966
- Precision: 0.9843385538153949
- Recall: 0.993943472409152
- AUC: 0.9855992605071237
- F1: 0.9891176963000168
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/nthanhha26/autotrain-test-project-879428192
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("nthanhha26/autotrain-test-project-879428192", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("nthanhha26/autotrain-test-project-879428192", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
PSW/cnndm_0.1percent_baseline_seed1 | 9fbbdaeb1cba3120c8e7a638311e4fcd661b27b3 | 2022-05-18T14:42:17.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_baseline_seed1 | 2 | null | transformers | 26,031 | Entry not found |
RuiqianLi/wav2vec2-large-xls-r-300m-singlish-colab | 403a2d58c35e06b187b7cd7de5e7823468cb95e6 | 2022-05-18T19:16:22.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:li_singlish",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | RuiqianLi | null | RuiqianLi/wav2vec2-large-xls-r-300m-singlish-colab | 2 | 1 | transformers | 26,032 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- li_singlish
model-index:
- name: wav2vec2-large-xls-r-300m-singlish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-singlish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the li_singlish dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7199
- Wer: 0.3337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2984 | 4.76 | 400 | 2.9046 | 1.0 |
| 1.1895 | 9.52 | 800 | 0.7725 | 0.4535 |
| 0.1331 | 14.28 | 1200 | 0.7068 | 0.3847 |
| 0.0701 | 19.05 | 1600 | 0.7547 | 0.3617 |
| 0.0509 | 23.8 | 2000 | 0.7123 | 0.3444 |
| 0.0385 | 28.57 | 2400 | 0.7199 | 0.3337 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
EddieChen372/CodeBerta-finetuned-react | b49b129cbc7415b8907da059d314a0c4f209235a | 2022-05-18T18:53:00.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | EddieChen372 | null | EddieChen372/CodeBerta-finetuned-react | 2 | null | transformers | 26,033 | ---
tags:
- generated_from_trainer
model-index:
- name: CodeBerta-finetuned-react
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CodeBerta-finetuned-react
This model is a fine-tuned version of [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7887
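No usage example is given; a minimal fill-mask sketch (not part of the original card). The `<mask>` token follows the RoBERTa convention of the base model, and the React-style input line is only an assumption about the fine-tuning domain.
```python
from transformers import pipeline

# Checkpoint name taken from this card.
fill = pipeline("fill-mask", model="EddieChen372/CodeBerta-finetuned-react")

# RoBERTa-style tokenizers use "<mask>" as the mask token.
for prediction in fill("import React from '<mask>';"):
    print(prediction["token_str"], prediction["score"])
```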
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8968 | 1.0 | 157 | 3.2166 |
| 3.1325 | 2.0 | 314 | 2.9491 |
| 2.9744 | 3.0 | 471 | 2.7887 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
negfir/bert_uncased_L-8_H-768_A-12_wiki103 | 9b52d56f54f0aac287ce65f5b0e728af60e2e8ca | 2022-05-18T16:01:23.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-8_H-768_A-12_wiki103 | 2 | null | transformers | 26,034 | Entry not found |
carlosaguayo/features_and_usecases_05182022_1245 | 597004638cded0e66e82a0d8615091b5fac5e448 | 2022-05-18T16:45:57.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | carlosaguayo | null | carlosaguayo/features_and_usecases_05182022_1245 | 2 | null | sentence-transformers | 26,035 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# carlosaguayo/features_and_usecases_05182022_1245
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('carlosaguayo/features_and_usecases_05182022_1245')
embeddings = model.encode(sentences)
print(embeddings)
```
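As a follow-up (not part of the original card): because the model ends with a `Normalize()` module (see the architecture below), the returned embeddings are unit-length, so a plain dot product gives the cosine similarity between two sentences.
```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('carlosaguayo/features_and_usecases_05182022_1245')
emb = model.encode(["This is an example sentence", "Each sentence is converted"])

# Embeddings are L2-normalised, so the dot product equals cosine similarity.
print(float(np.dot(emb[0], emb[1])))
```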
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=carlosaguayo/features_and_usecases_05182022_1245)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 175 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
PSW/cnndm_0.1percent_baseline_seed42 | add1ac417f88f43759ee737f03adff01c5a3a43e | 2022-05-18T17:10:05.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_baseline_seed42 | 2 | null | transformers | 26,036 | Entry not found |
negfir/bert_uncased_L-10_H-128_A-2_wiki103 | 6204dafde52f8c87fac0bd41b5da7d7e805e499b | 2022-05-18T17:06:51.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-10_H-128_A-2_wiki103 | 2 | null | transformers | 26,037 | Entry not found |
PSW/cnndm_0.5percent_baseline_seed1 | 6394c8ab058fb096d7b6cb64f8b31c466ca11989 | 2022-05-19T08:10:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_baseline_seed1 | 2 | null | transformers | 26,038 | Entry not found |
PSW/cnndm_0.5percent_baseline_seed27 | d2fc8b20527ea445ca20e4da21f05c7545138969 | 2022-05-18T20:43:42.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_baseline_seed27 | 2 | null | transformers | 26,039 | Entry not found |
negfir/bert_uncased_L-8_H-512_A-8_wiki103 | 7c9690ce50ae7eb6a8950b5bfbcbc22620559ae1 | 2022-05-19T05:31:55.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-8_H-512_A-8_wiki103 | 2 | null | transformers | 26,040 | Entry not found |
PontifexMaximus/opus-mt-en-de-finetuned-de-to-en | 813ec48d7e22a9794a513797c4d16d4bc1d38678 | 2022-05-19T07:01:23.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:wmt14",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | PontifexMaximus | null | PontifexMaximus/opus-mt-en-de-finetuned-de-to-en | 2 | null | transformers | 26,041 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: opus-mt-en-de-finetuned-de-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-de-finetuned-de-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.2
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
PontifexMaximus/opus-mt-en-ro-finetuned-ro-to-en | b878a8aeb8399903b0ca5c1a063979cd87155960 | 2022-05-19T09:07:32.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | PontifexMaximus | null | PontifexMaximus/opus-mt-en-ro-finetuned-ro-to-en | 2 | null | transformers | 26,042 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: opus-mt-en-ro-finetuned-ro-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-ro-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.2
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
moma1820/AUG_XLMR_MARKETING | ebccfd419e903d9d8c876e8df4f0692314b50e81 | 2022-05-19T16:32:21.000Z | [
"pytorch",
"xlm-roberta-xl",
"feature-extraction",
"transformers"
] | feature-extraction | false | moma1820 | null | moma1820/AUG_XLMR_MARKETING | 2 | null | transformers | 26,043 | Entry not found |
sarakolding/daT5-base | 0a659318bb33f7b6cc1237b66c1d815f9c0c1a7a | 2022-05-31T13:18:37.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"da",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sarakolding | null | sarakolding/daT5-base | 2 | 1 | transformers | 26,044 | ---
language:
- da
---
This repository contains a language-specific mT5-base, where the vocabulary is condensed to include tokens used in Danish and English. |
NCAI/Bert_backup | cdbac05a6183f8441147003ac8ae5399cc1a6a52 | 2022-05-21T19:45:14.000Z | [
"pytorch",
"lean_albert",
"transformers"
] | null | false | NCAI | null | NCAI/Bert_backup | 2 | null | transformers | 26,045 | Entry not found |
negfir/bert_uncased_L-8_H-256_A-4_wiki103 | df486e96448b2e14b5032a7d5feb2926f29e3cd2 | 2022-05-19T10:26:35.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-8_H-256_A-4_wiki103 | 2 | null | transformers | 26,046 | Entry not found |
ViktorDo/bert-base-uncased-scratch-powo_mgh_pt | 9f48768738ce04af3eddc6db130720fd2b15a3e5 | 2022-05-19T11:06:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | ViktorDo | null | ViktorDo/bert-base-uncased-scratch-powo_mgh_pt | 2 | null | transformers | 26,047 | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-scratch-powo_mgh_pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-scratch-powo_mgh_pt
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.3881 | 3.57 | 200 | 5.2653 |
| 4.7294 | 7.14 | 400 | 4.6365 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
vijaygoriya/test_trainer | 2c6a37dc34fba7693643e0e8a277b96d7bf7034c | 2022-06-15T11:23:25.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | vijaygoriya | null | vijaygoriya/test_trainer | 2 | null | transformers | 26,048 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9646
- Accuracy: 0.8171
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4452 | 1.0 | 2000 | 0.5505 | 0.7673 |
| 0.277 | 2.0 | 4000 | 0.7271 | 0.8210 |
| 0.1412 | 3.0 | 6000 | 0.9646 | 0.8171 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
leonweber/bunsen_base_last | bb9af07c83c0f0715297d58f5da1d61a276f77c8 | 2022-05-19T11:49:17.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | leonweber | null | leonweber/bunsen_base_last | 2 | null | transformers | 26,049 | Entry not found |
MeshalAlamr/wav2vec2-xls-r-300m-ar-9 | 37fa7ec2e6c0d23e7225e1b33b9b0a607b70f6a5 | 2022-05-23T07:54:10.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | MeshalAlamr | null | MeshalAlamr/wav2vec2-xls-r-300m-ar-9 | 2 | null | transformers | 26,050 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-ar-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-ar-9
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 86.4276
- Wer: 0.1947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 120
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 6312.2087 | 4.71 | 400 | 616.6482 | 1.0 |
| 1928.3641 | 9.41 | 800 | 135.8992 | 0.6373 |
| 502.0017 | 14.12 | 1200 | 84.4729 | 0.3781 |
| 299.4288 | 18.82 | 1600 | 76.2488 | 0.3132 |
| 224.0057 | 23.53 | 2000 | 77.6899 | 0.2868 |
| 183.0379 | 28.24 | 2400 | 77.7943 | 0.2725 |
| 160.6119 | 32.94 | 2800 | 79.4487 | 0.2643 |
| 142.7342 | 37.65 | 3200 | 81.3426 | 0.2523 |
| 127.1061 | 42.35 | 3600 | 83.4995 | 0.2489 |
| 114.0666 | 47.06 | 4000 | 82.9293 | 0.2416 |
| 108.4024 | 51.76 | 4400 | 78.6118 | 0.2330 |
| 99.6215 | 56.47 | 4800 | 87.1001 | 0.2328 |
| 95.5135 | 61.18 | 5200 | 84.0371 | 0.2260 |
| 88.2917 | 65.88 | 5600 | 85.9637 | 0.2278 |
| 82.5884 | 70.59 | 6000 | 81.7456 | 0.2237 |
| 77.6827 | 75.29 | 6400 | 88.2686 | 0.2184 |
| 73.313 | 80.0 | 6800 | 85.1965 | 0.2183 |
| 69.61 | 84.71 | 7200 | 86.1655 | 0.2100 |
| 65.6991 | 89.41 | 7600 | 84.0606 | 0.2106 |
| 62.6059 | 94.12 | 8000 | 83.8724 | 0.2036 |
| 57.8635 | 98.82 | 8400 | 85.2078 | 0.2012 |
| 55.2126 | 103.53 | 8800 | 86.6009 | 0.2021 |
| 53.1746 | 108.24 | 9200 | 88.4284 | 0.1975 |
| 52.3969 | 112.94 | 9600 | 85.2846 | 0.1972 |
| 49.8386 | 117.65 | 10000 | 86.4276 | 0.1947 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 1.18.4
- Tokenizers 0.11.6
|
mmillet/rubert-tiny2_finetuned_emotion_experiment | 13cc90539a92213b84c8df16aeab5bb5222ed70e | 2022-06-03T20:03:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | mmillet | null | mmillet/rubert-tiny2_finetuned_emotion_experiment | 2 | null | transformers | 26,051 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: rubert-tiny2_finetuned_emotion_experiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-tiny2_finetuned_emotion_experiment
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3947
- Accuracy: 0.8616
- F1: 0.8577
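The card includes no usage example; a minimal classification sketch (not part of the original card). The Russian input sentence is arbitrary, and the label names returned depend on the id2label mapping saved with the checkpoint.
```python
from transformers import pipeline

# Checkpoint name taken from this card.
clf = pipeline("text-classification", model="mmillet/rubert-tiny2_finetuned_emotion_experiment")
print(clf("Какой прекрасный день!"))  # e.g. [{'label': '...', 'score': ...}]
```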
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.651 | 1.0 | 54 | 0.5689 | 0.8172 | 0.8008 |
| 0.5355 | 2.0 | 108 | 0.4842 | 0.8486 | 0.8349 |
| 0.4561 | 3.0 | 162 | 0.4436 | 0.8590 | 0.8509 |
| 0.4133 | 4.0 | 216 | 0.4203 | 0.8590 | 0.8528 |
| 0.3709 | 5.0 | 270 | 0.4071 | 0.8564 | 0.8515 |
| 0.3346 | 6.0 | 324 | 0.3980 | 0.8564 | 0.8529 |
| 0.3153 | 7.0 | 378 | 0.3985 | 0.8590 | 0.8565 |
| 0.302 | 8.0 | 432 | 0.3967 | 0.8642 | 0.8619 |
| 0.2774 | 9.0 | 486 | 0.3958 | 0.8616 | 0.8575 |
| 0.2728 | 10.0 | 540 | 0.3959 | 0.8668 | 0.8644 |
| 0.2427 | 11.0 | 594 | 0.3962 | 0.8590 | 0.8550 |
| 0.2425 | 12.0 | 648 | 0.3959 | 0.8642 | 0.8611 |
| 0.2414 | 13.0 | 702 | 0.3959 | 0.8642 | 0.8611 |
| 0.2249 | 14.0 | 756 | 0.3949 | 0.8616 | 0.8582 |
| 0.2391 | 15.0 | 810 | 0.3947 | 0.8616 | 0.8577 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
jjezabek/roberta-base-sst | 48aba7899b8318614500f8e67169d43fd78265d4 | 2022-05-19T19:41:54.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | jjezabek | null | jjezabek/roberta-base-sst | 2 | null | transformers | 26,052 | Entry not found |
jjezabek/roberta-base-sst_bin | 1934462962b7da3002430e41e15ec5535442646e | 2022-05-19T19:49:01.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | jjezabek | null | jjezabek/roberta-base-sst_bin | 2 | null | transformers | 26,053 | Entry not found |
jjezabek/bert-base-uncased-yelp_bin | f01b11624af55161907e002138c18a2622bb3e56 | 2022-05-19T19:53:20.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | jjezabek | null | jjezabek/bert-base-uncased-yelp_bin | 2 | null | transformers | 26,054 | Entry not found |
triet1102/xlm-roberta-base-finetuned-panx-de | 1573815d0c3345e70b6fddd3902e07b2ef6c751b | 2022-05-19T21:15:51.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | triet1102 | null | triet1102/xlm-roberta-base-finetuned-panx-de | 2 | null | transformers | 26,055 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
jjezabek/roberta-base-yelp_bin | f5156a912931a3b8b292d2540ea383103d377131 | 2022-05-19T20:48:56.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | jjezabek | null | jjezabek/roberta-base-yelp_bin | 2 | null | transformers | 26,056 | Entry not found |
jjezabek/roberta-base-yelp_full | 07eb9f4a72d0642f8b6394b5d2c3ed716fa8d64b | 2022-05-19T20:49:11.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | jjezabek | null | jjezabek/roberta-base-yelp_full | 2 | null | transformers | 26,057 | Entry not found |
jianxun/distilbert-base-uncased-finetuned-emotion | f74a32bd4010580ecf2e4da0d0e8b5110eea1edb | 2022-06-20T23:11:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | jianxun | null | jianxun/distilbert-base-uncased-finetuned-emotion | 2 | null | transformers | 26,058 | Entry not found |
PontifexMaximus/opus-mt-tr-en-finetuned-az-to-en | 84d10e6a79c4051a96265701c8bef7d26dcdfec4 | 2022-05-20T08:13:37.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:turkic_xwmt",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | PontifexMaximus | null | PontifexMaximus/opus-mt-tr-en-finetuned-az-to-en | 2 | null | transformers | 26,059 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- turkic_xwmt
metrics:
- bleu
model-index:
- name: opus-mt-tr-en-finetuned-az-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: turkic_xwmt
type: turkic_xwmt
args: az-en
metrics:
- name: Bleu
type: bleu
value: 0.0002
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-tr-en-finetuned-az-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tr-en](https://huggingface.co/Helsinki-NLP/opus-mt-tr-en) on the turkic_xwmt dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Bleu: 0.0002
- Gen Len: 511.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.2
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 38 | nan | 0.0002 | 511.0 |
| No log | 2.0 | 76 | nan | 0.0002 | 511.0 |
| No log | 3.0 | 114 | nan | 0.0002 | 511.0 |
| No log | 4.0 | 152 | nan | 0.0002 | 511.0 |
| No log | 5.0 | 190 | nan | 0.0002 | 511.0 |
| No log | 6.0 | 228 | nan | 0.0002 | 511.0 |
| No log | 7.0 | 266 | nan | 0.0002 | 511.0 |
| No log | 8.0 | 304 | nan | 0.0002 | 511.0 |
| No log | 9.0 | 342 | nan | 0.0002 | 511.0 |
| No log | 10.0 | 380 | nan | 0.0002 | 511.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
totoro4007/cryptoroberta-base | 639073072bc27ea2edffebe6717637f8dd52463c | 2022-05-20T07:37:33.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | totoro4007 | null | totoro4007/cryptoroberta-base | 2 | null | transformers | 26,060 | Entry not found |
tornqvistmax/XLMR_finetuned_cluster4 | 7123f02697cf021a860744a784183fb1408d129f | 2022-05-20T13:46:07.000Z | [
"pytorch",
"xlm-roberta-xl",
"text-classification",
"transformers"
] | text-classification | false | tornqvistmax | null | tornqvistmax/XLMR_finetuned_cluster4 | 2 | null | transformers | 26,061 | Entry not found |
HueyNemud/das22-43-camembert_pretrained_finetuned_pero | 359d820b9f6cdcff866a29caab778cefa21f8bc2 | 2022-05-20T16:21:33.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | HueyNemud | null | HueyNemud/das22-43-camembert_pretrained_finetuned_pero | 2 | null | transformers | 26,062 | ---
tags:
- generated_from_trainer
model-index:
- name: CamemBERT pretrained on french trade directories from the XIXth century
results: []
---
# CamemBERT trained and fine-tuned for NER on french trade directories from the XIXth century [PERO-OCR training set]
This model is part of the material of the paper
> Abadie, N., Carlinet, E., Chazalon, J., Duménieu, B. (2022). A
> Benchmark of Named Entity Recognition Approaches in Historical
> Documents: Application to 19th Century French Directories. In: Uchida,
> S., Barney, E., Eglin, V. (eds) Document Analysis Systems. DAS 2022.
> Lecture Notes in Computer Science, vol 13237. Springer, Cham.
> https://doi.org/10.1007/978-3-031-06555-2_30
The source code to train this model is available on the [GitHub repository](https://github.com/soduco/paper-ner-bench-das22) of the paper as a Jupyter notebook in `src/ner/40_experiment_2.ipynb`.
## Model description
This model adapts the model [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for NER on 6004 manually annotated directory entries, referred to as the "reference dataset" in the paper.
Trade directory entries are short, strongly structured texts giving the name, activity and location of a person or business, e.g.:
```
Peynaud, R. de la Vieille Bouclerie, 18. Richard, Joullain et comp., (commission- —Phéâtre Français. naire, (entrepôt), au port de la Rapée-
```
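As an illustration (not part of the original card), entries such as the one above can be tagged with the standard token-classification pipeline; the entity label set is assumed to follow the base camembert-ner model.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="HueyNemud/das22-43-camembert_pretrained_finetuned_pero",  # this card's checkpoint
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

entry = "Peynaud, R. de la Vieille Bouclerie, 18."
for entity in ner(entry):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```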
## Intended uses & limitations
This model is intended for reproducibility of the NER evaluation published in the DAS2022 paper.
Several derived models trained for NER on trade directories are available on HuggingFace, each trained on a different dataset:
- [das22-10-camembert_pretrained_finetuned_ref](): trained for NER on ~6000 directory entries manually corrected.
- [das22-10-camembert_pretrained_finetuned_pero](): trained for NER on ~6000 directory entries extracted with PERO-OCR.
- [das22-10-camembert_pretrained_finetuned_tess](): trained for NER on ~6000 directory entries extracted with Tesseract.
### Training hyperparameters
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
HueyNemud/das22-41-camembert_pretrained_finetuned_ref | ac4f93a35b88df4df2f74354cbe10340ba9490b3 | 2022-05-20T16:27:58.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | HueyNemud | null | HueyNemud/das22-41-camembert_pretrained_finetuned_ref | 2 | null | transformers | 26,063 | ---
tags:
- generated_from_trainer
model-index:
- name: CamemBERT pretrained on french trade directories from the XIXth century
results: []
---
# CamemBERT pretrained and trained for NER on french trade directories from the XIXth century [GOLD training set]
This model is part of the material of the paper
> Abadie, N., Carlinet, E., Chazalon, J., Duménieu, B. (2022). A
> Benchmark of Named Entity Recognition Approaches in Historical
> Documents: Application to 19th Century French Directories. In: Uchida,
> S., Barney, E., Eglin, V. (eds) Document Analysis Systems. DAS 2022.
> Lecture Notes in Computer Science, vol 13237. Springer, Cham.
> https://doi.org/10.1007/978-3-031-06555-2_30
The source code to train this model is available on the [GitHub repository](https://github.com/soduco/paper-ner-bench-das22) of the paper as a Jupyter notebook in `src/ner/40_experiment_2.ipynb`.
## Model description
This model adapts the pre-trained model [das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for NER on 6004 manually annotated directory entries, referred to as the "reference dataset" in the paper.
Trade directory entries are short, strongly structured texts giving the name, activity and location of a person or business, e.g.:
```
Peynaud, R. de la Vieille Bouclerie, 18. Richard, Joullain et comp., (commission- —Phéâtre Français. naire, (entrepôt), au port de la Rapée-
```
## Intended uses & limitations
This model is intended for reproducibility of the NER evaluation published in the DAS2022 paper.
Several derived models trained for NER on trade directories are available on HuggingFace, each trained on a different dataset:
- [das22-10-camembert_pretrained_finetuned_ref](): trained for NER on ~6000 directory entries manually corrected.
- [das22-10-camembert_pretrained_finetuned_pero](): trained for NER on ~6000 directory entries extracted with PERO-OCR.
- [das22-10-camembert_pretrained_finetuned_tess](): trained for NER on ~6000 directory entries extracted with Tesseract.
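As a quick illustration (not part of the original card), the checkpoint can be loaded with the standard `transformers` token-classification pipeline. The repository id is taken from this card's header row; the aggregation strategy is only a suggested setting, and the input reuses a shortened version of the directory line quoted above.
```python
from transformers import pipeline

# Token-classification over the fine-tuned CamemBERT checkpoint.
ner = pipeline(
    "token-classification",
    model="HueyNemud/das22-41-camembert_pretrained_finetuned_ref",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

entry = "Peynaud, R. de la Vieille Bouclerie, 18."
for entity in ner(entry):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```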
### Training hyperparameters
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
okovtun/bert-emotion | 67b9ebf401fe96738d962ecdeeb2c0efc0f8db06 | 2022-05-23T03:51:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | okovtun | null | okovtun/bert-emotion | 2 | null | transformers | 26,064 | Entry not found |
Ukhushn/ukhushn | 156f38a4eb5d89ff33f5f3e42042ae876c85cb2c | 2022-05-20T19:28:31.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | Ukhushn | null | Ukhushn/ukhushn | 2 | null | sentence-transformers | 26,065 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Ukhushn/ukhushn
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Ukhushn/ukhushn')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Ukhushn/ukhushn')
model = AutoModel.from_pretrained('Ukhushn/ukhushn')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Ukhushn/ukhushn)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6661 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2665,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Kuaaangwen/distilroberta-base-finetuned-chemistry | 3c9ac47d8c5413e8e7fcafacc99bbc28f74cb72f | 2022-05-22T08:56:27.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Kuaaangwen | null | Kuaaangwen/distilroberta-base-finetuned-chemistry | 2 | null | transformers | 26,066 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-chemistry
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-chemistry
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 259 | 1.9385 |
| 2.148 | 2.0 | 518 | 1.7923 |
| 2.148 | 3.0 | 777 | 1.7691 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Kuaaangwen/distilroberta-base-finetuned-chemistry-with-new-tokenizer | ce3b095b1d58185f32d385a1e5978c017232c2a5 | 2022-05-21T10:36:36.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Kuaaangwen | null | Kuaaangwen/distilroberta-base-finetuned-chemistry-with-new-tokenizer | 2 | null | transformers | 26,067 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-chemistry-with-new-tokenizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-chemistry-with-new-tokenizer
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 224 | 5.1474 |
| No log | 2.0 | 448 | 4.9120 |
| 5.4707 | 3.0 | 672 | 4.8450 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Leizhang/xlm-roberta-base-finetuned-panx-de | 9aefb5d9f60fd43f3a2fe83ae1fdcc7d2c927a16 | 2022-05-22T12:51:10.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | Leizhang | null | Leizhang/xlm-roberta-base-finetuned-panx-de | 2 | null | transformers | 26,068 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
stevemobs/bert-base-spanish-wwm-uncased-finetuned-squad_es | c8ce07505cee089d3239bb1bb522e318adcc046a | 2022-05-22T03:38:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad_es",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | stevemobs | null | stevemobs/bert-base-spanish-wwm-uncased-finetuned-squad_es | 2 | null | transformers | 26,069 | ---
tags:
- generated_from_trainer
datasets:
- squad_es
model-index:
- name: bert-base-spanish-wwm-uncased-finetuned-squad_es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-uncased-finetuned-squad_es
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the squad_es dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7747
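A minimal usage sketch (not from the original card): the checkpoint can be queried with the standard `transformers` question-answering pipeline. The Spanish question/context pair below is purely illustrative.
```python
from transformers import pipeline

# Extractive question answering with the SQuAD-es fine-tuned checkpoint.
qa = pipeline(
    "question-answering",
    model="stevemobs/bert-base-spanish-wwm-uncased-finetuned-squad_es",
)

# Illustrative example, not taken from the squad_es dataset.
result = qa(
    question="¿Dónde se encuentra la sede de la empresa?",
    context="La empresa fue fundada en 1990 y su sede se encuentra en Madrid.",
)
print(result["answer"], round(result["score"], 3))
```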
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.5377 | 1.0 | 8259 | 1.4632 |
| 1.1928 | 2.0 | 16518 | 1.5536 |
| 0.9486 | 3.0 | 24777 | 1.7747 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
himanshubeniwal/distilbert-base-uncased-finetuned-cola | 46d3ce0c60d15466a3e3c6c100757a809f5a7d39 | 2022-05-22T08:48:32.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | himanshubeniwal | null | himanshubeniwal/distilbert-base-uncased-finetuned-cola | 2 | 1 | transformers | 26,070 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5383825234212567
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8011
- Matthews Correlation: 0.5384
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5215 | 1.0 | 535 | 0.5279 | 0.4360 |
| 0.3478 | 2.0 | 1070 | 0.5187 | 0.4925 |
| 0.2348 | 3.0 | 1605 | 0.5646 | 0.5341 |
| 0.1741 | 4.0 | 2140 | 0.7430 | 0.5361 |
| 0.1253 | 5.0 | 2675 | 0.8011 | 0.5384 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Chen1999/distilbert-base-uncased-finetuned-imdb | 25aefefe5250055f8cbb56d5caa5e1969658372f | 2022-05-22T07:32:54.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Chen1999 | null | Chen1999/distilbert-base-uncased-finetuned-imdb | 2 | null | transformers | 26,071 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4720
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 0.99 | 156 | 2.5081 |
| 2.5795 | 1.99 | 312 | 2.4608 |
| 2.5257 | 2.98 | 468 | 2.4520 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Minenine/distilbert-base-uncased-finetuned-imdb | a1f8d3959bb9af8159b926c32a0a2a72a09e6058 | 2022-05-22T08:10:33.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Minenine | null | Minenine/distilbert-base-uncased-finetuned-imdb | 2 | null | transformers | 26,072 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
moghis/xlm-roberta-base-finetuned-panx-fr | 2fbe952f51a847888267874637adaf9243cdf1b1 | 2022-05-22T12:18:03.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | moghis | null | moghis/xlm-roberta-base-finetuned-panx-fr | 2 | null | transformers | 26,073 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2684
- F1 Score: 0.8380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5416 | 1.0 | 191 | 0.3088 | 0.7953 |
| 0.2614 | 2.0 | 382 | 0.2822 | 0.8310 |
| 0.1758 | 3.0 | 573 | 0.2684 | 0.8380 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
stevemobs/distilbert-base-uncased-finetuned-squad-finetuned-squad_adversarial | e6f89059d5281c5e67915001bb03c7c4ae65e4ef | 2022-05-22T12:13:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:adversarial_qa",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | stevemobs | null | stevemobs/distilbert-base-uncased-finetuned-squad-finetuned-squad_adversarial | 2 | null | transformers | 26,074 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- adversarial_qa
model-index:
- name: distilbert-base-uncased-finetuned-squad-finetuned-squad_adversarial
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-finetuned-squad_adversarial
This model is a fine-tuned version of [stevemobs/distilbert-base-uncased-finetuned-squad](https://huggingface.co/stevemobs/distilbert-base-uncased-finetuned-squad) on the adversarial_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3121
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6352 | 1.0 | 1896 | 2.2623 |
| 2.1121 | 2.0 | 3792 | 2.2465 |
| 1.7932 | 3.0 | 5688 | 2.3121 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
moghis/xlm-roberta-base-finetuned-panx-en | befcbc5bd8ad95013b7370dcb79ad5e6a263979c | 2022-05-22T12:48:39.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | moghis | null | moghis/xlm-roberta-base-finetuned-panx-en | 2 | null | transformers | 26,075 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3932
- F1 Score: 0.6774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0236 | 1.0 | 50 | 0.5462 | 0.5109 |
| 0.5047 | 2.0 | 100 | 0.4387 | 0.6370 |
| 0.3716 | 3.0 | 150 | 0.3932 | 0.6774 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
versae/mdeberta-v3-base-finetuned-recores | fb8f15c05c7195b0a0af1d5468b075ce2ff410ed | 2022-05-22T20:34:41.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | multiple-choice | false | versae | null | versae/mdeberta-v3-base-finetuned-recores | 2 | null | transformers | 26,076 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-finetuned-recores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-finetuned-recores
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6094
- Accuracy: 0.2011
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 3000
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6112 | 1.0 | 1047 | 1.6094 | 0.1901 |
| 1.608 | 2.0 | 2094 | 1.6094 | 0.1873 |
| 1.6127 | 3.0 | 3141 | 1.6095 | 0.1983 |
| 1.6125 | 4.0 | 4188 | 1.6094 | 0.2424 |
| 1.6118 | 5.0 | 5235 | 1.6094 | 0.1956 |
| 1.6181 | 6.0 | 6282 | 1.6094 | 0.2094 |
| 1.6229 | 7.0 | 7329 | 1.6095 | 0.1680 |
| 1.6125 | 8.0 | 8376 | 1.6094 | 0.1736 |
| 1.6134 | 9.0 | 9423 | 1.6094 | 0.2066 |
| 1.6174 | 10.0 | 10470 | 1.6093 | 0.2204 |
| 1.6161 | 11.0 | 11517 | 1.6096 | 0.2121 |
| 1.6198 | 12.0 | 12564 | 1.6094 | 0.2039 |
| 1.6182 | 13.0 | 13611 | 1.6094 | 0.2287 |
| 1.6208 | 14.0 | 14658 | 1.6094 | 0.2287 |
| 1.6436 | 15.0 | 15705 | 1.6092 | 0.2287 |
| 1.6209 | 16.0 | 16752 | 1.6094 | 0.2094 |
| 1.6097 | 17.0 | 17799 | 1.6094 | 0.2094 |
| 1.6115 | 18.0 | 18846 | 1.6094 | 0.2149 |
| 1.6249 | 19.0 | 19893 | 1.6094 | 0.1956 |
| 1.6201 | 20.0 | 20940 | 1.6094 | 0.1763 |
| 1.6217 | 21.0 | 21987 | 1.6094 | 0.1956 |
| 1.6193 | 22.0 | 23034 | 1.6094 | 0.1846 |
| 1.6171 | 23.0 | 24081 | 1.6095 | 0.1983 |
| 1.6123 | 24.0 | 25128 | 1.6095 | 0.1846 |
| 1.6164 | 25.0 | 26175 | 1.6094 | 0.2011 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.10.1+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
HomerChatbot/HomerSimpson | 3adf4fa95efcef036deb37a820a6ae4024d6732a | 2022-05-25T02:48:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | HomerChatbot | null | HomerChatbot/HomerSimpson | 2 | null | transformers | 26,077 | ---
tags:
- conversational
---
# Homer Simpson Chatbot |
globuslabs/ScholarBERT-XL_1 | f68001f152e72c6612dd6241b7b2f2288528dafe | 2022-05-24T03:14:00.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"arxiv:2205.11342",
"transformers",
"science",
"multi-displinary",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | globuslabs | null | globuslabs/ScholarBERT-XL_1 | 2 | null | transformers | 26,078 | ---
language: en
tags:
- science
- multi-displinary
license: apache-2.0
---
# ScholarBERT-XL_1 Model
This is the **ScholarBERT-XL_1** variant of the ScholarBERT model family.
The model is pretrained on a large collection of scientific research articles (**2.2B tokens**).
This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default.
The model has a total of 770M parameters.
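As an illustrative sketch (not part of the original card), the checkpoint can be exercised with the standard `transformers` fill-mask pipeline, assuming the usual BERT `[MASK]` token; the scientific-style prompt below is invented, not drawn from the PRD corpus.
```python
from transformers import pipeline

# Masked-token prediction with the ScholarBERT-XL_1 checkpoint.
fill = pipeline("fill-mask", model="globuslabs/ScholarBERT-XL_1")

for prediction in fill("The enzyme catalyzes the [MASK] of the substrate."):
    print(prediction["token_str"], round(prediction["score"], 4))
```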
# Model Architecture
| Hyperparameter | Value |
|-----------------|:-------:|
| Layers | 36 |
| Hidden Size | 1280 |
| Attention Heads | 20 |
| Total Parameters | 770M |
# Training Dataset
The vocab and the model are pretrained on **1% of the PRD** scientific literature dataset.
The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”),
a nonprofit organization based in California. This dataset was constructed from a corpus
of journal article files, from which we successfully extracted text from 75,496,055 articles from 178,928 journals.
The articles span across Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences,
Social Sciences, and Technology. The distribution of articles is shown below.

# BibTeX entry and citation info
If using this model, please cite this paper:
```
@misc{hong2022scholarbert,
doi = {10.48550/ARXIV.2205.11342},
url = {https://arxiv.org/abs/2205.11342},
author = {Hong, Zhi and Ajith, Aswathy and Pauloski, Gregory and Duede, Eamon and Malamud, Carl and Magoulas, Roger and Chard, Kyle and Foster, Ian},
title = {ScholarBERT: Bigger is Not Always Better},
publisher = {arXiv},
year = {2022}
}
``` |
krotima1/mbart-ht2a-s | ca2683630ec79f3ca2af34756b70e644f52d1e2e | 2022-05-23T20:38:00.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"cs",
"dataset:SumeCzech dataset news-based",
"transformers",
"abstractive summarization",
"mbart-cc25",
"Czech",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | krotima1 | null | krotima1/mbart-ht2a-s | 2 | null | transformers | 26,079 | ---
language:
- cs
- cs
tags:
- abstractive summarization
- mbart-cc25
- Czech
license: apache-2.0
datasets:
- SumeCzech dataset news-based
metrics:
- rouge
- rougeraw
---
# mBART fine-tuned model for Czech abstractive summarization (HT2A-S)
This model is a fine-tuned checkpoint of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the Czech news dataset to produce Czech abstractive summaries.
## Task
The model deals with the task ``Headline + Text to Abstract`` (HT2A), which consists of generating a multi-sentence, abstract-style summary from a Czech news text.
## Dataset
The model has been trained on the [SumeCzech](https://ufal.mff.cuni.cz/sumeczech) dataset. The dataset includes around 1M Czech news-based documents consisting of a Headline, Abstract, and Full-text sections. Truncation and padding were configured for 512 tokens for the encoder and 128 for the decoder.
## Training
The model has been trained on 1x NVIDIA Tesla A100 40GB for 20 hours, 1x NVIDIA Tesla V100 32GB for 40 hours, and 4x NVIDIA Tesla A100 40GB for 20 hours. During training, the model has seen 6928K documents corresponding to roughly 8 epochs.
# Use
Assuming you are using the provided Summarizer.ipynb file.
```python
def summ_config():
cfg = OrderedDict([
# summarization model - checkpoint from website
("model_name", "krotima1/mbart-ht2a-s"),
("inference_cfg", OrderedDict([
("num_beams", 4),
("top_k", 40),
("top_p", 0.92),
("do_sample", True),
("temperature", 0.89),
("repetition_penalty", 1.2),
("no_repeat_ngram_size", None),
("early_stopping", True),
("max_length", 128),
("min_length", 10),
])),
#texts to summarize
("text",
[
"Input your Czech text",
]
),
])
return cfg
cfg = summ_config()
#load model
model = AutoModelForSeq2SeqLM.from_pretrained(cfg["model_name"])
tokenizer = AutoTokenizer.from_pretrained(cfg["model_name"])
# init summarizer
summarize = Summarizer(model, tokenizer, cfg["inference_cfg"])
summarize(cfg["text"])
``` |
krotima1/mbart-at2h-s | aacc4ec6ca1a35d673097526d43504e35579979f | 2022-05-23T20:36:30.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"cs",
"dataset:SumeCzech dataset news-based",
"transformers",
"abstractive summarization",
"mbart-cc25",
"Czech",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | krotima1 | null | krotima1/mbart-at2h-s | 2 | null | transformers | 26,080 | ---
language:
- cs
- cs
tags:
- abstractive summarization
- mbart-cc25
- Czech
license: apache-2.0
datasets:
- SumeCzech dataset news-based
metrics:
- rouge
- rougeraw
---
# mBART fine-tuned model for Czech abstractive summarization (AT2H-S)
This model is a fine-tuned checkpoint of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the Czech news dataset to produce Czech abstractive summaries.
## Task
The model deals with the task ``Abstract + Text to Headline`` (AT2H), which consists of generating a one- or two-sentence, headline-style summary from a Czech news text.
## Dataset
The model has been trained on the [SumeCzech](https://ufal.mff.cuni.cz/sumeczech) dataset. The dataset includes around 1M Czech news-based documents consisting of a Headline, Abstract, and Full-text sections. Truncation and padding were configured for 512 tokens for the encoder and 64 for the decoder.
## Training
The model has been trained on 1x NVIDIA Tesla A100 40GB for 40 hours. During training, the model has seen 2576K documents corresponding to roughly 3 epochs.
# Use
Assuming you are using the provided Summarizer.ipynb file.
```python
def summ_config():
cfg = OrderedDict([
# summarization model - checkpoint from website
("model_name", "krotima1/mbart-at2h-s"),
("inference_cfg", OrderedDict([
("num_beams", 4),
("top_k", 40),
("top_p", 0.92),
("do_sample", True),
("temperature", 0.89),
("repetition_penalty", 1.2),
("no_repeat_ngram_size", None),
("early_stopping", True),
("max_length", 64),
("min_length", 10),
])),
#texts to summarize
("text",
[
"Input your Czech text",
]
),
])
return cfg
cfg = summ_config()
#load model
model = AutoModelForSeq2SeqLM.from_pretrained(cfg["model_name"])
tokenizer = AutoTokenizer.from_pretrained(cfg["model_name"])
# init summarizer
summarize = Summarizer(model, tokenizer, cfg["inference_cfg"])
summarize(cfg["text"])
``` |
jinesh90/distilbert-base-uncased-finetuned-emotinons-jinesh | 826aec13ae978fb37751645247a4a09e3603deea | 2022-05-22T23:55:58.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jinesh90 | null | jinesh90/distilbert-base-uncased-finetuned-emotinons-jinesh | 2 | null | transformers | 26,081 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotinons-jinesh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotinons-jinesh
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.9275
- F1: 0.9274
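A minimal usage sketch (not from the original card): the classifier can be called through the standard `transformers` text-classification pipeline. The input sentence is illustrative, and the emitted label names depend on the training dataset, which the card does not identify.
```python
from transformers import pipeline

# Emotion classification with the fine-tuned DistilBERT checkpoint.
classifier = pipeline(
    "text-classification",
    model="jinesh90/distilbert-base-uncased-finetuned-emotinons-jinesh",
)

print(classifier("I am thrilled with how this turned out!"))
```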
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8177 | 1.0 | 250 | 0.3146 | 0.904 | 0.9009 |
| 0.246 | 2.0 | 500 | 0.2175 | 0.9275 | 0.9274 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
hamidov02/wav2vec2-large-xls-r-300m-turkish-colab | 5aebcff85fd8c3e08ec1651305067b73e53b0252 | 2022-05-24T00:05:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hamidov02 | null | hamidov02/wav2vec2-large-xls-r-300m-turkish-colab | 2 | null | transformers | 26,082 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3701
- Wer: 0.2946
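A minimal usage sketch (not from the original card): the checkpoint can be used with the standard `transformers` automatic-speech-recognition pipeline; `sample.wav` is a placeholder path to a 16 kHz Turkish speech recording.
```python
from transformers import pipeline

# Speech recognition with the Turkish XLS-R fine-tune.
asr = pipeline(
    "automatic-speech-recognition",
    model="hamidov02/wav2vec2-large-xls-r-300m-turkish-colab",
)

print(asr("sample.wav")["text"])
```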
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 32
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8287 | 3.67 | 400 | 0.6628 | 0.6928 |
| 0.3926 | 7.34 | 800 | 0.4257 | 0.4716 |
| 0.1847 | 11.01 | 1200 | 0.4034 | 0.3931 |
| 0.1273 | 14.68 | 1600 | 0.4094 | 0.3664 |
| 0.0991 | 18.35 | 2000 | 0.4133 | 0.3375 |
| 0.0811 | 22.02 | 2400 | 0.4021 | 0.3301 |
| 0.0646 | 25.69 | 2800 | 0.3949 | 0.3166 |
| 0.0513 | 29.36 | 3200 | 0.3701 | 0.2946 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
GiordanoB/mT5_multilingual_XLSum-finetuned-summarization-V2 | 13b408c27fafd0dc1341505cdeed8569572f762f | 2022-05-23T07:17:22.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | GiordanoB | null | GiordanoB/mT5_multilingual_XLSum-finetuned-summarization-V2 | 2 | null | transformers | 26,083 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mT5_multilingual_XLSum-finetuned-summarization-V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-summarization-V2
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5523
- Rouge1: 25.8727
- Rouge2: 16.1688
- Rougel: 19.8093
- Rougelsum: 23.4429
- Gen Len: 34.4286
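As a rough illustration (not part of the original card), the checkpoint can be called through the standard `transformers` summarization pipeline; the input string is a placeholder and the generation settings are only suggestions.
```python
from transformers import pipeline

# Abstractive summarization with the fine-tuned mT5 checkpoint.
summarizer = pipeline(
    "summarization",
    model="GiordanoB/mT5_multilingual_XLSum-finetuned-summarization-V2",
)

text = "Texto longo a ser resumido ..."  # placeholder document
summary = summarizer(text, max_length=64, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```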
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 13 | 1.8850 | 23.9901 | 12.4882 | 17.2823 | 20.8977 | 31.2857 |
| No log | 2.0 | 26 | 1.5894 | 25.1547 | 14.8857 | 19.2203 | 22.9079 | 31.8571 |
| No log | 3.0 | 39 | 1.5523 | 25.8727 | 16.1688 | 19.8093 | 23.4429 | 34.4286 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
PSW/samsum_percent10_minsimdel | 799a3b205bebde4d0859b2017f33bf3dbca376e0 | 2022-05-23T06:05:19.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_percent10_minsimdel | 2 | null | transformers | 26,084 | Entry not found |
PSW/samsum_percent10_maxsimins | 1d4e623674ddea6d3f5a3ccbc383ae87a85847b5 | 2022-05-23T06:34:21.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_percent10_maxsimins | 2 | null | transformers | 26,085 | Entry not found |
chrisvinsen/wav2vec2-9 | d2c73707418c236ff587f3e58c86b39c1225027a | 2022-05-23T13:52:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-9 | 2 | null | transformers | 26,086 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-9
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0821
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.2803 | 1.56 | 200 | 3.1231 | 1.0 |
| 2.8809 | 3.12 | 400 | 3.0366 | 1.0 |
| 2.8761 | 4.69 | 600 | 3.1217 | 1.0 |
| 2.8641 | 6.25 | 800 | 3.0584 | 1.0 |
| 2.866 | 7.81 | 1000 | 3.0318 | 1.0 |
| 2.865 | 9.38 | 1200 | 3.0789 | 1.0 |
| 2.8642 | 10.94 | 1400 | 3.0560 | 1.0 |
| 2.8617 | 12.5 | 1600 | 2.9985 | 1.0 |
| 2.8573 | 14.06 | 1800 | 3.1928 | 1.0 |
| 2.8609 | 15.62 | 2000 | 3.0782 | 1.0 |
| 2.8605 | 17.19 | 2200 | 3.1244 | 1.0 |
| 2.8638 | 18.75 | 2400 | 3.0417 | 1.0 |
| 2.8578 | 20.31 | 2600 | 3.1586 | 1.0 |
| 2.8579 | 21.88 | 2800 | 3.0409 | 1.0 |
| 2.8569 | 23.44 | 3000 | 3.0537 | 1.0 |
| 2.8574 | 25.0 | 3200 | 3.0105 | 1.0 |
| 2.8536 | 26.56 | 3400 | 3.0901 | 1.0 |
| 2.8571 | 28.12 | 3600 | 3.0904 | 1.0 |
| 2.8532 | 29.69 | 3800 | 3.0821 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
chrisvinsen/wav2vec2-10 | 7a9b4442d37b712e8b28b3617c469d86b2526371 | 2022-05-23T19:05:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-10 | 2 | null | transformers | 26,087 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-10
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0354
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.2231 | 0.78 | 200 | 3.0442 | 1.0 |
| 2.8665 | 1.57 | 400 | 3.0081 | 1.0 |
| 2.8596 | 2.35 | 600 | 3.0905 | 1.0 |
| 2.865 | 3.14 | 800 | 3.0443 | 1.0 |
| 2.8613 | 3.92 | 1000 | 3.0316 | 1.0 |
| 2.8601 | 4.71 | 1200 | 3.0574 | 1.0 |
| 2.8554 | 5.49 | 1400 | 3.0261 | 1.0 |
| 2.8592 | 6.27 | 1600 | 3.0785 | 1.0 |
| 2.8606 | 7.06 | 1800 | 3.1129 | 1.0 |
| 2.8547 | 7.84 | 2000 | 3.0647 | 1.0 |
| 2.8565 | 8.63 | 2200 | 3.0624 | 1.0 |
| 2.8633 | 9.41 | 2400 | 2.9900 | 1.0 |
| 2.855 | 10.2 | 2600 | 3.0084 | 1.0 |
| 2.8581 | 10.98 | 2800 | 3.0092 | 1.0 |
| 2.8545 | 11.76 | 3000 | 3.0299 | 1.0 |
| 2.8583 | 12.55 | 3200 | 3.0293 | 1.0 |
| 2.8536 | 13.33 | 3400 | 3.0566 | 1.0 |
| 2.8556 | 14.12 | 3600 | 3.0385 | 1.0 |
| 2.8573 | 14.9 | 3800 | 3.0098 | 1.0 |
| 2.8551 | 15.69 | 4000 | 3.0623 | 1.0 |
| 2.8546 | 16.47 | 4200 | 3.0964 | 1.0 |
| 2.8569 | 17.25 | 4400 | 3.0648 | 1.0 |
| 2.8543 | 18.04 | 4600 | 3.0377 | 1.0 |
| 2.8532 | 18.82 | 4800 | 3.0454 | 1.0 |
| 2.8579 | 19.61 | 5000 | 3.0301 | 1.0 |
| 2.8532 | 20.39 | 5200 | 3.0364 | 1.0 |
| 2.852 | 21.18 | 5400 | 3.0187 | 1.0 |
| 2.8561 | 21.96 | 5600 | 3.0172 | 1.0 |
| 2.8509 | 22.75 | 5800 | 3.0420 | 1.0 |
| 2.8551 | 23.53 | 6000 | 3.0309 | 1.0 |
| 2.8552 | 24.31 | 6200 | 3.0416 | 1.0 |
| 2.8521 | 25.1 | 6400 | 3.0469 | 1.0 |
| 2.852 | 25.88 | 6600 | 3.0489 | 1.0 |
| 2.854 | 26.67 | 6800 | 3.0394 | 1.0 |
| 2.8572 | 27.45 | 7000 | 3.0336 | 1.0 |
| 2.8502 | 28.24 | 7200 | 3.0363 | 1.0 |
| 2.8557 | 29.02 | 7400 | 3.0304 | 1.0 |
| 2.8522 | 29.8 | 7600 | 3.0354 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jeremyccollinsmpi/autotrain-inference_probability_3-900329401 | 8c43f533c2be72b63090692aba11323f1d547fa1 | 2022-05-23T16:04:36.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:jeremyccollinsmpi/autotrain-data-inference_probability_3",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | jeremyccollinsmpi | null | jeremyccollinsmpi/autotrain-inference_probability_3-900329401 | 2 | null | transformers | 26,088 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- jeremyccollinsmpi/autotrain-data-inference_probability_3
co2_eq_emissions: 3.807314953201688
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 900329401
- CO2 Emissions (in grams): 3.807314953201688
## Validation Metrics
- Loss: 0.06255918741226196
- Rouge1: 94.0693
- Rouge2: 0.0
- RougeL: 94.0693
- RougeLsum: 94.1126
- Gen Len: 2.8528
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/jeremyccollinsmpi/autotrain-inference_probability_3-900329401
``` |
CEBaB/gpt2.CEBaB.causalm.None__None.2-class.exclusive.seed_42 | 2602845d6eb691cad4e755611eb565c7aedc208c | 2022-05-24T10:04:25.000Z | [
"pytorch",
"gpt2_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.causalm.None__None.2-class.exclusive.seed_42 | 2 | null | transformers | 26,089 | Entry not found |
CEBaB/gpt2.CEBaB.causalm.None__None.2-class.exclusive.seed_43 | 6da6204a1dcfccc49d3d4a3ff91eb067ba2b0174 | 2022-05-24T10:04:27.000Z | [
"pytorch",
"gpt2_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.causalm.None__None.2-class.exclusive.seed_43 | 2 | null | transformers | 26,090 | Entry not found |
CEBaB/gpt2.CEBaB.causalm.None__None.2-class.exclusive.seed_46 | 41305e60eafbd9e8cfa8ca852e5f650282de623b | 2022-05-24T10:04:33.000Z | [
"pytorch",
"gpt2_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.causalm.None__None.2-class.exclusive.seed_46 | 2 | null | transformers | 26,091 | Entry not found |
CEBaB/gpt2.CEBaB.causalm.None__None.3-class.exclusive.seed_44 | fe14aa929621bf955f74c6dbd492b76906968df0 | 2022-05-24T10:07:48.000Z | [
"pytorch",
"gpt2_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.causalm.None__None.3-class.exclusive.seed_44 | 2 | null | transformers | 26,092 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.noise__food.2-class.exclusive.seed_42 | e7693ee902c31a10321162b5ef63f5791956c91a | 2022-05-24T12:10:39.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.noise__food.2-class.exclusive.seed_42 | 2 | null | transformers | 26,093 | Entry not found |
Mich/distilbert-base-uncased-finetuned-imdb | b3757313cf018a55158e254708c469d5adbdf7c7 | 2022-05-23T19:17:21.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Mich | null | Mich/distilbert-base-uncased-finetuned-imdb | 2 | null | transformers | 26,094 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8574 | 1.0 | 32 | 2.6973 |
| 2.7248 | 2.0 | 64 | 2.5887 |
| 2.7313 | 3.0 | 96 | 2.6203 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
CEBaB/bert-base-uncased.CEBaB.causalm.service__food.2-class.exclusive.seed_42 | 7421d058c93664df068cfc9fbcad1154e472ee2c | 2022-05-24T12:12:03.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.service__food.2-class.exclusive.seed_42 | 2 | null | transformers | 26,095 | Entry not found |
CEBaB/gpt2.CEBaB.causalm.None__None.5-class.exclusive.seed_44 | e0f99b9ce42b102a3aafc59006409b444cb16a41 | 2022-05-24T10:11:08.000Z | [
"pytorch",
"gpt2_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.causalm.None__None.5-class.exclusive.seed_44 | 2 | null | transformers | 26,096 | Entry not found |
CEBaB/gpt2.CEBaB.causalm.None__None.5-class.exclusive.seed_45 | 0062eee6c24b54d8be38bf5fd33a30dc2ce47edf | 2022-05-24T10:11:10.000Z | [
"pytorch",
"gpt2_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.causalm.None__None.5-class.exclusive.seed_45 | 2 | null | transformers | 26,097 | Entry not found |
CEBaB/gpt2.CEBaB.causalm.ambiance__food.2-class.exclusive.seed_46 | 830086fb179d8e761cd8e938447ae59f6630cdcb | 2022-05-24T10:04:43.000Z | [
"pytorch",
"gpt2_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.causalm.ambiance__food.2-class.exclusive.seed_46 | 2 | null | transformers | 26,098 | Entry not found |
CEBaB/gpt2.CEBaB.causalm.food__service.2-class.exclusive.seed_45 | 589d85698ccb4c2ac89be0698018f91549742adf | 2022-05-24T10:04:51.000Z | [
"pytorch",
"gpt2_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.causalm.food__service.2-class.exclusive.seed_45 | 2 | null | transformers | 26,099 | Entry not found |