modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
ctu-aic/xlm-roberta-large-xnli-enfever_nli | ctu-aic | 2022-10-21T13:52:57Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"arxiv:2201.11115",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-21T13:47:11Z | ---
datasets:
- ctu-aic/enfever_nli
languages:
- cs
license: cc-by-sa-4.0
tags:
- natural-language-inference
---
# 🦾 xlm-roberta-large-xnli-enfever_nli
Transformer model for **Natural Language Inference** in Czech (`cs`), finetuned on the `ctu-aic/enfever_nli` dataset.
## 🧰 Usage
### 👾 Using UKPLab `sentence_transformers` `CrossEncoder`
The model was trained using the `CrossEncoder` API, which we recommend for inference.
```python
from sentence_transformers.cross_encoder import CrossEncoder
model = CrossEncoder('ctu-aic/xlm-roberta-large-xnli-enfever_nli')
scores = model.predict([["My first context.", "My first hypothesis."],
["Second context.", "Hypothesis."]])
```
### 🤗 Using Huggingface `transformers`
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ctu-aic/xlm-roberta-large-xnli-enfever_nli")
tokenizer = AutoTokenizer.from_pretrained("ctu-aic/xlm-roberta-large-xnli-enfever_nli")
```
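To score a premise–hypothesis pair with the loaded model, here is a minimal sketch; the class order and label names should be read from `model.config.id2label` rather than assumed:
```python
import torch

inputs = tokenizer("My first context.", "My first hypothesis.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
for idx, prob in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(prob, 3))
```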
## 🌳 Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
## 👬 Authors
The model was trained and uploaded by **[ullriher](https://udb.fel.cvut.cz/?uid=ullriher&sn=&givenname=&_cmd=Hledat&_reqn=1&_type=user&setlang=en)** (e-mail: [[email protected]](mailto:[email protected]))
The code was co-developed by the NLP team at the Artificial Intelligence Center of CTU in Prague ([AIC](https://www.aic.fel.cvut.cz/)).
## 🔐 License
[cc-by-sa-4.0](https://choosealicense.com/licenses/cc-by-sa-4.0)
## 💬 Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{DBLP:journals/corr/abs-2201-11115,
author = {Herbert Ullrich and
Jan Drchal and
Martin R{\'{y}}par and
Hana Vincourov{\'{a}} and
V{\'{a}}clav Moravec},
title = {CsFEVER and CTKFacts: Acquiring Czech Data for Fact Verification},
journal = {CoRR},
volume = {abs/2201.11115},
year = {2022},
url = {https://arxiv.org/abs/2201.11115},
eprinttype = {arXiv},
eprint = {2201.11115},
timestamp = {Tue, 01 Feb 2022 14:59:01 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-11115.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
manirai91/enlm-r | manirai91 | 2022-10-21T13:50:54Z | 73 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-01T07:58:38Z | ---
tags:
- generated_from_trainer
model-index:
- name: enlm-r
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enlm-r
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 128
- total_train_batch_size: 8192
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 24000
- num_epochs: 81
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.4 | 0.33 | 160 | 10.7903 |
| 6.4 | 0.66 | 320 | 10.1431 |
| 6.4 | 0.99 | 480 | 9.8708 |
| 6.4 | 0.33 | 640 | 9.3884 |
| 6.4 | 0.66 | 800 | 8.7352 |
| 6.4 | 0.99 | 960 | 8.3341 |
| 6.4 | 1.33 | 1120 | 8.0614 |
| 6.4 | 1.66 | 1280 | 7.8582 |
| 4.2719 | 1.99 | 1440 | 7.4879 |
| 3.2 | 3.3 | 1600 | 7.2689 |
| 3.2 | 3.63 | 1760 | 7.1434 |
| 3.2 | 3.96 | 1920 | 7.0576 |
| 3.2 | 4.29 | 2080 | 7.0030 |
| 3.2 | 4.62 | 2240 | 6.9612 |
| 3.2 | 4.95 | 2400 | 6.9394 |
| 3.2 | 5.28 | 2560 | 6.9559 |
| 3.2 | 5.61 | 2720 | 6.8964 |
| 3.2 | 5.94 | 2880 | 6.8939 |
| 3.2 | 6.27 | 3040 | 6.8871 |
| 3.2 | 6.6 | 3200 | 6.8771 |
| 3.2 | 6.93 | 3360 | 6.8617 |
| 3.2 | 7.26 | 3520 | 6.8472 |
| 3.2 | 7.59 | 3680 | 6.8283 |
| 3.2 | 7.92 | 3840 | 6.8082 |
| 3.2 | 8.25 | 4000 | 6.8119 |
| 3.2 | 8.58 | 4160 | 6.7962 |
| 3.2 | 8.91 | 4320 | 6.7751 |
| 3.2 | 9.24 | 4480 | 6.7405 |
| 3.2 | 9.57 | 4640 | 6.7412 |
| 3.2 | 9.9 | 4800 | 6.7279 |
| 3.2 | 10.22 | 4960 | 6.7069 |
| 3.2 | 10.55 | 5120 | 6.6998 |
| 3.2 | 10.88 | 5280 | 6.6875 |
| 3.2 | 11.22 | 5440 | 6.6580 |
| 3.2 | 11.55 | 5600 | 6.6402 |
| 3.2 | 11.88 | 5760 | 6.6281 |
| 3.2 | 12.21 | 5920 | 6.6181 |
| 3.2 | 12.54 | 6080 | 6.5995 |
| 3.2 | 12.87 | 6240 | 6.5970 |
| 3.2 | 13.2 | 6400 | 6.5772 |
| 3.2 | 13.53 | 6560 | 6.5594 |
| 3.2 | 13.85 | 6720 | 6.5400 |
| 3.2 | 14.19 | 6880 | 6.5396 |
| 3.2 | 14.51 | 7040 | 6.5211 |
| 3.2 | 14.84 | 7200 | 6.5140 |
| 3.2 | 15.18 | 7360 | 6.4002 |
| 3.2 | 15.5 | 7520 | 6.3170 |
| 3.2 | 15.83 | 7680 | 6.2621 |
| 3.2 | 16.16 | 7840 | 6.2253 |
| 3.2 | 16.49 | 8000 | 6.1722 |
| 3.2 | 16.82 | 8160 | 6.1106 |
| 3.2 | 17.15 | 8320 | 6.1281 |
| 3.2 | 17.48 | 8480 | 6.0019 |
| 3.2 | 17.81 | 8640 | 5.9069 |
| 3.2 | 18.14 | 8800 | 5.7105 |
| 3.2 | 18.47 | 8960 | 5.2741 |
| 3.2 | 18.8 | 9120 | 5.0369 |
| 5.0352 | 19.13 | 9280 | 4.8148 |
| 4.5102 | 19.26 | 9440 | 4.3175 |
| 4.1247 | 19.59 | 9600 | 3.9518 |
| 3.8443 | 20.12 | 9760 | 3.6712 |
| 3.6334 | 20.45 | 9920 | 3.4654 |
| 3.4698 | 20.78 | 10080 | 3.2994 |
| 3.3267 | 21.11 | 10240 | 3.1638 |
| 3.2173 | 21.44 | 10400 | 3.0672 |
| 3.1255 | 21.77 | 10560 | 2.9687 |
| 3.0344 | 22.1 | 10720 | 2.8865 |
| 2.9645 | 22.43 | 10880 | 2.8104 |
| 2.9046 | 22.76 | 11040 | 2.7497 |
| 2.8707 | 23.09 | 11200 | 2.7040 |
| 2.7903 | 23.42 | 11360 | 2.6416 |
| 2.7339 | 23.75 | 11520 | 2.5891 |
| 2.6894 | 24.08 | 11680 | 2.5370 |
| 2.6461 | 24.41 | 11840 | 2.4960 |
| 2.5976 | 24.74 | 12000 | 2.4508 |
| 2.5592 | 25.07 | 12160 | 2.4194 |
| 2.5305 | 25.4 | 12320 | 2.3790 |
| 2.4993 | 25.73 | 12480 | 2.3509 |
| 2.465 | 26.06 | 12640 | 2.3173 |
| 2.4455 | 26.39 | 12800 | 2.2934 |
| 2.4107 | 26.72 | 12960 | 2.2701 |
| 2.3883 | 27.05 | 13120 | 2.2378 |
| 2.3568 | 27.38 | 13280 | 2.2079 |
| 2.3454 | 27.71 | 13440 | 2.1919 |
| 2.3207 | 28.04 | 13600 | 2.1671 |
| 2.2963 | 28.37 | 13760 | 2.1513 |
| 2.2738 | 28.7 | 13920 | 2.1326 |
| 2.2632 | 29.03 | 14080 | 2.1176 |
| 2.2413 | 29.36 | 14240 | 2.0913 |
| 2.2193 | 29.69 | 14400 | 2.0772 |
| 2.2169 | 30.02 | 14560 | 2.0692 |
| 2.1848 | 30.35 | 14720 | 2.0411 |
| 2.1693 | 30.68 | 14880 | 2.0290 |
| 2.1964 | 31.01 | 15040 | 2.0169 |
| 2.1467 | 31.34 | 15200 | 2.0016 |
| 2.1352 | 31.67 | 15360 | 1.9880 |
| 2.1152 | 32.0 | 15520 | 1.9727 |
| 2.1098 | 32.33 | 15680 | 1.9604 |
| 2.0888 | 32.66 | 15840 | 1.9521 |
| 2.0837 | 32.99 | 16000 | 1.9394 |
| 2.0761 | 33.32 | 16160 | 1.9366 |
| 2.0635 | 33.65 | 16320 | 1.9200 |
| 2.0631 | 33.98 | 16480 | 1.9147 |
| 2.0448 | 34.31 | 16640 | 1.9053 |
| 2.0452 | 34.64 | 16800 | 1.8937 |
| 2.0303 | 34.97 | 16960 | 1.8801 |
| 2.0184 | 35.3 | 17120 | 1.8752 |
| 2.0115 | 35.63 | 17280 | 1.8667 |
| 2.0042 | 35.96 | 17440 | 1.8626 |
| 2.002 | 36.29 | 17600 | 1.8565 |
| 1.9918 | 36.62 | 17760 | 1.8475 |
| 1.9868 | 36.95 | 17920 | 1.8420 |
| 1.9796 | 37.28 | 18080 | 1.8376 |
| 1.976 | 37.61 | 18240 | 1.8318 |
| 1.9647 | 37.94 | 18400 | 1.8225 |
| 1.9561 | 38.27 | 18560 | 1.8202 |
| 1.9544 | 38.6 | 18720 | 1.8084 |
| 1.9454 | 38.93 | 18880 | 1.8057 |
| 1.9333 | 39.26 | 19040 | 1.8030 |
| 1.9411 | 39.59 | 19200 | 1.7966 |
| 1.9289 | 39.92 | 19360 | 1.7865 |
| 1.9261 | 40.25 | 19520 | 1.7815 |
| 1.9207 | 40.58 | 19680 | 1.7881 |
| 1.9164 | 40.91 | 19840 | 1.7747 |
| 1.9152 | 41.24 | 20000 | 1.7786 |
| 1.914 | 41.57 | 20160 | 1.7664 |
| 1.901 | 41.9 | 20320 | 1.7586 |
| 1.8965 | 42.23 | 20480 | 1.7554 |
| 1.8982 | 42.56 | 20640 | 1.7524 |
| 1.8941 | 42.89 | 20800 | 1.7460 |
| 1.8834 | 43.22 | 20960 | 1.7488 |
| 1.8841 | 43.55 | 21120 | 1.7486 |
| 1.8846 | 43.88 | 21280 | 1.7424 |
| 1.8763 | 44.21 | 21440 | 1.7352 |
| 1.8688 | 44.54 | 21600 | 1.7349 |
| 1.8714 | 44.87 | 21760 | 1.7263 |
| 1.8653 | 45.2 | 21920 | 1.7282 |
| 1.8673 | 45.53 | 22080 | 1.7195 |
| 1.8682 | 45.85 | 22240 | 1.7266 |
| 1.8532 | 46.19 | 22400 | 1.7180 |
| 1.8553 | 46.51 | 22560 | 1.7137 |
| 1.8569 | 46.84 | 22720 | 1.7158 |
| 1.8469 | 47.18 | 22880 | 1.7117 |
| 1.845 | 47.5 | 23040 | 1.7031 |
| 1.8475 | 47.83 | 23200 | 1.7089 |
| 1.845 | 48.16 | 23360 | 1.7018 |
| 1.8391 | 48.49 | 23520 | 1.6945 |
| 1.8456 | 48.82 | 23680 | 1.7015 |
| 1.8305 | 49.15 | 23840 | 1.6964 |
| 1.8324 | 49.48 | 24000 | 1.6900 |
| 1.7763 | 49.81 | 24160 | 1.6449 |
| 1.7728 | 50.14 | 24320 | 1.6436 |
| 1.7576 | 50.47 | 24480 | 1.6268 |
| 1.7354 | 50.8 | 24640 | 1.6088 |
| 1.74 | 51.13 | 24800 | 1.6156 |
| 1.7251 | 51.06 | 24960 | 1.6041 |
| 1.719 | 51.39 | 25120 | 1.5938 |
| 1.7257 | 52.12 | 25280 | 1.5983 |
| 1.7184 | 52.45 | 25440 | 1.5919 |
| 1.7093 | 52.78 | 25600 | 1.5848 |
| 1.7114 | 53.11 | 25760 | 1.5922 |
| 1.7076 | 53.44 | 25920 | 1.5843 |
| 1.7 | 53.77 | 26080 | 1.5807 |
| 1.7027 | 54.1 | 26240 | 1.5811 |
| 1.704 | 54.43 | 26400 | 1.5766 |
| 1.6958 | 54.76 | 26560 | 1.5756 |
| 1.6976 | 55.09 | 26720 | 1.5773 |
| 1.6944 | 55.42 | 26880 | 1.5725 |
| 1.6891 | 55.75 | 27040 | 1.5685 |
| 1.6936 | 56.08 | 27200 | 1.5750 |
| 1.6893 | 56.41 | 27360 | 1.5696 |
| 1.6886 | 56.74 | 27520 | 1.5643 |
| 1.6936 | 57.07 | 27680 | 1.5691 |
| 1.6883 | 57.4 | 27840 | 1.5718 |
| 1.6832 | 57.73 | 28000 | 1.5660 |
| 1.9222 | 28.03 | 28160 | 1.7107 |
| 1.7838 | 28.19 | 28320 | 1.6345 |
| 1.7843 | 28.36 | 28480 | 1.6445 |
| 1.7809 | 28.52 | 28640 | 1.6461 |
| 1.783 | 28.69 | 28800 | 1.6505 |
| 1.7869 | 28.85 | 28960 | 1.6364 |
| 1.778 | 29.02 | 29120 | 1.6363 |
| 1.775 | 29.18 | 29280 | 1.6364 |
| 1.7697 | 29.34 | 29440 | 1.6345 |
| 1.7719 | 29.51 | 29600 | 1.6261 |
| 1.7454 | 61.16 | 29760 | 1.6099 |
| 1.741 | 61.49 | 29920 | 1.6006 |
| 1.7314 | 62.02 | 30080 | 1.6041 |
| 1.7314 | 62.35 | 30240 | 1.5914 |
| 1.7246 | 62.68 | 30400 | 1.5917 |
| 1.7642 | 63.01 | 30560 | 1.5923 |
| 1.7221 | 63.34 | 30720 | 1.5857 |
| 1.7185 | 63.67 | 30880 | 1.5836 |
| 1.7022 | 64.0 | 31040 | 1.5836 |
| 1.7107 | 64.33 | 31200 | 1.5739 |
| 1.7082 | 64.66 | 31360 | 1.5724 |
| 1.7055 | 64.99 | 31520 | 1.5734 |
| 1.7019 | 65.32 | 31680 | 1.5707 |
| 1.699 | 65.65 | 31840 | 1.5649 |
| 1.6963 | 65.98 | 32000 | 1.5685 |
| 1.6935 | 66.31 | 32160 | 1.5673 |
| 1.6899 | 66.64 | 32320 | 1.5648 |
| 1.6869 | 66.97 | 32480 | 1.5620 |
| 1.6867 | 67.3 | 32640 | 1.5564 |
| 1.6861 | 67.63 | 32800 | 1.5552 |
| 1.6831 | 67.96 | 32960 | 1.5496 |
| 1.6778 | 68.29 | 33120 | 1.5479 |
| 1.6742 | 68.62 | 33280 | 1.5501 |
| 1.6737 | 68.95 | 33440 | 1.5441 |
| 1.6725 | 69.28 | 33600 | 1.5399 |
| 1.6683 | 69.61 | 33760 | 1.5398 |
| 1.6689 | 69.94 | 33920 | 1.5374 |
| 1.6634 | 70.27 | 34080 | 1.5385 |
| 1.6638 | 70.6 | 34240 | 1.5332 |
| 1.6614 | 70.93 | 34400 | 1.5329 |
| 1.6544 | 71.26 | 34560 | 1.5292 |
| 1.6532 | 71.59 | 34720 | 1.5268 |
| 1.6511 | 71.92 | 34880 | 1.5225 |
| 1.6506 | 72.25 | 35040 | 1.5219 |
| 1.6496 | 72.58 | 35200 | 1.5202 |
| 1.6468 | 72.91 | 35360 | 1.5199 |
| 1.6424 | 73.24 | 35520 | 1.5220 |
| 1.642 | 73.57 | 35680 | 1.5145 |
| 1.6415 | 73.9 | 35840 | 1.5139 |
| 1.6419 | 74.23 | 36000 | 1.5120 |
| 1.633 | 74.56 | 36160 | 1.5113 |
| 1.6354 | 74.89 | 36320 | 1.5139 |
| 1.6312 | 75.22 | 36480 | 1.5068 |
| 1.6298 | 75.55 | 36640 | 1.5056 |
| 1.6268 | 75.88 | 36800 | 1.5000 |
| 1.6277 | 76.21 | 36960 | 1.5033 |
| 1.6198 | 76.54 | 37120 | 1.4988 |
| 1.6246 | 76.87 | 37280 | 1.4978 |
| 1.6184 | 77.2 | 37440 | 1.4966 |
| 1.6187 | 77.53 | 37600 | 1.4954 |
| 1.6192 | 77.85 | 37760 | 1.4951 |
| 1.6134 | 78.19 | 37920 | 1.4936 |
| 1.6176 | 78.51 | 38080 | 1.4908 |
| 1.6103 | 78.84 | 38240 | 1.4904 |
| 1.612 | 79.18 | 38400 | 1.4919 |
| 1.611 | 79.5 | 38560 | 1.4891 |
| 1.6082 | 79.83 | 38720 | 1.4837 |
| 1.6047 | 80.16 | 38880 | 1.4859 |
| 1.6058 | 80.49 | 39040 | 1.4814 |
| 1.602 | 80.82 | 39200 | 1.4837 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DemangeJeremy/4-sentiments-with-flaubert | DemangeJeremy | 2022-10-21T13:46:12Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"flaubert",
"text-classification",
"sentiments",
"french",
"flaubert-large",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:04Z | ---
language: fr
tags:
- sentiments
- text-classification
- flaubert
- french
- flaubert-large
---
# FlauBERT model for detecting 4 sentiments (mixed, negative, objective, positive)
### How to use it?
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
loaded_tokenizer = AutoTokenizer.from_pretrained('flaubert/flaubert_large_cased')
loaded_model = AutoModelForSequenceClassification.from_pretrained("DemangeJeremy/4-sentiments-with-flaubert")
nlp = pipeline('sentiment-analysis', model=loaded_model, tokenizer=loaded_tokenizer)
print(nlp("Je suis plutôt confiant."))
```
```
[{'label': 'OBJECTIVE', 'score': 0.3320835530757904}]
```
## Model evaluation results
| Epoch | Validation Loss | Samples Per Second |
|:------:|:--------------:|:------------------:|
| 1 | 2.219246 | 49.476000 |
| 2 | 1.883753 | 47.259000 |
| 3 | 1.747969 | 44.957000 |
| 4 | 1.695606 | 43.872000 |
| 5 | 1.641470 | 45.726000 |
## Citation
If you use this model, please use the following citation:
> Jérémy Demange, Four sentiments with FlauBERT, (2021), Hugging Face repository, <https://huggingface.co/DemangeJeremy/4-sentiments-with-flaubert>
|
orkg/orkgnlp-predicates-clustering | orkg | 2022-10-21T13:40:57Z | 0 | 0 | null | [
"onnx",
"license:mit",
"region:us"
]
| null | 2022-05-09T08:02:12Z | ---
license: mit
---
This Repository includes the files required to run the `Predicates Clustering` ORKG-NLP service.
Please check [this article](https://orkg-nlp-pypi.readthedocs.io/en/latest/services/services.html) for more details about the service.
The [Scikit-Learn](https://scikit-learn.org/stable/) models are converted using [skl2onnx](https://github.com/onnx/sklearn-onnx) and may not include all original scikit-learn functionalities. |
asi/albert-act-base | asi | 2022-10-21T13:26:29Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"albert_act",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1603.08983",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-10-11T20:33:26Z | ---
license: apache-2.0
language: en
datasets:
- wikipedia
- bookcorpus
model-index:
- name: asi/albert-act-base
results:
- task:
type: text-classification
name: CoLA
dataset:
type: glue
name: CoLA # General Language Understanding Evaluation benchmark (GLUE)
split: cola
metrics:
- type: matthews_correlation
value: 36.7
name: Matthew's Corr
- task:
type: text-classification
name: SST-2
dataset:
type: glue
name: SST-2 # The Stanford Sentiment Treebank
split: sst2
metrics:
- type: accuracy
value: 87.8
name: Accuracy
- task:
type: text-classification
name: MRPC
dataset:
type: glue
name: MRPC # Microsoft Research Paraphrase Corpus
split: mrpc
metrics:
- type: accuracy
value: 81.4
name: Accuracy
- type: f1
value: 86.5
name: F1
- task:
type: text-similarity
name: STS-B
dataset:
type: glue
name: STS-B # Semantic Textual Similarity Benchmark
split: stsb
metrics:
- type: spearmanr
value: 83.0
name: Spearman Corr
- type: pearsonr
value: 84.2
name: Pearson Corr
- task:
type: text-classification
name: QQP
dataset:
type: glue
name: QQP # Quora Question Pairs
split: qqp
metrics:
- type: f1
value: 68.5
name: F1
- type: accuracy
value: 87.7
name: Accuracy
- task:
type: text-classification
name: MNLI-m
dataset:
type: glue
name: MNLI-m # MultiNLI Matched
split: mnli_matched
metrics:
- type: accuracy
value: 79.9
name: Accuracy
- task:
type: text-classification
name: MNLI-mm
dataset:
type: glue
name: MNLI-mm # MultiNLI Matched
split: mnli_mismatched
metrics:
- type: accuracy
value: 79.2
name: Accuracy
- task:
type: text-classification
name: QNLI
dataset:
type: glue
name: QNLI # Question NLI
split: qnli
metrics:
- type: accuracy
value: 89.0
name: Accuracy
- task:
type: text-classification
name: RTE
dataset:
type: glue
name: RTE # Recognizing Textual Entailment
split: rte
metrics:
- type: accuracy
value: 63.0
name: Accuracy
- task:
type: text-classification
name: WNLI
dataset:
type: glue
name: WNLI # Winograd NLI
split: wnli
metrics:
- type: accuracy
value: 65.1
name: Accuracy
---
# Adaptive Depth Transformers
Implementation of the paper "How Many Layers and Why? An Analysis of the Model Depth in Transformers". In this study, we investigate the role of the multiple layers in deep transformer models. We design a variant of ALBERT that dynamically adapts the number of layers for each token of the input.
## Model architecture
We augment a multi-layer transformer encoder with a halting mechanism, which dynamically adjusts the number of layers for each token.
We directly adapted this mechanism from Graves ([2016](#graves-2016)). At each iteration, we compute a probability for each token to stop updating its state.
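For intuition only, here is a self-contained sketch of that ACT-style halting loop (simplified from Graves ([2016](#graves-2016)); all class and variable names are illustrative, and this is not the repository's implementation):
```python
import torch
import torch.nn as nn

class ACTEncoderSketch(nn.Module):
    """Each token is refined by a shared layer until its accumulated halting
    probability reaches 1 - epsilon; its output is the halting-weighted mix of
    the intermediate states, and `updates` counts how many layers it used."""

    def __init__(self, hidden_size: int = 768, n_heads: int = 12,
                 max_layers: int = 12, epsilon: float = 0.01):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(hidden_size, n_heads, batch_first=True)
        self.halting_unit = nn.Linear(hidden_size, 1)
        self.max_layers = max_layers
        self.epsilon = epsilon

    def forward(self, hidden):                           # hidden: (batch, seq, dim)
        cumulative = hidden.new_zeros(hidden.shape[:2])  # accumulated halting prob
        output = torch.zeros_like(hidden)                # halting-weighted states
        updates = hidden.new_zeros(hidden.shape[:2])     # layers used per token
        for _ in range(self.max_layers):
            running = (cumulative < 1.0 - self.epsilon).float()
            if running.sum() == 0:
                break
            hidden = self.layer(hidden)
            p = torch.sigmoid(self.halting_unit(hidden)).squeeze(-1) * running
            # Tokens that would overshoot spend only their remaining budget and halt.
            remainder = (1.0 - cumulative) * running
            weight = torch.where(cumulative + p < 1.0 - self.epsilon, p, remainder)
            output = output + weight.unsqueeze(-1) * hidden
            cumulative = cumulative + weight
            updates = updates + running
        return output, updates

states, n_updates = ACTEncoderSketch()(torch.randn(1, 16, 768))
```
A trained variant would additionally penalize the expected number of updates (the ponder-style cost of Graves ([2016](#graves-2016))) so that tokens learn to halt early; the sketch only shows the forward loop.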
## Model use
The architecture is not yet directly included in the Transformers library. The code used for pre-training is available in the following [github repository](https://github.com/AntoineSimoulin/adaptive-depth-transformers). So you should install the code implementation first:
```bash
pip install git+https://github.com/AntoineSimoulin/adaptive-depth-transformers
```
Then you can use the model directly.
```python
from act import AlbertActConfig, AlbertActModel, TFAlbertActModel
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('asi/albert-act-base')
model = AlbertActModel.from_pretrained('asi/albert-act-base')
_ = model.eval()
inputs = tokenizer("a lump in the middle of the monkeys stirred and then fell quiet .", return_tensors="pt")
outputs = model(**inputs)
outputs.updates
# tensor([[[[15., 9., 10., 7., 3., 8., 5., 7., 12., 10., 6., 8., 8., 9., 5., 8.]]]])
```
## Citations
### BibTeX entry and citation info
If you use our iterative transformer model for your scientific publication or your industrial applications, please cite the following [paper](https://aclanthology.org/2021.acl-srw.23/):
```bibtex
@inproceedings{simoulin-crabbe-2021-many,
title = "How Many Layers and Why? {A}n Analysis of the Model Depth in Transformers",
author = "Simoulin, Antoine and
Crabb{\'e}, Benoit",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-srw.23",
doi = "10.18653/v1/2021.acl-srw.23",
pages = "221--228",
}
```
### References
><div id="graves-2016">Alex Graves. 2016. <a href="https://arxiv.org/abs/1603.08983" target="_blank">Adaptive computation time for recurrent neural networks.</a> CoRR, abs/1603.08983.</div>
|
huggingtweets/tszzl | huggingtweets | 2022-10-21T12:32:06Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/tszzl/1666355521581/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1572784789291401216/1WrwslUF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">roon</div>
<div style="text-align: center; font-size: 14px;">@tszzl</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from roon.
| Data | roon |
| --- | --- |
| Tweets downloaded | 3207 |
| Retweets | 779 |
| Short tweets | 375 |
| Tweets kept | 2053 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/nr9oggv1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tszzl's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/12g6sck7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/12g6sck7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tszzl')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Yinxing/ddpm-butterflies-128 | Yinxing | 2022-10-21T12:05:23Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-10-21T10:51:28Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sampling sketch, assuming the standard DDPMPipeline API:
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("Yinxing/ddpm-butterflies-128")
image = pipeline().images[0]  # list of PIL images -> take the first
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Yinxing/ddpm-butterflies-128/tensorboard?#scalars)
|
ashish23993/t5-small-finetuned-xsum-a | ashish23993 | 2022-10-21T10:48:19Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-21T10:43:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-xsum-a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-a
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 8 | 2.2554 | 21.1449 | 9.0713 | 17.7765 | 20.1134 | 19.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
hezzze/a2c-AntBulletEnv-v0 | hezzze | 2022-10-21T09:34:26Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-21T09:33:16Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1658.74 +/- 204.55
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("hezzze/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
asapcreditrepairusa/Credit-Repair-Houston | asapcreditrepairusa | 2022-10-21T09:33:48Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-10-21T09:33:11Z | ASAP Credit Repair has two critical missions, 1) to provide an effective and inexpensive option for credit repair and 2) to provide the best customer service experience along the way. We hope you choose [ASAP Credit Repair](https://asapcreditrepairusa.com) for your future credit repair needs. |
nicolarici/LawBERT-IT_trained | nicolarici | 2022-10-21T08:00:23Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"pretraining",
"endpoints_compatible",
"region:us"
]
| null | 2022-10-21T07:45:05Z | **LawBERT-IT**
An Italian BERT model for the legal domain.
The code used to develop and train the model, together with the dataset used to extract the new words and continue training the BERT model, is available on [GitHub](https://github.com/nicolarici/LawBERT-IT). |
teacookies/autotrain-21102022-cert-1827562840 | teacookies | 2022-10-21T07:41:52Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-21102022-cert",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-21T07:29:56Z | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-21102022-cert
co2_eq_emissions:
emissions: 19.94429730071814
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1827562840
- CO2 Emissions (in grams): 19.9443
## Validation Metrics
- Loss: 0.028
- Accuracy: 0.992
- Precision: 0.820
- Recall: 0.885
- F1: 0.851
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-21102022-cert-1827562840
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-21102022-cert-1827562840", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-21102022-cert-1827562840", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Bugpie/dummy-model | Bugpie | 2022-10-21T07:04:44Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-19T11:49:58Z | ---
language: fr
license: mit
datasets:
- oscar
---
## Model description
CamemBERT is a state-of-the-art language model for French based on the RoBERTa model. It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains.
## Evaluation
The model developers evaluated CamemBERT using four different downstream tasks for French: part-of-speech (POS) tagging, dependency parsing, named entity recognition (NER) and natural language inference (NLI).
## Limitations and bias
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
This model was pretrained on a subcorpus of the OSCAR multilingual corpus. Some of the limitations and risks associated with the OSCAR dataset, which are further detailed in the [OSCAR dataset card](https://huggingface.co/datasets/oscar), include the following:
> The quality of some OSCAR sub-corpora might be lower than expected, specifically for the lowest-resource languages.
> Constructed from Common Crawl, Personal and sensitive information might be present.
## Training data
OSCAR or Open Super-large Crawled Aggregated coRpus is a multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the Ungoliant architecture.
## How to use
- **Filling masks using pipeline**
```python
>>> from transformers import pipeline
>>> camembert_fill_mask = pipeline("fill-mask", model="camembert-base")
>>> results = camembert_fill_mask("Le camembert est <mask> :)")
>>> results
[{'score': 0.49091097712516785,
'token': 7200,
'token_str': 'délicieux',
'sequence': 'Le camembert est délicieux :)'},
{'score': 0.1055697426199913,
'token': 2183,
'token_str': 'excellent',
'sequence': 'Le camembert est excellent :)'},
{'score': 0.03453319892287254,
'token': 26202,
'token_str': 'succulent',
'sequence': 'Le camembert est succulent :)'},
{'score': 0.03303128108382225,
'token': 528,
'token_str': 'meilleur',
'sequence': 'Le camembert est meilleur :)'},
{'score': 0.030076386407017708,
'token': 1654,
'token_str': 'parfait',
'sequence': 'Le camembert est parfait :)'}]
```
- **Extract contextual embedding features from Camembert output**
```python
import torch
from transformers import CamembertTokenizer

# The tokenizer is not defined in the original snippet; load it explicitly.
tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
>>> tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
>>> encoded_sentence = tokenizer.encode(tokenized_sentence)
# Can be done in one step: tokenizer.encode("J'aime le camembert !")
>>> tokenized_sentence
['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']
>>> encoded_sentence
[5, 121, 11, 660, 16, 730, 25543, 110, 83, 6]
```

[more about](https://youtu.be/dMTy6C4UiQ4) |
nlp-waseda/roberta-large-japanese-with-auto-jumanpp | nlp-waseda | 2022-10-21T06:55:27Z | 1,733 | 3 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-15T05:40:40Z | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
mask_token: "[MASK]"
widget:
- text: "早稲田大学で自然言語処理を[MASK]する。"
---
# nlp-waseda/roberta-large-japanese-with-auto-jumanpp
## Model description
This is a Japanese RoBERTa large model pretrained on Japanese Wikipedia and the Japanese portion of CC-100.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-large-japanese-with-auto-jumanpp")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-large-japanese-with-auto-jumanpp")
sentence = '早稲田大学で自然言語処理を[MASK]する。'
encoding = tokenizer(sentence, return_tensors='pt')
# One way to predict the [MASK] token with the loaded model:
import torch
with torch.no_grad():
    logits = model(**encoding).logits
mask_index = (encoding.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = logits[0, mask_index[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top5))
```
You can fine-tune this model on downstream tasks.
## Tokenization
`BertJapaneseTokenizer` now supports automatic tokenization for [Juman++](https://github.com/ku-nlp/jumanpp). However, if your dataset is large, tokenization may take a long time, since `BertJapaneseTokenizer` does not yet support fast tokenization. You can still run the Juman++ tokenization yourself and use the old model [nlp-waseda/roberta-large-japanese](https://huggingface.co/nlp-waseda/roberta-large-japanese), as sketched below.
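A minimal sketch of that manual route, assuming `pyknp` is installed and the `jumanpp` binary is on your PATH (the example sentence is illustrative):
```python
from pyknp import Juman
from transformers import AutoTokenizer

jumanpp = Juman()
sentence = "早稲田大学で自然言語処理を研究する。"
words = [m.midasi for m in jumanpp.analysis(sentence).mrph_list()]

# The older checkpoint expects whitespace-separated Juman++ words as input.
tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-large-japanese")
encoding = tokenizer(" ".join(words), return_tensors="pt")
```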
Juman++ 2.0.0-rc3 was used for pretraining. Each word is tokenized into tokens by [sentencepiece](https://github.com/google/sentencepiece).
## Vocabulary
The vocabulary consists of 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
## Training procedure
This model was trained on Japanese Wikipedia (as of 20210920) and the Japanese portion of CC-100. It took two weeks using eight NVIDIA A100 GPUs.
The following hyperparameters were used during pretraining:
- learning_rate: 6e-5
- per_device_train_batch_size: 103
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 5
- total_train_batch_size: 4120
- max_seq_length: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-6
- lr_scheduler_type: linear
- training_steps: 670000
- warmup_steps: 10000
- mixed_precision_training: Native AMP
## Performance on JGLUE
See the [Baseline Scores](https://github.com/yahoojapan/JGLUE#baseline-scores) of JGLUE.
|
salascorp/distilroberta-base-mrpc-glue-oscar-salas2 | salascorp | 2022-10-21T06:40:47Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-21T06:36:59Z | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
model-index:
- name: distilroberta-base-mrpc-glue-oscar-salas2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue-oscar-salas2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cpu
- Datasets 2.6.1
- Tokenizers 0.13.1
|
NinedayWang/PolyCoder-0.4B | NinedayWang | 2022-10-21T06:03:41Z | 97 | 4 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"arxiv:2202.13169",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-20T09:45:17Z | This is a PolyCoder model with **0.4B** parameters,
presented in the paper ["A Systematic Evaluation of Large Language Models of Code"](https://arxiv.org/pdf/2202.13169.pdf) (MAPS'2022 and ICLR'2022 Workshop Deep Learning 4 Code).
The model was trained on **249 GB** of code across **12** programming languages.
**Note** - this model requires `transformers` version of at least **4.23.0**:
```
pip install transformers==4.23.0
```
For more information, see: [https://github.com/VHellendoorn/Code-LMs](https://github.com/VHellendoorn/Code-LMs)
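A minimal generation sketch using the standard `transformers` causal-LM API (the prompt and decoding settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NinedayWang/PolyCoder-0.4B")
model = AutoModelForCausalLM.from_pretrained("NinedayWang/PolyCoder-0.4B")

prompt = "def binary_search(arr, target):"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```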
If you use this model, please cite:
```
@inproceedings{
xu2022polycoder,
title={A Systematic Evaluation of Large Language Models of Code},
author={Frank F. Xu and Uri Alon and Graham Neubig and Vincent Josua Hellendoorn},
booktitle={Deep Learning for Code Workshop},
year={2022},
url={https://openreview.net/forum?id=SLcEnoObJZq}
}
``` |
NinedayWang/PolyCoder-2.7B | NinedayWang | 2022-10-21T06:03:23Z | 314 | 50 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"arxiv:2202.13169",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-20T09:47:34Z | This is a PolyCoder model with **2.7B** parameters,
presented in the paper ["A Systematic Evaluation of Large Language Models of Code"](https://arxiv.org/pdf/2202.13169.pdf) (MAPS'2022 and ICLR'2022 Workshop Deep Learning 4 Code).
The model was trained on **249 GB** of code across **12** programming languages.
**Note** - this model requires `transformers` version of at least **4.23.0**:
```
pip install transformers==4.23.0
```
For more information, see: [https://github.com/VHellendoorn/Code-LMs](https://github.com/VHellendoorn/Code-LMs)
If you use this model, please cite:
```
@inproceedings{
xu2022polycoder,
title={A Systematic Evaluation of Large Language Models of Code},
author={Frank F. Xu and Uri Alon and Graham Neubig and Vincent Josua Hellendoorn},
booktitle={Deep Learning for Code Workshop},
year={2022},
url={https://openreview.net/forum?id=SLcEnoObJZq}
}
``` |
jo-kwsm/distilbert-base-uncased-finetuned-emotion | jo-kwsm | 2022-10-21T06:02:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-21T03:31:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9253582087556043
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2244
- Accuracy: 0.9255
- F1: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8602 | 1.0 | 250 | 0.3344 | 0.901 | 0.8979 |
| 0.263 | 2.0 | 500 | 0.2244 | 0.9255 | 0.9254 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
stanford-crfm/levanter-gpt | stanford-crfm | 2022-10-21T05:33:26Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-03T03:16:18Z | ---
pipeline_tag: text-generation
widget:
- text: You could not prevent a thunderstorm, but you could use
---
Levanter GPT is trained on OpenWebText2.
More complete model card will be made in the future. |
api19750904/VM-Fast_Check | api19750904 | 2022-10-21T04:30:55Z | 72 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-21T04:30:42Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: VM-Fast_Check
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9101123809814453
---
# VM-Fast_Check
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
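To run the classifier itself, a minimal sketch using the `transformers` image-classification pipeline (the image path is illustrative):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="api19750904/VM-Fast_Check")
print(classifier("path/to/image.jpg"))  # returns labels with scores
```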
## Example Images
#### person drinking

#### person smoking

#### swimsuit boy

#### swimsuit girl
 |
edbeeching/atari_2B_atari_yarsrevenge_2222 | edbeeching | 2022-10-21T04:26:27Z | 3 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-21T04:25:25Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_yarsrevenge
type: atari_yarsrevenge
metrics:
- type: mean_reward
value: 336431.19 +/- 148269.98
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_yarsrevenge** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/atari_2B_atari_wizardofwor_2222 | edbeeching | 2022-10-21T04:21:31Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-21T04:20:36Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_wizardofwor
type: atari_wizardofwor
metrics:
- type: mean_reward
value: 61420.00 +/- 23105.79
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_wizardofwor** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
huggingtweets/elonmusk-mar15sa-sergiorocks | huggingtweets | 2022-10-21T04:07:50Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-21T04:06:32Z | ---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-mar15sa-sergiorocks/1666325239514/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1580062742693699584/RJ5EI7PS_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1142324885550751744/wVNatx7J_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/566329118489194496/f_ALTi7v_400x400.jpeg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Sergio Pereira 🚀 & Marissa Goldberg</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-mar15sa-sergiorocks</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Sergio Pereira 🚀 & Marissa Goldberg.
| Data | Elon Musk | Sergio Pereira 🚀 | Marissa Goldberg |
| --- | --- | --- | --- |
| Tweets downloaded | 3200 | 3250 | 3248 |
| Retweets | 133 | 18 | 301 |
| Short tweets | 949 | 54 | 110 |
| Tweets kept | 2118 | 3178 | 2837 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ahul38aq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-mar15sa-sergiorocks's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1r3916r2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1r3916r2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-mar15sa-sergiorocks')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
edbeeching/atari_2B_atari_timepilot_2222 | edbeeching | 2022-10-21T03:38:54Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-21T03:37:51Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_timepilot
type: atari_timepilot
metrics:
- type: mean_reward
value: 88855.00 +/- 25100.17
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_timepilot** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/atari_2B_atari_tennis_2222 | edbeeching | 2022-10-21T03:32:46Z | 1 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-21T03:31:38Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_tennis
type: atari_tennis
metrics:
- type: mean_reward
value: 23.00 +/- 1.10
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_tennis** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
huggingtweets/levelsio | huggingtweets | 2022-10-21T03:28:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-21T03:27:24Z | ---
language: en
thumbnail: http://www.huggingtweets.com/levelsio/1666322920443/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1562107516066095106/IUccJ78Y_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">@levelsio</div>
<div style="text-align: center; font-size: 14px;">@levelsio</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from @levelsio.
| Data | @levelsio |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 173 |
| Short tweets | 535 |
| Tweets kept | 2535 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/tof4zha8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @levelsio's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/lcpeawur) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/lcpeawur/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/levelsio')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Adipta/setfit-model-test-2 | Adipta | 2022-10-21T02:39:05Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-21T02:38:54Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
xxxxxxxxxxxxxxxxxxxxxx/model-y | xxxxxxxxxxxxxxxxxxxxxx | 2022-10-21T01:49:43Z | 0 | 0 | null | [
"license:wtfpl",
"region:us"
]
| null | 2022-10-17T07:21:03Z | ---
license: wtfpl
---
# wwww
```typescript
import React, { CSSProperties, PropsWithRef } from 'react';
import MarkdownPreview, { MarkdownPreviewProps } from '@uiw/react-markdown-preview';
import { ITextAreaProps } from './components/TextArea';
import { ICommand } from './commands';
import { ContextStore, PreviewType } from './Context';
import './index.less';
export interface IProps {
prefixCls?: string;
className?: string;
}
export interface MDEditorProps extends Omit<React.HTMLAttributes<HTMLDivElement>, 'onChange'>, IProps {
/**
* The Markdown value.
*/
value?: string;
/**
* Event handler for the `onChange` event.
*/
onChange?: (value?: string, event?: React.ChangeEvent<HTMLTextAreaElement>, state?: ContextStore) => void;
/**
* editor height change listener
*/
onHeightChange?: (value?: CSSProperties['height'], oldValue?: CSSProperties['height'], state?: ContextStore) => void;
/**
* Can be used to make `Markdown Editor` focus itself on initialization. Defaults to on.
* it will be set to true when either the source `textarea` is focused,
* or it has an `autofocus` attribute and no other element is focused.
*/
autoFocus?: ITextAreaProps['autoFocus'];
/**
* The height of the editor.
* ⚠️ `Dragbar` is invalid when **`height`** parameter percentage.
*/
height?: CSSProperties['height'];
/**
* Custom toolbar height
* @default 29px
*
* @deprecated toolbar height adaptive: https://github.com/uiwjs/react-md-editor/issues/427
*
*/
toolbarHeight?: number;
/**
* Show drag and drop tool. Set the height of the editor.
*/
visibleDragbar?: boolean;
/**
* @deprecated use `visibleDragbar`
*/
visiableDragbar?: boolean;
/**
* Show markdown preview.
*/
preview?: PreviewType;
/**
* Full screen display editor.
*/
fullscreen?: boolean;
/**
* Disable `fullscreen` setting body styles
*/
overflow?: boolean;
/**
* Maximum drag height. `visibleDragbar=true`
*/
maxHeight?: number;
/**
* Minimum drag height. `visibleDragbar=true`
*/
minHeight?: number;
/**
* This is reset [react-markdown](https://github.com/rexxars/react-markdown) settings.
*/
previewOptions?: Omit<MarkdownPreviewProps, 'source'>;
/**
* Set the `textarea` related props.
*/
textareaProps?: ITextAreaProps;
/**
* Use div to replace TextArea or re-render TextArea
* @deprecated Please use ~~`renderTextarea`~~ -> `components`
*/
renderTextarea?: ITextAreaProps['renderTextarea'];
/**
* re-render element
*/
components?: {
/** Use div to replace TextArea or re-render TextArea */
textarea?: ITextAreaProps['renderTextarea'];
/**
* Override the default command element
* _`toolbar`_ < _`command[].render`_
*/
toolbar?: ICommand['render'];
/** Custom markdown preview */
preview?: (source: string, state: ContextStore, dispath: React.Dispatch<ContextStore>) => JSX.Element;
};
/**
* Disable editing area code highlighting. The value is `false`, which increases the editing speed.
* @default true
*/
highlightEnable?: boolean;
/**
* The number of characters to insert when pressing tab key.
* Default `2` spaces.
*/
tabSize?: number;
/**
* If `false`, the `tab` key inserts a tab character into the textarea. If `true`, the `tab` key executes default behavior e.g. focus shifts to next element.
*/
defaultTabEnable?: boolean;
/**
* You can create your own commands or reuse existing commands.
*/
commands?: ICommand[];
/**
* Filter or modify your commands.
* https://github.com/uiwjs/react-md-editor/issues/296
*/
commandsFilter?: (command: ICommand, isExtra: boolean) => false | ICommand;
/**
* You can create your own commands or reuse existing commands.
*/
extraCommands?: ICommand[];
/**
* Hide the tool bar
*/
hideToolbar?: boolean;
/** Whether to enable scrolling */
enableScroll?: boolean;
/** Toolbar on bottom */
toolbarBottom?: boolean;
}
declare type Editor = React.FC<PropsWithRef<MDEditorProps>> & {
Markdown: typeof MarkdownPreview;
};
declare const mdEditor: Editor;
export default mdEditor;
```
## asdjk
### lskjdflskj
as
d
s
d
|
edbeeching/atari_2B_atari_yarsrevenge_1111 | edbeeching | 2022-10-21T00:01:51Z | 7 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-21T00:00:47Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_yarsrevenge
type: atari_yarsrevenge
metrics:
- type: mean_reward
value: 224390.75 +/- 197367.31
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **atari_yarsrevenge** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/atari_2B_atari_videopinball_1111 | edbeeching | 2022-10-20T23:54:10Z | 6 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-20T23:52:57Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_videopinball
type: atari_videopinball
metrics:
- type: mean_reward
value: 372372.91 +/- 274249.66
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **atari_videopinball** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
salascorp/distilroberta-base-mrpc-glue-oscar-salas | salascorp | 2022-10-20T22:48:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-20T01:44:30Z | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
model-index:
- name: distilroberta-base-mrpc-glue-oscar-salas
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue-oscar-salas
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6456
- eval_accuracy: 0.8260
- eval_f1: 0.8795
- eval_runtime: 30.3289
- eval_samples_per_second: 13.453
- eval_steps_per_second: 1.682
- epoch: 1.09
- step: 500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cpu
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/jiswooning-the3ammusician | huggingtweets | 2022-10-20T22:27:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-20T22:26:00Z | ---
language: en
thumbnail: http://www.huggingtweets.com/jiswooning-the3ammusician/1666304830215/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1560736534143422465/3oAu6oCD_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521185553382883334/fHjvh84L_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">TOR Kate & K8 misses KARD</div>
<div style="text-align: center; font-size: 14px;">@jiswooning-the3ammusician</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from TOR Kate & K8 misses KARD.
| Data | TOR Kate | K8 misses KARD |
| --- | --- | --- |
| Tweets downloaded | 3234 | 3193 |
| Retweets | 1038 | 1194 |
| Short tweets | 310 | 208 |
| Tweets kept | 1886 | 1791 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1vcg0753/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jiswooning-the3ammusician's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1plbf2ii) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1plbf2ii/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jiswooning-the3ammusician')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jayanta/cvt-13-384-22k-fv-finetuned-memes | jayanta | 2022-10-20T22:05:58Z | 42 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"cvt",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-20T21:40:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: cvt-13-384-22k-fv-finetuned-memes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8315301391035549
- name: Precision
type: precision
value: 0.8302128280229624
- name: Recall
type: recall
value: 0.8315301391035549
- name: F1
type: f1
value: 0.8292026505769348
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cvt-13-384-22k-fv-finetuned-memes
This model is a fine-tuned version of [microsoft/cvt-13-384-22k](https://huggingface.co/microsoft/cvt-13-384-22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5761
- Accuracy: 0.8315
- Precision: 0.8302
- Recall: 0.8315
- F1: 0.8292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.3821 | 0.99 | 20 | 1.2780 | 0.4969 | 0.5083 | 0.4969 | 0.4458 |
| 1.0785 | 1.99 | 40 | 0.8633 | 0.6669 | 0.6658 | 0.6669 | 0.6500 |
| 0.8862 | 2.99 | 60 | 0.7110 | 0.7218 | 0.7258 | 0.7218 | 0.7013 |
| 0.665 | 3.99 | 80 | 0.5515 | 0.8045 | 0.8137 | 0.8045 | 0.8050 |
| 0.6056 | 4.99 | 100 | 0.5956 | 0.7960 | 0.8041 | 0.7960 | 0.7846 |
| 0.4779 | 5.99 | 120 | 0.6229 | 0.7937 | 0.7945 | 0.7937 | 0.7857 |
| 0.4554 | 6.99 | 140 | 0.5355 | 0.8099 | 0.8126 | 0.8099 | 0.8086 |
| 0.4249 | 7.99 | 160 | 0.5447 | 0.8269 | 0.8275 | 0.8269 | 0.8236 |
| 0.4313 | 8.99 | 180 | 0.5530 | 0.8153 | 0.8140 | 0.8153 | 0.8132 |
| 0.423 | 9.99 | 200 | 0.5346 | 0.8238 | 0.8230 | 0.8238 | 0.8223 |
| 0.3997 | 10.99 | 220 | 0.5413 | 0.8338 | 0.8347 | 0.8338 | 0.8338 |
| 0.4095 | 11.99 | 240 | 0.5999 | 0.8207 | 0.8231 | 0.8207 | 0.8177 |
| 0.3979 | 12.99 | 260 | 0.5632 | 0.8284 | 0.8255 | 0.8284 | 0.8250 |
| 0.3408 | 13.99 | 280 | 0.5725 | 0.8207 | 0.8198 | 0.8207 | 0.8196 |
| 0.3828 | 14.99 | 300 | 0.5631 | 0.8277 | 0.8258 | 0.8277 | 0.8260 |
| 0.3595 | 15.99 | 320 | 0.6005 | 0.8308 | 0.8297 | 0.8308 | 0.8275 |
| 0.3789 | 16.99 | 340 | 0.5840 | 0.8300 | 0.8271 | 0.8300 | 0.8273 |
| 0.3545 | 17.99 | 360 | 0.5983 | 0.8246 | 0.8226 | 0.8246 | 0.8222 |
| 0.3472 | 18.99 | 380 | 0.5795 | 0.8416 | 0.8382 | 0.8416 | 0.8390 |
| 0.355 | 19.99 | 400 | 0.5761 | 0.8315 | 0.8302 | 0.8315 | 0.8292 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1.dev0
- Tokenizers 0.13.1
|
imodels/gpt-neo-2.7B-titles | imodels | 2022-10-20T21:17:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-17T18:36:43Z | ---
license: apache-2.0
widget:
- text: "2021\n\n"
---
Full code and details at https://github.com/csinva/gpt-paper-title-generator
**Model**
- finetuned starting from the [gpt-neo-2.7B checkpoint](https://huggingface.co/EleutherAI/gpt-neo-2.7B)
- for training details see [the training script](https://github.com/csinva/gpt-paper-title-generator/blob/0157f26be9b0763b4ea6480e5b149fdb8dff4626/gptneo/02_finetune_hf.py)
- inference
- should prepend with a year and two newlines before querying for a title, e.g. `2022\n\n`
```python
from transformers import AutoModelForCausalLM, pipeline, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("csinva/gpt-neo-2.7B-titles")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer)
pipe('2022\n\n')
```
**Data**
- all [papers on arXiv](https://www.kaggle.com/datasets/Cornell-University/arxiv) in the categories cs.AI, cs.LG, stat.ML
- date cutoff: only finetuned on papers with a date on or before Apr 1, 2022
- random 5% of papers also excluded
- this results in 98,388 papers for finetuning
- during finetuning each paper title was given starting with the prompt `<year>\n\n <title>\n` (e.g. `2022\n\n Emb-GAM: an Interpretable and Efficient Predictor using Pre-trained Language Models\n`) |
jayanta/swin-large-patch4-window7-224-fv-finetuned-memes | jayanta | 2022-10-20T21:16:39Z | 64 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-20T19:49:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: swin-large-patch4-window7-224-fv-finetuned-memes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8601236476043277
- name: Precision
type: precision
value: 0.8582306285016578
- name: Recall
type: recall
value: 0.8601236476043277
- name: F1
type: f1
value: 0.8582797853944862
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-large-patch4-window7-224-fv-finetuned-memes
This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224](https://huggingface.co/microsoft/swin-large-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6502
- Accuracy: 0.8601
- Precision: 0.8582
- Recall: 0.8601
- F1: 0.8583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.2077 | 0.99 | 20 | 0.9499 | 0.6461 | 0.6764 | 0.6461 | 0.5863 |
| 0.5687 | 1.99 | 40 | 0.5365 | 0.7975 | 0.8018 | 0.7975 | 0.7924 |
| 0.3607 | 2.99 | 60 | 0.4007 | 0.8423 | 0.8419 | 0.8423 | 0.8398 |
| 0.203 | 3.99 | 80 | 0.3751 | 0.8509 | 0.8502 | 0.8509 | 0.8503 |
| 0.1728 | 4.99 | 100 | 0.4168 | 0.8509 | 0.8519 | 0.8509 | 0.8506 |
| 0.0963 | 5.99 | 120 | 0.4351 | 0.8586 | 0.8573 | 0.8586 | 0.8555 |
| 0.0956 | 6.99 | 140 | 0.4415 | 0.8547 | 0.8542 | 0.8547 | 0.8541 |
| 0.079 | 7.99 | 160 | 0.5312 | 0.8501 | 0.8475 | 0.8501 | 0.8459 |
| 0.0635 | 8.99 | 180 | 0.5376 | 0.8601 | 0.8578 | 0.8601 | 0.8577 |
| 0.0593 | 9.99 | 200 | 0.5060 | 0.8609 | 0.8615 | 0.8609 | 0.8604 |
| 0.0656 | 10.99 | 220 | 0.4997 | 0.8617 | 0.8573 | 0.8617 | 0.8587 |
| 0.0561 | 11.99 | 240 | 0.5430 | 0.8586 | 0.8604 | 0.8586 | 0.8589 |
| 0.0523 | 12.99 | 260 | 0.5354 | 0.8624 | 0.8643 | 0.8624 | 0.8626 |
| 0.0489 | 13.99 | 280 | 0.5539 | 0.8609 | 0.8572 | 0.8609 | 0.8577 |
| 0.0487 | 14.99 | 300 | 0.5785 | 0.8609 | 0.8591 | 0.8609 | 0.8591 |
| 0.0485 | 15.99 | 320 | 0.6186 | 0.8601 | 0.8578 | 0.8601 | 0.8573 |
| 0.0518 | 16.99 | 340 | 0.6342 | 0.8624 | 0.8612 | 0.8624 | 0.8606 |
| 0.0432 | 17.99 | 360 | 0.6302 | 0.8586 | 0.8598 | 0.8586 | 0.8580 |
| 0.0469 | 18.99 | 380 | 0.6323 | 0.8617 | 0.8606 | 0.8617 | 0.8604 |
| 0.0426 | 19.99 | 400 | 0.6502 | 0.8601 | 0.8582 | 0.8601 | 0.8583 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1.dev0
- Tokenizers 0.13.1
|
creditgrossepointe/creditgrossepointe | creditgrossepointe | 2022-10-20T21:13:37Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-10-20T21:12:54Z | We are a family-owned and operated Credit Repair company, founded in 2013. Our goal is to help you achieve financial success and reach your credit goals.
Follow this [link](https://grossepointepark.asapcreditrepairusa.com/) |
yuik/ppo-LunarLander-v2 | yuik | 2022-10-20T21:09:21Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-20T21:08:56Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.74 +/- 20.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
jinhybr/layoutlm-funsd-tf | jinhybr | 2022-10-20T20:48:26Z | 10 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-20T20:10:28Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: layoutlm-funsd-tf
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd-tf
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2509
- Validation Loss: 0.6942
- Train Overall Precision: 0.7291
- Train Overall Recall: 0.7888
- Train Overall F1: 0.7578
- Train Overall Accuracy: 0.8067
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Overall Precision | Train Overall Recall | Train Overall F1 | Train Overall Accuracy | Epoch |
|:----------:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:----------------------:|:-----:|
| 1.6886 | 1.4100 | 0.2324 | 0.2313 | 0.2318 | 0.5009 | 0 |
| 1.1702 | 0.8486 | 0.5971 | 0.6618 | 0.6278 | 0.7338 | 1 |
| 0.7521 | 0.7032 | 0.6561 | 0.7341 | 0.6929 | 0.7687 | 2 |
| 0.5727 | 0.6268 | 0.6736 | 0.7662 | 0.7169 | 0.7957 | 3 |
| 0.4586 | 0.6322 | 0.6909 | 0.7772 | 0.7315 | 0.7999 | 4 |
| 0.3725 | 0.6378 | 0.7134 | 0.7782 | 0.7444 | 0.8096 | 5 |
| 0.2987 | 0.6835 | 0.7270 | 0.7777 | 0.7515 | 0.8056 | 6 |
| 0.2509 | 0.6942 | 0.7291 | 0.7888 | 0.7578 | 0.8067 | 7 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.6.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
asapcreditcolumbus/asapcreditrepaircolumbus | asapcreditcolumbus | 2022-10-20T20:36:15Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-10-20T20:35:14Z | Are you looking for [credit repair in Columbus](https://columbus.asapcreditrepairusa.com/)? You are at the right place.
We’re not your average credit repair firm, we truly care, so we only charge for the items we pursue on your report. Not only does this make us one of the FASTEST credit restoration companies, but we’re also one of the most affordable. |
Shaier/longformer_quail | Shaier | 2022-10-20T19:58:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"longformer",
"multiple-choice",
"generated_from_trainer",
"dataset:quail",
"endpoints_compatible",
"region:us"
]
| multiple-choice | 2022-10-20T15:42:17Z | ---
tags:
- generated_from_trainer
datasets:
- quail
model-index:
- name: longformer_quail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer_quail
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the quail dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.9568
- eval_accuracy: 0.5791
- eval_runtime: 44.254
- eval_samples_per_second: 12.564
- eval_steps_per_second: 6.282
- epoch: 4.0
- step: 816
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 25
- total_train_batch_size: 50
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.11.0
|
allenai/drug_combinations_lm_pubmedbert | allenai | 2022-10-20T18:25:13Z | 39 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"biomedical",
"bioNLP",
"en",
"arxiv:2205.02289",
"arxiv:2007.15779",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-19T11:25:49Z | ---
language:
- en
tags:
- biomedical
- bioNLP
---
This is a version of [PubmedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext?text=%5BMASK%5D+is+a+tumor+suppressor+gene.) which has been domain-adapted (via additional pretraining) to a set of PubMed abstracts that likely discuss multiple-drug therapies. This model was the strongest contextualized encoder in the experiments in the paper ["A Dataset for N-ary Relation Extraction of Drug Combinations"](https://arxiv.org/abs/2205.02289), when used as a component of a larger relation classification model (also hosted [here on Huggingface](https://huggingface.co/allenai/drug-combo-classifier-pubmedbert-dapt)).
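Although the card does not include a usage snippet, the checkpoint is tagged for fill-mask, so it can be queried for masked-token predictions. The snippet below is a minimal sketch (not from the original authors), assuming the standard `transformers` fill-mask pipeline; the query sentence mirrors the widget example linked above:

```python
from transformers import pipeline

# Load the domain-adapted masked language model for masked-token prediction
fill_mask = pipeline("fill-mask", model="allenai/drug_combinations_lm_pubmedbert")

# Example query about a multi-drug therapy
print(fill_mask("Paxlovid works well in combination with [MASK] for treating breast cancer."))
```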
If you use this model, cite both
```latex
@misc{pubmedbert,
author = {Yu Gu and Robert Tinn and Hao Cheng and Michael Lucas and Naoto Usuyama and Xiaodong Liu and Tristan Naumann and Jianfeng Gao and Hoifung Poon},
title = {Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing},
year = {2020},
eprint = {arXiv:2007.15779},
}
```
and
```latex
@inproceedings{Tiktinsky2022ADF,
title = "A Dataset for N-ary Relation Extraction of Drug Combinations",
author = "Tiktinsky, Aryeh and Viswanathan, Vijay and Niezni, Danna and Meron Azagury, Dana and Shamay, Yosi and Taub-Tabib, Hillel and Hope, Tom and Goldberg, Yoav",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.233",
doi = "10.18653/v1/2022.naacl-main.233",
pages = "3190--3203",
}
``` |
allenai/drug-combo-classifier-pubmedbert-dapt | allenai | 2022-10-20T18:23:30Z | 23 | 5 | transformers | [
"transformers",
"pytorch",
"bert",
"en",
"arxiv:2205.02289",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2022-05-04T03:20:11Z | ---
language: en
license: mit
---
This is the baseline model used in most experiments in the paper ["A Dataset for N-ary Relation Extraction of Drug Combinations"](https://arxiv.org/abs/2205.02289).
*(for just the domain-adapted masked language model that we use underneath this model, see [here](https://huggingface.co/allenai/drug_combinations_lm_pubmedbert?text=Paxlovid+works+well+in+combination+with+%5BMASK%5D+for+treating+breast+cancer.))*
**Steps to load this model**
1) Download accompanying code:
```
git clone https://github.com/allenai/drug-combo-extraction.git
conda create --name drug_combo python=3.8.5
conda activate drug_combo
```
2) Download model from Huggingface:
```
git lfs install
git clone https://huggingface.co/allenai/drug-combo-classifier-pubmedbert-dapt
```
3) Load model (`in Python`):
```
from modeling.model import load_model
checkpoint_path = "drug-combo-classifier-pubmedbert-dapt"
model, tokenizer, metadata = load_model(checkpoint_path)
``` |
jxm/u-PMLM-R | jxm | 2022-10-20T18:05:26Z | 5 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2004.11579",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-06-01T16:08:29Z | PMLM is the language model described in [Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order](https://arxiv.org/abs/2004.11579), which is trained with probabilistic masking. This is the "PMLM-R" variant, adapted from [the authors' original implementation](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/PMLM). |
jxm/u-PMLM-A | jxm | 2022-10-20T18:05:03Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2004.11579",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-06-01T17:37:45Z | PMLM is the language model described in [Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order](https://arxiv.org/abs/2004.11579), which is trained with probabilistic masking. This is the "PMLM-A" variant, adapted from [the authors' original implementation](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/PMLM). |
mprzibilla/super_large_finetune_M01 | mprzibilla | 2022-10-20T17:56:53Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-19T12:05:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: super_large_finetune_M01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# super_large_finetune_M01
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9906
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 35440
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:---:|
| 10.0626 | 20.0 | 70880 | 3.0307 | 1.0 |
| 2.5319 | 40.0 | 141760 | 3.0316 | 1.0 |
| 2.4978 | 60.0 | 212640 | 3.0123 | 1.0 |
| 2.4849 | 80.0 | 283520 | 2.9923 | 1.0 |
| 2.4776 | 100.0 | 354400 | 3.0092 | 1.0 |
| 2.4733 | 120.0 | 425280 | 2.9964 | 1.0 |
| 2.4702 | 140.0 | 496160 | 2.9968 | 1.0 |
| 2.4686 | 160.0 | 567040 | 2.9937 | 1.0 |
| 2.4669 | 180.0 | 637920 | 2.9908 | 1.0 |
| 2.4661 | 200.0 | 708800 | 2.9906 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
rbroc/contrastive-user-encoder-singlepost | rbroc | 2022-10-20T16:56:21Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-10-19T08:56:38Z | ---
language:
- en
license: apache-2.0
library_name: transformers
---
### Contrastive user encoder (single post)
This model is a `DistilBertModel` trained by fine-tuning `distilbert-base-uncased` on author-based triplet loss.
#### Details
Training and evaluation details are provided in our EMNLP Findings paper:
- Rocca, R., & Yarkoni, T. (2022), Language as a fingerprint: Self-supervised learning of user encodings using transformers, to appear in *Findings of the Association for Computational Linguistics: EMNLP 2022*
#### Training
We fine-tuned DistilBERT on triplets consisting of:
- a single Reddit submission from a given user (the "anchor") - see ```rbroc/contrastive-user-encoder-multipost``` for a model trained on aggregated embeddings of multiple anchors;
- an additional post from the same user (a "positive example");
- a post from a different, randomly selected user (the "negative example")
To compute the loss, we use the [CLS] encoding of the anchor, positive example and negative example from the last layer of the DistilBERT encoder. We optimize the triplet objective \\(\max(||f(a) - f(p)|| - ||f(a) - f(n)|| + \alpha, 0)\\) (a minimal code sketch follows the list below)
where:
- \\( f(a)\\) is the [CLS] encoding of the anchor;
- \\( f(n) \\) is the [CLS] encoding of the negative example;
- \\( f(p) \\) is the [CLS] encoding of the positive example;
- \\( \alpha \\) is a tunable parameter called margin. Here, we tuned this to \\( \alpha = 1.0\\)
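The following is a minimal sketch of this objective (illustrative only, not the original training code; it assumes `torch` and `transformers` with `distilbert-base-uncased` as the encoder, and toy example posts):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased")

def cls_encoding(texts):
    # [CLS] encoding from the last layer of the DistilBERT encoder
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0]

anchor = cls_encoding(["A post written by user A."])              # f(a)
positive = cls_encoding(["Another post written by user A."])      # f(p)
negative = cls_encoding(["A post written by a different user."])  # f(n)

alpha = 1.0  # margin
loss = torch.clamp(
    torch.norm(anchor - positive, dim=-1) - torch.norm(anchor - negative, dim=-1) + alpha,
    min=0.0,
).mean()
```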
#### Evaluation and usage
The model yields performance advantages on downstream user-based classification tasks.
We encourage usage and benchmarking on tasks involving:
- prediction of user traits (e.g., personality);
- extraction of user-aware text encodings (e.g., style modeling);
- contextualized text modeling, where standard text representations are complemented with compact user representations
#### Limitations
Being exclusively trained on Reddit data, our models probably overfit to linguistic markers and traits which are relevant to characterizing the Reddit user population, but less salient in the general population. Domain-specific fine-tuning may be required before deployment.
Furthermore, our self-supervised approach enforces little or no control over biases, which models may actively use as part of their heuristics in contrastive and downstream tasks. |
tringuyexn/ppo-LunarLander-v2 | tringuyexn | 2022-10-20T16:55:51Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-20T16:55:27Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 237.09 +/- 23.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
north/t5_base_scand3M | north | 2022-10-20T16:16:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"no",
"nn",
"sv",
"da",
"is",
"en",
"dataset:nbailab/NCC",
"dataset:mc4",
"dataset:wikipedia",
"arxiv:2104.09617",
"arxiv:1910.10683",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-13T09:02:03Z | ---
language:
- no
- nn
- sv
- da
- is
- en
datasets:
- nbailab/NCC
- mc4
- wikipedia
widget:
- text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede.
- text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den.
license: other
---
The North-T5-models are a set of Norwegian and Scandinavian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)||
|North-T5‑NCC‑lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)||
|North-T5‑NCC‑modern|[🤗](https://huggingface.co/north/t5_small_NCC_modern)|[🤗](https://huggingface.co/north/t5_base_NCC_modern)|[🤗](https://huggingface.co/north/t5_large_NCC_modern)|[🤗](https://huggingface.co/north/t5_xl_NCC_modern)||
|North-T5‑NCC‑modern‑lm|[🤗](https://huggingface.co/north/t5_small_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_modern_lm)||
|North-T5‑NCC‑scand|[🤗](https://huggingface.co/north/t5_small_NCC_scand)|[🤗](https://huggingface.co/north/t5_base_NCC_scand)|[🤗](https://huggingface.co/north/t5_large_NCC_scand)|[🤗](https://huggingface.co/north/t5_xl_NCC_scand)||
|North-T5‑scand|[🤗](https://huggingface.co/north/t5_small_scand)|[🤗](https://huggingface.co/north/t5_base_scand)|[🤗](https://huggingface.co/north/t5_large_scand)||
|North-byT5‑NCC|[🤗](https://huggingface.co/north/byt5_small_NCC)|[🤗](https://huggingface.co/north/byt5_base_NCC)|[🤗](https://huggingface.co/north/byt5_large_NCC)||
|North-T5‑scand3M|✔|[🤗](https://huggingface.co/north/t5_large_scand3M)|[🤗](https://huggingface.co/north/t5_xl_scand3M)||
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/base/scandinavian3k_t5x_base/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly encourage external researchers to make their own evaluations. The main advantage of the T5 models is their flexibility. Traditionally, encoder-only models (like BERT) excel in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. With that in mind, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT models are based on the test results of the best model after 10 runs with early stopping and a decaying learning rate. The T5 results are the average of five runs on the evaluation set. The small model was trained for 10.000 steps, while the rest were trained for 5.000 steps. A fixed learning rate was used (no decay), without early stopping, and the recommended rank classification was not used either. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results for the T5 models might actually be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be made available shortly.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained for an additional 500.000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. For tasks such as translation and NLI, it is well documented that a step of unsupervised LM training before finetuning gives a clear benefit.|
|**North‑T5‑NCC‑modern**| The model is pretrained for an additional 200k steps on a balanced Bokmål and Nynorsk corpus. While this was originally done for translation between Bokmål and Nynorsk, it might also give improved results on tasks where you know that the input/output is modern "standard" text. A significant part of the training corpus is newspapers and reports.|
|**North‑T5‑NCC‑modern‑lm**| Trained as above but with an additional 100k "language model"-pretraining.|
|**North‑T5‑NCC‑scand**|The model is pretrained for an additional 200k steps on a Scandinavian corpus (Bokmål, Nynorsk, Danish, Swedish and Icelandic (+ a tiny bit of Faroese)). The model was trained to better understand what effect such training has on the various languages.|
|**North‑T5‑scand**|Pretrained for 1,700,000 steps starting from the mT5 checkpoint. The purpose of the model is to study the effect of different training regimes on Scandinavian language models.|
|**North‑byT5‑base**| This is a vocabulary-free version of T5. It is trained exactly like North-T5, but instead of the 250,112-token vocabulary, this model operates directly on the raw text. The model architecture might be of particular interest for tasks such as spelling correction, OCR cleaning and handwriting recognition. However, it will - by design - have a much shorter maximum sequence length.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base models in a Google Colab.
Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)
## Training details
All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is, however, usually not very compute-intensive, and in most cases it can be performed even with free online training resources.
All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks.
While the huge models will almost always give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base model. The base models can easily be finetuned on a standard graphics card or a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow format.
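As a minimal sketch of loading one of the converted checkpoints with `transformers` (using this repository as an example and assuming the usual T5 sentinel-token span-filling format; fine-tuning is still recommended before using the model for a downstream task):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "north/t5_base_scand3M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Span-filling with T5 sentinel tokens, as in the widget example above
text = "På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```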
## Future
I will continue to train and release additional models in this set. Which models are added depends on feedback from the users.
## Thanks
This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition, he has been a discussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thougroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
|
amanneo/mail-generator-mini-v2 | amanneo | 2022-10-20T14:49:33Z | 3 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-20T13:12:41Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: amanneo/mail-generator-mini-v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amanneo/mail-generator-mini-v2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5212
- Train Accuracy: 0.0027
- Validation Loss: 5.5781
- Validation Accuracy: 0.0
- Epoch: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -994, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 2.5928 | 0.0171 | 5.5430 | 0.0048 | 0 |
| 2.6003 | 0.0207 | 5.5430 | 0.0048 | 1 |
| 2.5954 | 0.0171 | 5.5508 | 0.0048 | 2 |
| 2.5775 | 0.0190 | 5.5508 | 0.0024 | 3 |
| 2.5758 | 0.0231 | 5.5508 | 0.0024 | 4 |
| 2.5742 | 0.0207 | 5.5586 | 0.0048 | 5 |
| 2.5547 | 0.0209 | 5.5586 | 0.0048 | 6 |
| 2.5566 | 0.0188 | 5.5586 | 0.0048 | 7 |
| 2.5391 | 0.0193 | 5.5586 | 0.0048 | 8 |
| 2.5378 | 0.0215 | 5.5508 | 0.0048 | 9 |
| 2.5238 | 0.0188 | 5.5469 | 0.0048 | 10 |
| 2.5150 | 0.0160 | 5.5508 | 0.0048 | 11 |
| 2.4967 | 0.0174 | 5.5508 | 0.0071 | 12 |
| 2.4691 | 0.0193 | 5.5430 | 0.0071 | 13 |
| 2.4626 | 0.0163 | 5.5430 | 0.0071 | 14 |
| 2.4417 | 0.0231 | 5.5352 | 0.0048 | 15 |
| 2.4323 | 0.0215 | 5.5352 | 0.0048 | 16 |
| 2.4193 | 0.0226 | 5.5469 | 0.0048 | 17 |
| 2.4170 | 0.0185 | 5.5469 | 0.0048 | 18 |
| 2.3743 | 0.0193 | 5.5312 | 0.0048 | 19 |
| 2.3730 | 0.0207 | 5.5312 | 0.0048 | 20 |
| 2.3535 | 0.0198 | 5.5312 | 0.0048 | 21 |
| 2.3372 | 0.0182 | 5.5312 | 0.0071 | 22 |
| 2.3324 | 0.0177 | 5.5312 | 0.0048 | 23 |
| 2.3011 | 0.0204 | 5.5195 | 0.0048 | 24 |
| 2.2650 | 0.0212 | 5.5117 | 0.0048 | 25 |
| 2.2568 | 0.0198 | 5.5078 | 0.0048 | 26 |
| 2.2331 | 0.0196 | 5.5156 | 0.0048 | 27 |
| 2.2021 | 0.0193 | 5.5078 | 0.0048 | 28 |
| 2.1807 | 0.0204 | 5.5039 | 0.0048 | 29 |
| 2.1691 | 0.0190 | 5.5 | 0.0 | 30 |
| 2.1463 | 0.0174 | 5.4766 | 0.0 | 31 |
| 2.1097 | 0.0196 | 5.4844 | 0.0 | 32 |
| 2.1014 | 0.0179 | 5.4844 | 0.0024 | 33 |
| 2.0833 | 0.0177 | 5.4844 | 0.0024 | 34 |
| 2.0423 | 0.0201 | 5.4844 | 0.0 | 35 |
| 2.0163 | 0.0198 | 5.4844 | 0.0 | 36 |
| 1.9909 | 0.0168 | 5.4883 | 0.0 | 37 |
| 1.9774 | 0.0207 | 5.4805 | 0.0 | 38 |
| 1.9414 | 0.0207 | 5.4844 | 0.0 | 39 |
| 1.9206 | 0.0215 | 5.4766 | 0.0 | 40 |
| 1.8849 | 0.0182 | 5.4805 | 0.0 | 41 |
| 1.8732 | 0.0193 | 5.4648 | 0.0 | 42 |
| 1.8460 | 0.0160 | 5.4609 | 0.0 | 43 |
| 1.8171 | 0.0168 | 5.4648 | 0.0 | 44 |
| 1.7791 | 0.0201 | 5.4531 | 0.0 | 45 |
| 1.7583 | 0.0158 | 5.4570 | 0.0 | 46 |
| 1.7360 | 0.0171 | 5.4570 | 0.0 | 47 |
| 1.7061 | 0.0120 | 5.4297 | 0.0 | 48 |
| 1.6802 | 0.0155 | 5.4258 | 0.0 | 49 |
| 1.6551 | 0.0182 | 5.4141 | 0.0 | 50 |
| 1.6289 | 0.0130 | 5.4219 | 0.0 | 51 |
| 1.5981 | 0.0130 | 5.3945 | 0.0 | 52 |
| 1.5656 | 0.0128 | 5.4297 | 0.0 | 53 |
| 1.5535 | 0.0168 | 5.4219 | 0.0 | 54 |
| 1.5184 | 0.0141 | 5.4102 | 0.0 | 55 |
| 1.4943 | 0.0149 | 5.4023 | 0.0 | 56 |
| 1.4616 | 0.0122 | 5.4062 | 0.0 | 57 |
| 1.4344 | 0.0111 | 5.4062 | 0.0 | 58 |
| 1.3965 | 0.0111 | 5.4141 | 0.0 | 59 |
| 1.3643 | 0.0122 | 5.4375 | 0.0 | 60 |
| 1.3309 | 0.0087 | 5.4453 | 0.0 | 61 |
| 1.3215 | 0.0090 | 5.4648 | 0.0 | 62 |
| 1.3058 | 0.0084 | 5.4727 | 0.0 | 63 |
| 1.2700 | 0.0109 | 5.4453 | 0.0 | 64 |
| 1.2396 | 0.0079 | 5.4609 | 0.0 | 65 |
| 1.2189 | 0.0092 | 5.4375 | 0.0 | 66 |
| 1.1855 | 0.0079 | 5.4375 | 0.0 | 67 |
| 1.1592 | 0.0073 | 5.4375 | 0.0 | 68 |
| 1.1219 | 0.0071 | 5.4648 | 0.0 | 69 |
| 1.1071 | 0.0065 | 5.4570 | 0.0 | 70 |
| 1.0848 | 0.0060 | 5.4375 | 0.0 | 71 |
| 1.0581 | 0.0076 | 5.4453 | 0.0 | 72 |
| 1.0316 | 0.0090 | 5.4570 | 0.0 | 73 |
| 1.0068 | 0.0063 | 5.4219 | 0.0 | 74 |
| 0.9832 | 0.0060 | 5.4570 | 0.0 | 75 |
| 0.9534 | 0.0046 | 5.4570 | 0.0 | 76 |
| 0.9378 | 0.0057 | 5.4648 | 0.0 | 77 |
| 0.9170 | 0.0033 | 5.4844 | 0.0 | 78 |
| 0.8941 | 0.0041 | 5.4883 | 0.0 | 79 |
| 0.8666 | 0.0030 | 5.4922 | 0.0 | 80 |
| 0.8419 | 0.0054 | 5.4375 | 0.0 | 81 |
| 0.8200 | 0.0035 | 5.4492 | 0.0 | 82 |
| 0.8020 | 0.0022 | 5.4648 | 0.0 | 83 |
| 0.7785 | 0.0057 | 5.4883 | 0.0 | 84 |
| 0.7607 | 0.0052 | 5.4648 | 0.0 | 85 |
| 0.7454 | 0.0041 | 5.5078 | 0.0 | 86 |
| 0.7208 | 0.0024 | 5.5078 | 0.0 | 87 |
| 0.7040 | 0.0027 | 5.5078 | 0.0 | 88 |
| 0.6799 | 0.0041 | 5.5156 | 0.0 | 89 |
| 0.6594 | 0.0030 | 5.5312 | 0.0 | 90 |
| 0.6397 | 0.0030 | 5.5312 | 0.0 | 91 |
| 0.6217 | 0.0030 | 5.5195 | 0.0 | 92 |
| 0.6112 | 0.0033 | 5.5195 | 0.0 | 93 |
| 0.5937 | 0.0046 | 5.5625 | 0.0 | 94 |
| 0.5745 | 0.0035 | 5.5625 | 0.0 | 95 |
| 0.5616 | 0.0027 | 5.5586 | 0.0 | 96 |
| 0.5468 | 0.0043 | 5.5742 | 0.0 | 97 |
| 0.5354 | 0.0027 | 5.5781 | 0.0 | 98 |
| 0.5212 | 0.0027 | 5.5781 | 0.0 | 99 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Mattbrenr/What | Mattbrenr | 2022-10-20T14:07:37Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2022-10-20T14:07:37Z | ---
license: creativeml-openrail-m
---
|
auriolar/Reinformce-Pong-PLE-v0 | auriolar | 2022-10-20T14:07:27Z | 0 | 0 | null | [
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-20T14:07:14Z | ---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinformce-Pong-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
jeonsworld/ddpm-butterflies-128 | jeonsworld | 2022-10-20T13:56:19Z | 3 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-10-20T12:40:13Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal example (assumption: standard diffusers DDPMPipeline usage for this checkpoint)
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("jeonsworld/ddpm-butterflies-128")
image = pipeline().images[0]  # run the denoising loop and take the first generated image
image.save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/jeonsworld/ddpm-butterflies-128/tensorboard?#scalars)
|
lewtun/quantized-distilbert-banking77 | lewtun | 2022-10-20T12:47:39Z | 13 | 0 | transformers | [
"transformers",
"onnx",
"text-classification",
"optimum",
"dataset:banking77",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-06-08T09:42:56Z | ---
tags:
- optimum
datasets:
- banking77
metrics:
- accuracy
model-index:
- name: quantized-distilbert-banking77
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
metrics:
- name: Accuracy
type: accuracy
value: 0.9244
---
# Quantized-distilbert-banking77
This model is a dynamically quantized version of [optimum/distilbert-base-uncased-finetuned-banking77](https://huggingface.co/optimum/distilbert-base-uncased-finetuned-banking77) on the `banking77` dataset.
The model was created using the [dynamic-quantization](https://github.com/huggingface/workshops/tree/main/mlops-world) notebook from a workshop presented at MLOps World 2022.
It achieves the following results on the evaluation set:
**Accuracy**
- Vanilla model: 92.5%
- Quantized model: 92.44%
> The quantized model retains 99.93% of the FP32 model's accuracy
**Latency**
Payload sequence length: 128
Instance type: AWS c6i.xlarge
| latency | vanilla transformers | quantized optimum model | improvement |
|---------|----------------------|-------------------------|-------------|
| p95 | 63.24ms | 37.06ms | 1.71x |
| avg | 62.87ms | 37.93ms | 1.66x |
## How to use
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import pipeline, AutoTokenizer
model = ORTModelForSequenceClassification.from_pretrained("lewtun/quantized-distilbert-banking77")
tokenizer = AutoTokenizer.from_pretrained("lewtun/quantized-distilbert-banking77")
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
classifier("What is the exchange rate like on this app?")
``` |
ChaosW/autohome-deberta-v2-xlarge-base | ChaosW | 2022-10-20T12:21:06Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"fill-mask",
"bert",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-20T12:19:19Z | ---
language:
- zh
license: apache-2.0
tags:
- bert
inference: true
widget:
- text: "生活的真谛是[MASK]。"
---
# Erlangshen-Deberta-97M-Chinese, one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
This is a 97-million-parameter DeBERTa-v2 base model with an encoder-only transformer architecture. It was trained on 180 GB of Chinese data for 7 days on 24 A100 (40 GB) GPUs, consuming about 1B samples in total.
## Task Description
Erlangshen-Deberta-97M-Chinese is pre-trained with a BERT-style masked language modeling task, following the DeBERTa [paper](https://readpaper.com/paper/3033187248).
## Usage
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline
import torch
tokenizer=AutoTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-97M-Chinese', use_fast=False)
model=AutoModelForMaskedLM.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-97M-Chinese')
text = '生活的真谛是[MASK]。'
fillmask_pipe = FillMaskPipeline(model, tokenizer, device=7)
print(fillmask_pipe(text, top_k=10))
```
## Finetune
We present the dev results on some tasks.
| Model | OCNLI | CMNLI |
| ---------------------------------- | ----- | ------ |
| RoBERTa-base | 0.743 | 0.7973 |
| **Erlangshen-Deberta-97M-Chinese** | 0.752 | 0.807 |
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
knkarthick/Action_Items | knkarthick | 2022-10-20T12:10:12Z | 75 | 7 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"seq2seq",
"en",
"dataset:Custom",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-20T10:45:18Z | ---
language: en
tags:
- distilbert
- seq2seq
- text-classification
license: apache-2.0
datasets:
- Custom
metrics:
- Accuracy
- Precision
- Recall
widget:
- text: |-
Let's start the project as soon as possible as we are running out of deadline.
model-index:
- name: Action_Items
results:
- task:
name: Action Item Classification
type: text-classification
dataset:
name: Custom
type: custom
metrics:
- name: Validation Accuracy
type: accuracy
value:
- name: Validation Precision
type: precision
value:
- name: Validation Recall
type: recall
value:
- name: Test Accuracy
type: accuracy
value:
- name: Test Precision
type: precision
value:
- name: Test Recall
type: recall
value:
---
This model was obtained by fine-tuning 'distilbert' on a custom dataset.
LABEL_0 - Not an Action Item
LABEL_1 - Action Item
## Usage
# Example 1
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="knkarthick/Action_Items")
text = '''
Customer portion will have the dependency of , you know , fifty five probably has to be on XGEVA before we can start that track , but we can at least start the enablement track for sales and CSM who are as important as customers because they're the top of our funnel , especially sales.
'''
classifier(text)
```
# Example 2
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="knkarthick/Action_Items")
text = '''
India, officially the Republic of India, is a country in South Asia.
'''
classifier(text)
```
# Example 3
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="knkarthick/Action_Items")
text = '''
We have been running the business successfully for over a decade now.
'''
classifier(text)
``` |
bthomas/article2keyword2.1b_barthez-orangesum-title_finetuned16_for_mlm | bthomas | 2022-10-20T12:04:52Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"mlm",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-20T09:46:19Z | ---
license: apache-2.0
tags:
- mlm
- generated_from_trainer
model-index:
- name: article2keyword2.1b_barthez-orangesum-title_finetuned16_for_mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# article2keyword2.1b_barthez-orangesum-title_finetuned16_for_mlm
This model is a fine-tuned version of [moussaKam/barthez-orangesum-title](https://huggingface.co/moussaKam/barthez-orangesum-title) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2976 | 1.0 | 1353 | 0.0543 |
| 0.0566 | 2.0 | 2706 | 0.0509 |
| 0.0487 | 3.0 | 4059 | 0.0458 |
| 0.0433 | 4.0 | 5412 | 0.0456 |
| 0.04 | 5.0 | 6765 | 0.0460 |
| 0.0373 | 6.0 | 8118 | 0.0454 |
| 0.0355 | 7.0 | 9471 | 0.0465 |
| 0.0328 | 8.0 | 10824 | 0.0474 |
| 0.0317 | 9.0 | 12177 | 0.0470 |
| 0.03 | 10.0 | 13530 | 0.0488 |
| 0.0285 | 11.0 | 14883 | 0.0489 |
| 0.0272 | 12.0 | 16236 | 0.0500 |
| 0.0262 | 13.0 | 17589 | 0.0510 |
| 0.0258 | 14.0 | 18942 | 0.0511 |
| 0.0245 | 15.0 | 20295 | 0.0522 |
| 0.0239 | 16.0 | 21648 | 0.0525 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
danielsaggau/lbert_scotus_classsification | danielsaggau | 2022-10-20T11:09:25Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:lex_glue",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-20T11:02:37Z | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- lex_glue
model-index:
- name: lbert_scotus_classsification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lbert_scotus_classsification
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the lex_glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
readerbench/RoSummary-medium | readerbench | 2022-10-20T10:00:04Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-19T06:32:45Z | Model card for RoSummary-medium
---
language:
- ro
---
# RoSummary
This is a version of the RoGPT2 model trained on the [AlephNews](https://huggingface.co/datasets/readerbench/AlephNews) dataset for the summarization task. There are 3 trained versions; they are available on the HuggingFace Hub:
* [base](https://huggingface.co/readerbench/RoSummary-base)
* [medium](https://huggingface.co/readerbench/RoSummary-medium)
* [large](https://huggingface.co/readerbench/RoSummary-large)
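
As a rough illustration (this snippet is not part of the original card, and the exact prompt format expected by RoSummary is an assumption), the `medium` checkpoint can be loaded like any GPT-2-style causal language model:

```python
# Hypothetical sketch: load the GPT-2-based summarization model and generate with beam search.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("readerbench/RoSummary-medium")
model = AutoModelForCausalLM.from_pretrained("readerbench/RoSummary-medium")

text = "Textul articolului ..."  # placeholder article text; the real prompt format may differ
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```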
## Evaluation on [AlephNews](https://huggingface.co/datasets/readerbench/AlephNews)
| Model | Decode Method | BERTScore Precision | BERTScore Recall | BERTScore F1-Score | ROUGE-1 | ROUGE-2 | ROUGE-L |
|:------:|:--------------:|:---------:|:---------:|:--------:|:--------:|:--------:|:--------:|
| | Greedy | 0.7335 | 0.7399 | 0.7358 | 0.3360 | 0.1862 | 0.3333 |
| Base | Beam Search | 0.7354 | 0.7468 | 0.7404 | 0.3480 | 0.1991 | 0.3416 |
| | Top-p Sampling | 0.7296 | 0.7299 | 0.7292 | 0.3058 | 0.1452 | 0.2951 |
| | Greedy | 0.7378 | 0.7401 | 0.7380 | 0.3422 | 0.1922 | 0.3394 |
| Medium | Beam Search | 0.7390 | **0.7493**|**0.7434**|**0.3546**|**0.2061**|**0.3467**|
| | Top-p Sampling | 0.7315 | 0.7285 | 0.7294 | 0.3042 | 0.1400 | 0.2921 |
| | Greedy | 0.7376 | 0.7424 | 0.7391 | 0.3414 | 0.1895 | 0.3355 |
| Large | Beam Search | **0.7394**| 0.7470 | 0.7424 | 0.3492 | 0.1995 | 0.3384 |
| | Top-p Sampling | 0.7311 | 0.7301 | 0.7299 | 0.3051 | 0.1418 | 0.2931 |
## Acknowledgments
---
Research supported with [Cloud TPUs](https://cloud.google.com/tpu/) from Google's [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc)
|
bthomas/article2keyword2.1b_paraphrase-multilingual-MiniLM-L12-v2_finetuned_for_mlm | bthomas | 2022-10-20T09:36:12Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"mlm",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-20T08:33:40Z | ---
license: apache-2.0
tags:
- mlm
- generated_from_trainer
model-index:
- name: article2keyword2.1b_paraphrase-multilingual-MiniLM-L12-v2_finetuned_for_mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# article2keyword2.1b_paraphrase-multilingual-MiniLM-L12-v2_finetuned_for_mlm
This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0673
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3777 | 1.0 | 1353 | 0.3168 |
| 0.2358 | 2.0 | 2706 | 0.1564 |
| 0.1372 | 3.0 | 4059 | 0.1149 |
| 0.1046 | 4.0 | 5412 | 0.0956 |
| 0.086 | 5.0 | 6765 | 0.0853 |
| 0.0741 | 6.0 | 8118 | 0.0786 |
| 0.0653 | 7.0 | 9471 | 0.0750 |
| 0.0594 | 8.0 | 10824 | 0.0726 |
| 0.0542 | 9.0 | 12177 | 0.0699 |
| 0.0504 | 10.0 | 13530 | 0.0692 |
| 0.047 | 11.0 | 14883 | 0.0684 |
| 0.0444 | 12.0 | 16236 | 0.0675 |
| 0.0423 | 13.0 | 17589 | 0.0674 |
| 0.0404 | 14.0 | 18942 | 0.0673 |
| 0.0392 | 15.0 | 20295 | 0.0672 |
| 0.0379 | 16.0 | 21648 | 0.0673 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mprzibilla/super_large_finetune_CM01 | mprzibilla | 2022-10-20T09:04:35Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-19T23:12:30Z | ---
tags:
- generated_from_trainer
model-index:
- name: super_large_finetune_CM01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# super_large_finetune_CM01
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2285
- Wer: 0.7714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 15
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 857
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0031 | 5.0 | 1715 | 1.9766 | 0.7857 |
| 0.2107 | 10.0 | 3430 | 3.8748 | 0.8238 |
| 0.1393 | 15.0 | 5145 | 4.7403 | 0.7952 |
| 0.0931 | 20.0 | 6860 | 3.5077 | 0.6667 |
| 0.0649 | 25.0 | 8575 | 7.7419 | 0.9333 |
| 0.0592 | 30.0 | 10290 | 5.6440 | 0.7762 |
| 0.0396 | 35.0 | 12005 | 6.9629 | 0.6810 |
| 0.03 | 40.0 | 13720 | 7.8282 | 0.7524 |
| 0.0191 | 45.0 | 15435 | 6.4626 | 0.7429 |
| 0.0121 | 50.0 | 17150 | 7.2285 | 0.7714 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jayanta/vit-base-patch16-224-FV-20epochs-finetuned-memes | jayanta | 2022-10-20T08:21:22Z | 43 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-20T07:39:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: vit-base-patch16-224-FV-20epochs-finetuned-memes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8632148377125193
- name: Precision
type: precision
value: 0.8617373130509159
- name: Recall
type: recall
value: 0.8632148377125193
- name: F1
type: f1
value: 0.8621436376894498
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-FV-20epochs-finetuned-memes
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6532
- Accuracy: 0.8632
- Precision: 0.8617
- Recall: 0.8632
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.1709 | 0.99 | 20 | 0.9393 | 0.6971 | 0.6896 | 0.6971 | 0.6890 |
| 0.5295 | 1.99 | 40 | 0.5024 | 0.8091 | 0.8210 | 0.8091 | 0.8133 |
| 0.2909 | 2.99 | 60 | 0.4070 | 0.8539 | 0.8529 | 0.8539 | 0.8529 |
| 0.1435 | 3.99 | 80 | 0.4136 | 0.8539 | 0.8522 | 0.8539 | 0.8522 |
| 0.0928 | 4.99 | 100 | 0.4495 | 0.8478 | 0.8548 | 0.8478 | 0.8507 |
| 0.0643 | 5.99 | 120 | 0.4897 | 0.8594 | 0.8572 | 0.8594 | 0.8573 |
| 0.061 | 6.99 | 140 | 0.5040 | 0.8423 | 0.8490 | 0.8423 | 0.8453 |
| 0.0519 | 7.99 | 160 | 0.5266 | 0.8524 | 0.8502 | 0.8524 | 0.8510 |
| 0.0546 | 8.99 | 180 | 0.5200 | 0.8586 | 0.8632 | 0.8586 | 0.8605 |
| 0.0478 | 9.99 | 200 | 0.5654 | 0.8555 | 0.8548 | 0.8555 | 0.8548 |
| 0.0509 | 10.99 | 220 | 0.5774 | 0.8609 | 0.8626 | 0.8609 | 0.8616 |
| 0.0467 | 11.99 | 240 | 0.5847 | 0.8594 | 0.8602 | 0.8594 | 0.8594 |
| 0.0468 | 12.99 | 260 | 0.5909 | 0.8601 | 0.8597 | 0.8601 | 0.8596 |
| 0.0469 | 13.99 | 280 | 0.5970 | 0.8563 | 0.8560 | 0.8563 | 0.8561 |
| 0.0438 | 14.99 | 300 | 0.6234 | 0.8594 | 0.8583 | 0.8594 | 0.8586 |
| 0.0441 | 15.99 | 320 | 0.6190 | 0.8563 | 0.8582 | 0.8563 | 0.8570 |
| 0.0431 | 16.99 | 340 | 0.6419 | 0.8570 | 0.8584 | 0.8570 | 0.8574 |
| 0.0454 | 17.99 | 360 | 0.6528 | 0.8563 | 0.8556 | 0.8563 | 0.8558 |
| 0.0417 | 18.99 | 380 | 0.6688 | 0.8578 | 0.8575 | 0.8578 | 0.8574 |
| 0.0432 | 19.99 | 400 | 0.6532 | 0.8632 | 0.8617 | 0.8632 | 0.8621 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1.dev0
- Tokenizers 0.13.1
|
thisisHJLee/wav2vec2-large-xls-r-300m-korean-w1 | thisisHJLee | 2022-10-20T08:00:16Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-20T05:38:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-korean-w1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-korean-w1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1406
- Cer: 0.0393
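
A minimal usage sketch (an assumption, not part of the original card) for transcribing Korean audio with this checkpoint:

```python
# Sketch only: transcribe a 16 kHz Korean audio file with the fine-tuned wav2vec2 model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="thisisHJLee/wav2vec2-large-xls-r-300m-korean-w1",
)
print(asr("sample_korean_audio.wav"))  # hypothetical audio path
```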
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 24.537 | 0.56 | 800 | 3.0461 | 0.9274 |
| 1.9309 | 1.13 | 1600 | 0.7723 | 0.2168 |
| 0.7595 | 1.69 | 2400 | 0.3197 | 0.0916 |
| 0.4338 | 2.26 | 3200 | 0.2051 | 0.0587 |
| 0.3067 | 2.82 | 4000 | 0.1406 | 0.0393 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
nayan06/binary-classifier-conversion-intent-1.1-l12 | nayan06 | 2022-10-20T07:05:09Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-18T11:34:15Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# SetFit Classification Model on the Conversion Dataset with an L12 SBERT Model as Base
This is a SetFit model that uses the L12 SBERT model as its base for classification.
<!--- Describe your model here -->
## Usage (Setfit)
```
pip install setfit
```
Then you can use the model like this:
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("nayan06/binary-classifier-conversion-intent-1.1-l12")
prediction = model(['i want to buy thing'])
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nayan06/binary-classifier-conversion-intent-1.1-l12)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2163 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2163,
"warmup_steps": 217,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Dataset Used
https://huggingface.co/datasets/nayan06/conversion1.0
## Citing & Authors
<!--- Describe where people can find more information --> |
debbiesoon/t5-small-T5_summarise | debbiesoon | 2022-10-20T06:09:39Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-20T05:53:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-T5_summarise
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-T5_summarise
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0384
- Rouge1: 15.9638
- Rouge2: 9.0883
- Rougel: 13.2968
- Rougelsum: 14.5007
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 4.2781 | 1.0 | 2 | 5.0384 | 15.9638 | 9.0883 | 13.2968 | 14.5007 | 19.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
nguyenkhoa2407/bert-base-cased-NER-favsbot | nguyenkhoa2407 | 2022-10-20T05:11:31Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:favsbot",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-08-23T15:57:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- favsbot
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased-NER-favsbot
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: favsbot
type: favsbot
config: default
split: train
args: default
metrics:
- name: Precision
type: precision
value: 0.8461538461538461
- name: Recall
type: recall
value: 0.88
- name: F1
type: f1
value: 0.8627450980392156
- name: Accuracy
type: accuracy
value: 0.9444444444444444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-NER-favsbot
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the favsbot dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1680
- Precision: 0.8462
- Recall: 0.88
- F1: 0.8627
- Accuracy: 0.9444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 7 | 1.8761 | 0.0 | 0.0 | 0.0 | 0.5833 |
| No log | 2.0 | 14 | 1.3530 | 0.0 | 0.0 | 0.0 | 0.5972 |
| No log | 3.0 | 21 | 1.0400 | 1.0 | 0.12 | 0.2143 | 0.6389 |
| No log | 4.0 | 28 | 0.7987 | 0.7895 | 0.6 | 0.6818 | 0.8194 |
| No log | 5.0 | 35 | 0.6055 | 0.85 | 0.68 | 0.7556 | 0.875 |
| No log | 6.0 | 42 | 0.4749 | 0.8696 | 0.8 | 0.8333 | 0.9167 |
| No log | 7.0 | 49 | 0.3838 | 0.84 | 0.84 | 0.8400 | 0.9444 |
| No log | 8.0 | 56 | 0.3084 | 0.88 | 0.88 | 0.88 | 0.9583 |
| No log | 9.0 | 63 | 0.2643 | 0.88 | 0.88 | 0.88 | 0.9583 |
| No log | 10.0 | 70 | 0.2360 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 11.0 | 77 | 0.2168 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 12.0 | 84 | 0.2031 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 13.0 | 91 | 0.1937 | 0.88 | 0.88 | 0.88 | 0.9583 |
| No log | 14.0 | 98 | 0.1853 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 15.0 | 105 | 0.1791 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 16.0 | 112 | 0.1757 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 17.0 | 119 | 0.1718 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 18.0 | 126 | 0.1698 | 0.8148 | 0.88 | 0.8462 | 0.9444 |
| No log | 19.0 | 133 | 0.1686 | 0.8148 | 0.88 | 0.8462 | 0.9444 |
| No log | 20.0 | 140 | 0.1680 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
debbiesoon/summarise_v6 | debbiesoon | 2022-10-20T04:32:42Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-16T20:04:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: summarise_v6
results: []
---
# summarise_v6
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0497
- Rouge2 Precision: 0.3109
- Rouge2 Recall: 0.406
- Rouge2 Fmeasure: 0.3375
## Model description
More information needed
## Intended uses & limitations
- `max_input_length = 3072`
- `max_output_length = 1000`
- `led.config.max_length = 1000`
- `led.config.min_length = 100`
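
A minimal usage sketch (not part of the original card; the generation arguments simply mirror the settings above):

```python
# Sketch only: run the fine-tuned LED checkpoint through the summarization pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="debbiesoon/summarise_v6")
document = "..."  # long input document, up to roughly 3072 tokens
print(summarizer(document, max_length=1000, min_length=100))
```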
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 1.7163 | 0.22 | 10 | 1.2307 | 0.1428 | 0.5118 | 0.2089 |
| 1.632 | 0.44 | 20 | 1.1337 | 0.36 | 0.3393 | 0.3181 |
| 1.0916 | 0.67 | 30 | 1.0738 | 0.2693 | 0.3487 | 0.2731 |
| 1.573 | 0.89 | 40 | 1.0497 | 0.3109 | 0.406 | 0.3375 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 1.2.1
- Tokenizers 0.12.1
|
debbiesoon/summarise | debbiesoon | 2022-10-20T04:12:19Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-16T03:34:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: summarise
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarise
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0497
- Rouge2 Precision: 0.3109
- Rouge2 Recall: 0.406
- Rouge2 Fmeasure: 0.3375
## Model description
More information needed
## Intended uses & limitations
- `max_input_length = 3072`
- `max_output_length = 1000`
- `led.config.max_length = 1000`
- `led.config.min_length = 100`
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 1.7163 | 0.22 | 10 | 1.2307 | 0.1428 | 0.5118 | 0.2089 |
| 1.632 | 0.44 | 20 | 1.1337 | 0.36 | 0.3393 | 0.3181 |
| 1.0916 | 0.67 | 30 | 1.0738 | 0.2693 | 0.3487 | 0.2731 |
| 1.573 | 0.89 | 40 | 1.0497 | 0.3109 | 0.406 | 0.3375 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 1.2.1
- Tokenizers 0.12.1
|
debbiesoon/longformer_summarise_large | debbiesoon | 2022-10-20T03:55:16Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"led",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-20T03:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: longformer_summarise_large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer_summarise_large
This model is a fine-tuned version of [patrickvonplaten/led-large-16384-pubmed](https://huggingface.co/patrickvonplaten/led-large-16384-pubmed) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 1.2.1
- Tokenizers 0.12.1
|
tomjam/bert-finetuned-ner | tomjam | 2022-10-20T01:48:18Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-20T00:48:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9352911896465903
- name: Recall
type: recall
value: 0.9486704813194211
- name: F1
type: f1
value: 0.9419333277633887
- name: Accuracy
type: accuracy
value: 0.9864455171601814
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0610
- Precision: 0.9353
- Recall: 0.9487
- F1: 0.9419
- Accuracy: 0.9864
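
A minimal usage sketch (not part of the original card) for running the fine-tuned model as an NER tagger:

```python
# Sketch only: token-classification pipeline with simple entity aggregation.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tomjam/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("My name is Wolfgang and I live in Berlin."))
```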
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0907 | 1.0 | 1756 | 0.0732 | 0.9188 | 0.9337 | 0.9262 | 0.9818 |
| 0.035 | 2.0 | 3512 | 0.0607 | 0.9280 | 0.9480 | 0.9379 | 0.9859 |
| 0.0169 | 3.0 | 5268 | 0.0610 | 0.9353 | 0.9487 | 0.9419 | 0.9864 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
vwxyzjn/BreakoutNoFrameskip-v4-dqn_atari-seed1 | vwxyzjn | 2022-10-20T00:34:56Z | 0 | 0 | null | [
"tensorboard",
"BreakoutNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-20T00:34:52Z | ---
tags:
- BreakoutNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BreakoutNoFrameskip-v4
type: BreakoutNoFrameskip-v4
metrics:
- type: mean_reward
value: 2.70 +/- 4.12
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **BreakoutNoFrameskip-v4**
This is a trained model of a DQN agent playing BreakoutNoFrameskip-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari.py).
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'env_id': 'BreakoutNoFrameskip-v4',
'exp_name': 'dqn_atari',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': '',
'learning_rate': 0.0001,
'learning_starts': 80000,
'save_model': True,
'seed': 1,
'start_e': 1,
'target_network_frequency': 1000,
'torch_deterministic': True,
'total_timesteps': 10000,
'track': False,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
MagoMerlot/PSFs_generated | MagoMerlot | 2022-10-19T23:46:03Z | 3 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-10-19T05:43:58Z |
---
language: en
tags:
- diffusers
license: mit
--- |
spencer-gable-cook/COVID-19_Misinformation_Detector | spencer-gable-cook | 2022-10-19T22:53:16Z | 20 | 1 | transformers | [
"transformers",
"pytorch",
"onnx",
"bert",
"text-classification",
"arxiv:2006.00885",
"doi:10.57967/hf/3925",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-18T20:23:56Z | ---
license: mit
---
Welcome to the COVID-19 Misinformation Detector!
There is a lot of misinformation related to the COVID-19 vaccine being posted online from unreliable sources. The COVID-19 Misinformation Detector allows you to check if the information you are reading online (e.g. from Twitter or Facebook) contains misinformation or not!
Enter the text from the online post in the "Hosted inference API" text area to the right to check if it is misinformation. "LABEL_0" means that no misinformation was detected in the post, while "LABEL_1" means that the post is misinformation.
The COVID-19 Misinformation Detector is a modified version of the "bert-base-uncased" transformer model, found [here](https://huggingface.co/bert-base-uncased). It is fine-tuned on two datasets containing tweets relating to the COVID-19 pandemic; each tweet is labelled as containing misinformation (1) or not (0), as verified by healthcare experts.
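
As an illustration (this snippet is not part of the original card), the detector can also be called programmatically through the standard text-classification pipeline:

```python
# Sketch only: LABEL_1 = misinformation detected, LABEL_0 = no misinformation detected.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="spencer-gable-cook/COVID-19_Misinformation_Detector",
)
print(detector("The COVID-19 vaccine changes your DNA."))  # hypothetical example post
```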
The datasets used are:
1. [ANTi-Vax: a novel Twitter dataset for COVID-19 vaccine misinformation detection](https://www.sciencedirect.com/science/article/pii/S0033350621004534)
2. [CoAID (Covid-19 HeAlthcare mIsinformation Dataset)](https://arxiv.org/abs/2006.00885)
For a more detailed explanation, check out the technical report [here](https://drive.google.com/file/d/1QW9D6TN4KXX6poa6Q5L6FVgqaDQ4DxY9/view?usp=sharing), and check out my literature review on transformers [here](https://drive.google.com/file/d/1d5tK3sUwYM1WBheOuNG9A7ZYri2zxdyw/view?usp=sharing)!
|
AokiDaiki/distilbert-base-uncased-finetuned-emotion | AokiDaiki | 2022-10-19T22:45:33Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-18T18:31:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9270524571534725
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2174
- Accuracy: 0.927
- F1: 0.9271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8148 | 1.0 | 250 | 0.3148 | 0.9 | 0.8967 |
| 0.2487 | 2.0 | 500 | 0.2174 | 0.927 | 0.9271 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
CavenLen/ddpm-Kaga-128 | CavenLen | 2022-10-19T22:03:31Z | 19 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:CavenLen/Kaga",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-10-17T12:48:44Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: CavenLen/Kaga
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-Kaga-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `CavenLen/Kaga` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch (assumption, not from the original card): sample one image
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("CavenLen/ddpm-Kaga-128")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/CavenLen/ddpm-Kaga-128/tensorboard?#scalars)
|
thucdangvan020999/marian-finetuned-kde4-en-to-fr | thucdangvan020999 | 2022-10-19T21:12:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-10-19T19:27:37Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.83113187001415
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8560
- Bleu: 52.8311
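
A minimal usage sketch (not part of the original card) for English-to-French translation with this checkpoint:

```python
# Sketch only: run the fine-tuned Marian model through the translation pipeline.
from transformers import pipeline

translator = pipeline(
    "translation_en_to_fr",
    model="thucdangvan020999/marian-finetuned-kde4-en-to-fr",
)
print(translator("Default to expanded threads"))
```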
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
mathislucka/tat-model | mathislucka | 2022-10-19T20:44:53Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-19T20:44:45Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# mathislucka/tat-model
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mathislucka/tat-model')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mathislucka/tat-model)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 39 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 250, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
mariolinml/deberta-v3-base_MNLI_10_19_v0 | mariolinml | 2022-10-19T20:07:22Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-19T15:57:15Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-base_MNLI_10_19_v0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base_MNLI_10_19_v0
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
rajesh426/distilbert-base-uncased_finetuned_SPEECH_TEXT_CH_2_DISPLAY | rajesh426 | 2022-10-19T19:38:11Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-19T19:31:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased_finetuned_SPEECH_TEXT_CH_2_DISPLAY
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_finetuned_SPEECH_TEXT_CH_2_DISPLAY
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0863
- Accuracy: 0.7368
- F1: 0.7114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0362 | 1.0 | 19 | 0.9281 | 0.5789 | 0.4964 |
| 0.9725 | 2.0 | 38 | 0.8906 | 0.6316 | 0.5707 |
| 0.8712 | 3.0 | 57 | 0.8080 | 0.6316 | 0.5889 |
| 0.6402 | 4.0 | 76 | 0.6386 | 0.7895 | 0.7474 |
| 0.4453 | 5.0 | 95 | 0.5401 | 0.7895 | 0.7485 |
| 0.2658 | 6.0 | 114 | 0.4999 | 0.8421 | 0.7990 |
| 0.1695 | 7.0 | 133 | 0.6248 | 0.7895 | 0.7427 |
| 0.0822 | 8.0 | 152 | 0.7391 | 0.7368 | 0.7114 |
| 0.0303 | 9.0 | 171 | 0.6665 | 0.7895 | 0.7485 |
| 0.016 | 10.0 | 190 | 0.8217 | 0.7368 | 0.7114 |
| 0.0103 | 11.0 | 209 | 0.8090 | 0.7368 | 0.7114 |
| 0.0083 | 12.0 | 228 | 0.8646 | 0.7368 | 0.7114 |
| 0.0068 | 13.0 | 247 | 0.9091 | 0.7368 | 0.7114 |
| 0.0059 | 14.0 | 266 | 0.8731 | 0.7368 | 0.7114 |
| 0.0049 | 15.0 | 285 | 0.9512 | 0.7368 | 0.7114 |
| 0.0048 | 16.0 | 304 | 0.9376 | 0.7368 | 0.7114 |
| 0.004 | 17.0 | 323 | 0.9507 | 0.7368 | 0.7114 |
| 0.0037 | 18.0 | 342 | 0.9868 | 0.7368 | 0.7114 |
| 0.0033 | 19.0 | 361 | 0.9862 | 0.7368 | 0.7114 |
| 0.0029 | 20.0 | 380 | 0.9733 | 0.7368 | 0.7114 |
| 0.0029 | 21.0 | 399 | 0.9747 | 0.7368 | 0.7114 |
| 0.0027 | 22.0 | 418 | 0.9998 | 0.7368 | 0.7114 |
| 0.0024 | 23.0 | 437 | 0.9984 | 0.7368 | 0.7114 |
| 0.0024 | 24.0 | 456 | 1.0270 | 0.7368 | 0.7114 |
| 0.0024 | 25.0 | 475 | 1.0083 | 0.7368 | 0.7114 |
| 0.0022 | 26.0 | 494 | 1.0167 | 0.7368 | 0.7114 |
| 0.0021 | 27.0 | 513 | 1.0273 | 0.7368 | 0.7114 |
| 0.002 | 28.0 | 532 | 1.0340 | 0.7368 | 0.7114 |
| 0.0021 | 29.0 | 551 | 1.0282 | 0.7368 | 0.7114 |
| 0.002 | 30.0 | 570 | 1.0372 | 0.7368 | 0.7114 |
| 0.0019 | 31.0 | 589 | 1.0593 | 0.7368 | 0.7114 |
| 0.0017 | 32.0 | 608 | 1.0841 | 0.7368 | 0.7114 |
| 0.0018 | 33.0 | 627 | 1.0920 | 0.7368 | 0.7114 |
| 0.0019 | 34.0 | 646 | 1.0943 | 0.7368 | 0.7114 |
| 0.0018 | 35.0 | 665 | 1.0883 | 0.7368 | 0.7114 |
| 0.0017 | 36.0 | 684 | 1.0864 | 0.7368 | 0.7114 |
| 0.0016 | 37.0 | 703 | 1.0890 | 0.7368 | 0.7114 |
| 0.0017 | 38.0 | 722 | 1.0894 | 0.7368 | 0.7114 |
| 0.0015 | 39.0 | 741 | 1.0867 | 0.7368 | 0.7114 |
| 0.0016 | 40.0 | 760 | 1.0863 | 0.7368 | 0.7114 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.2
- Datasets 2.5.2
- Tokenizers 0.12.1
|
api19750904/situaciones-turismo | api19750904 | 2022-10-19T17:59:42Z | 40 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-19T17:59:26Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: situaciones-turismo
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9101123809814453
---
# situaciones-turismo
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
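
A minimal usage sketch (not part of the original card) for classifying an image with this model:

```python
# Sketch only: image-classification pipeline over a local image file.
from transformers import pipeline

classifier = pipeline("image-classification", model="api19750904/situaciones-turismo")
print(classifier("beach_photo.jpg"))  # hypothetical image path
```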
## Example Images
#### people beach

#### people party

#### people restaurant

#### people walking
 |
api19750904/comida-vgm | api19750904 | 2022-10-19T16:54:30Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-19T16:54:16Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: comida-vgm
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9550561904907227
---
# comida-vgm
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### burguer

#### macarroni

#### pizza

#### spaguetti
 |
huggingtweets/konradha_ | huggingtweets | 2022-10-19T16:11:00Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-19T16:09:29Z | ---
language: en
thumbnail: http://www.huggingtweets.com/konradha_/1666195856134/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1540685336422088704/JDxiybNe_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Konrad</div>
<div style="text-align: center; font-size: 14px;">@konradha_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Konrad.
| Data | Konrad |
| --- | --- |
| Tweets downloaded | 256 |
| Retweets | 38 |
| Short tweets | 75 |
| Tweets kept | 143 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ox7i4yk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @konradha_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10k5hc9s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10k5hc9s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/konradha_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
gabski/sbert-relative-claim-quality | gabski | 2022-10-19T16:10:59Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-19T15:59:40Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Model
This [sentence-transformers](https://www.SBERT.net) model was obtained by fine-tuning bert-base-cased on the ClaimRev dataset.
Paper: [Learning From Revisions: Quality Assessment of Claims in Argumentation at Scale](https://aclanthology.org/2021.eacl-main.147/)
Authors: Gabriella Skitalinskaya, Jonas Klaff, Henning Wachsmuth
# Claim Quality Classification
We cast this task as a pairwise classification task, where the objective is to compare two versions of the same claim and determine which one is better. We train this model by fine-tuning SBERT based on bert-base-cased using a siamese network structure with softmax loss. Outputs can also be used to rank multiple versions of the same claim, for example, using [SVMRank](https://github.com/ds4dm/PySVMRank) or BTL (Bradley-Terry-Luce model).
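As a rough illustration of that ranking step (a simplified stand-in for SVMRank/BTL, not their actual implementations), the sketch below assumes you already have pairwise probabilities that version *i* is better than version *j* — e.g. from a pairwise claim-quality classifier — and ranks versions by their average win probability:
```python
import numpy as np

versions = ["revision A", "revision B", "revision C"]

# pairwise[i, j]: assumed probability that versions[i] is better than versions[j]
# (diagonal entries are self-comparisons and stay at 0.5)
pairwise = np.array([
    [0.5, 0.8, 0.6],
    [0.2, 0.5, 0.3],
    [0.4, 0.7, 0.5],
])

# Borda-style approximation of a BTL ranking: average win probability vs. the other versions
n = len(versions)
scores = (pairwise.sum(axis=1) - 0.5) / (n - 1)

for version, score in sorted(zip(versions, scores), key=lambda vs: vs[1], reverse=True):
    print(f"{score:.2f}  {version}")
```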
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('gabski/sbert-relative-claim-quality')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('gabski/sbert-relative-claim-quality')
model = AutoModel.from_pretrained('gabski/sbert-relative-claim-quality')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```bibtex
@inproceedings{skitalinskaya-etal-2021-learning,
title = "Learning From Revisions: Quality Assessment of Claims in Argumentation at Scale",
author = "Skitalinskaya, Gabriella and
Klaff, Jonas and
Wachsmuth, Henning",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-main.147",
doi = "10.18653/v1/2021.eacl-main.147",
pages = "1718--1729",
}
``` |
gabski/bert-relative-claim-quality | gabski | 2022-10-19T16:09:19Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:ClaimRev",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-19T14:04:22Z | ---
language: en
license: cc-by-nc-sa-4.0
datasets:
- ClaimRev
---
# Model
This model was obtained by fine-tuning bert-base-cased on the ClaimRev dataset.
Paper: [Learning From Revisions: Quality Assessment of Claims in Argumentation at Scale](https://aclanthology.org/2021.eacl-main.147/)
Authors: Gabriella Skitalinskaya, Jonas Klaff, Henning Wachsmuth
# Claim Quality Classification
We cast this task as a pairwise classification task, where the objective is to compare two versions of the same claim and determine which one is better.
# Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("gabski/bert-relative-claim-quality")
model = AutoModelForSequenceClassification.from_pretrained("gabski/bert-relative-claim-quality")
claim_1 = 'Smoking marijuana is less harmfull then smoking cigarettes.'
claim_2 = 'Smoking marijuana is less harmful than smoking cigarettes.'
model_input = tokenizer(claim_1,claim_2, return_tensors='pt')
model_outputs = model(**model_input)
outputs = torch.nn.functional.softmax(model_outputs.logits, dim = -1)
print(outputs)
```
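Continuing the snippet above, more than two versions can be ranked with a round-robin of pairwise comparisons; in this sketch the label mapping (index 0 = "the first claim is better") is an assumption — check the model's label configuration before relying on it:
```python
from itertools import combinations

versions = [
    'Smoking marijuana is less harmfull then smoking cigarettes.',
    'Smoking marijuana is less harmful than smoking cigarettes.',
    'Smoking weed is less harmful than smoking cigarettes.',
]

wins = {v: 0 for v in versions}
for a, b in combinations(versions, 2):
    pair_input = tokenizer(a, b, return_tensors='pt')
    probs = torch.nn.functional.softmax(model(**pair_input).logits, dim=-1)
    # assumption: index 0 means the first claim is better, index 1 the second
    if probs[0, 0] > probs[0, 1]:
        wins[a] += 1
    else:
        wins[b] += 1

for claim, n_wins in sorted(wins.items(), key=lambda kv: kv[1], reverse=True):
    print(n_wins, claim)
```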
|
rakamsata/anim | rakamsata | 2022-10-19T15:46:31Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
]
| null | 2022-10-19T15:46:31Z | ---
license: bigscience-openrail-m
---
|
smz2122/image | smz2122 | 2022-10-19T15:37:37Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-10-19T15:37:20Z | git clone https://huggingface.co/templates/text-to-image
cd text-to-image
git remote set-url origin https://huggingface.co/$YOUR_USER/$YOUR_REPO_NAME
git push --force |
enryu43/anifusion_unet | enryu43 | 2022-10-19T15:01:54Z | 15 | 6 | diffusers | [
"diffusers",
"diffusers:LDMTextToImagePipeline",
"region:us"
]
| null | 2022-10-11T21:01:02Z | This model is converted with https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_stable_diffusion_to_diffusers.py.
However, the tokenizer in the diffusers model is wrong; for proper usage, see the description at https://medium.com/@enryu9000/anifusion-diffusion-models-for-anime-pictures-138cf1af2cbe and the instructions/examples at https://github.com/enryu43/anifusion-stable-diffusion.
Also, the original checkpoint in the Latent Diffusion format is available.
Installation instructions for webui: https://gist.github.com/enryu43/858999bf69dc92b97fdad6137c3c45e6
|
bthomas/article2keyword2.2_barthez-orangesum-title_finetuned_for_mlm | bthomas | 2022-10-19T14:48:17Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"mlm",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-19T14:32:22Z | ---
license: apache-2.0
tags:
- mlm
- generated_from_trainer
model-index:
- name: article2keyword2.2_barthez-orangesum-title_finetuned_for_mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# article2keyword2.2_barthez-orangesum-title_finetuned_for_mlm
This model is a fine-tuned version of [moussaKam/barthez-orangesum-title](https://huggingface.co/moussaKam/barthez-orangesum-title) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
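For reference, the hyperparameters listed above map roughly onto a 🤗 `TrainingArguments` configuration like the following (a hedged sketch, not the actual training script; the output directory name is assumed):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="article2keyword2.2_barthez-orangesum-title_finetuned_for_mlm",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    fp16=True,  # "Native AMP" mixed precision
    # the Adam betas/epsilon listed above are the Trainer defaults, so no extra flags are needed
)
```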
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3187 | 1.0 | 1235 | 0.0545 |
| 0.0544 | 2.0 | 2470 | 0.0491 |
| 0.0461 | 3.0 | 3705 | 0.0463 |
| 0.042 | 4.0 | 4940 | 0.0452 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/moonideograph | huggingtweets | 2022-10-19T14:31:00Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-19T14:28:14Z | ---
language: en
thumbnail: http://www.huggingtweets.com/moonideograph/1666189855449/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1581258561400848384/ktYtGqLD_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🌑 Loona the Ninth</div>
<div style="text-align: center; font-size: 14px;">@moonideograph</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🌑 Loona the Ninth.
| Data | 🌑 Loona the Ninth |
| --- | --- |
| Tweets downloaded | 409 |
| Retweets | 104 |
| Short tweets | 22 |
| Tweets kept | 283 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/8mujtj4v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @moonideograph's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/21pia0le) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/21pia0le/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/moonideograph')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
facebook/xm_transformer_unity_hk-en | facebook | 2022-10-19T14:28:29Z | 39 | 7 | fairseq | [
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
]
| audio-to-audio | 2022-10-08T00:55:30Z | ---
license: cc-by-nc-4.0
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
datasets:
- MuST-C
- TAT
- Hokkien dramas
---
## xm_transformer_unity_hk-en
Speech-to-speech translation model with two-pass decoder (UnitY) from fairseq:
- Hokkien-English
- Trained with supervised data in TED, drama, [TAT](https://sites.google.com/speech.ntut.edu.tw/fsw/home/tat-corpus) domain, and weakly supervised data in drama domain. See [here](https://research.facebook.com/publications/hokkien-direct-speech-to-speech-translation)
for training details.
- Speech synthesis with [facebook/unit_hifigan_mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj_dur](https://huggingface.co/facebook/unit_hifigan_mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj_dur)
- [Project Page](https://github.com/facebookresearch/fairseq/tree/ust/examples/hokkien)
## Usage
```python
import json
import os
from pathlib import Path
import IPython.display as ipd
from fairseq import hub_utils
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.speech_to_text.hub_interface import S2THubInterface
from fairseq.models.text_to_speech import CodeHiFiGANVocoder
from fairseq.models.text_to_speech.hub_interface import VocoderHubInterface
from huggingface_hub import snapshot_download
import torchaudio
cache_dir = os.getenv("HUGGINGFACE_HUB_CACHE")
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/xm_transformer_unity_hk-en",
arg_overrides={"config_yaml": "config.yaml", "task": "speech_to_text"},
cache_dir=cache_dir,
)
model = models[0].cpu()  # the checkpoint loads as a one-model ensemble; keep it on CPU here
cfg["task"].cpu = True
generator = task.build_generator([model], cfg)
# requires 16000Hz mono channel audio
audio, _ = torchaudio.load("/path/to/an/audio/file")
sample = S2THubInterface.get_model_input(task, audio)
unit = S2THubInterface.get_prediction(task, model, generator, sample)
# speech synthesis
library_name = "fairseq"
cache_dir = (
cache_dir or (Path.home() / ".cache" / library_name).as_posix()
)
cache_dir = snapshot_download(
f"facebook/unit_hifigan_mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj_dur", cache_dir=cache_dir, library_name=library_name
)
x = hub_utils.from_pretrained(
cache_dir,
"model.pt",
".",
archive_map=CodeHiFiGANVocoder.hub_models(),
config_yaml="config.json",
fp16=False,
is_vocoder=True,
)
with open(f"{x['args']['data']}/config.json") as f:
vocoder_cfg = json.load(f)
assert (
len(x["args"]["model_path"]) == 1
), "Too many vocoder models in the input"
vocoder = CodeHiFiGANVocoder(x["args"]["model_path"][0], vocoder_cfg)
tts_model = VocoderHubInterface(vocoder_cfg, vocoder)
tts_sample = tts_model.get_model_input(unit)
wav, sr = tts_model.get_prediction(tts_sample)
ipd.Audio(wav, rate=sr)
``` |
mclarknc/ppo-LunarLander-v2 | mclarknc | 2022-10-19T14:03:08Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-19T14:02:48Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.18 +/- 23.61
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
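A minimal sketch of what that usage code might look like (the checkpoint filename inside the repo is an assumption — check the repo's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# download the checkpoint from the Hub (the filename is assumed)
checkpoint = load_from_hub(
    repo_id="mclarknc/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# roll out one episode with the classic Gym API
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```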
|
yuk/my-gothic-waifu-diffusion | yuk | 2022-10-19T13:35:02Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2022-10-19T13:35:02Z | ---
license: bigscience-bloom-rail-1.0
---
|
kjhanjee/autotrain-code_classification-1815762639 | kjhanjee | 2022-10-19T11:01:40Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:kjhanjee/autotrain-data-code_classification",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-19T10:56:20Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- kjhanjee/autotrain-data-code_classification
co2_eq_emissions:
emissions: 11.438220107218369
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1815762639
- CO2 Emissions (in grams): 11.4382
## Validation Metrics
- Loss: 0.849
- Accuracy: 0.794
- Macro F1: 0.788
- Micro F1: 0.794
- Weighted F1: 0.788
- Macro Precision: 0.797
- Micro Precision: 0.794
- Weighted Precision: 0.797
- Macro Recall: 0.794
- Micro Recall: 0.794
- Weighted Recall: 0.794
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/kjhanjee/autotrain-code_classification-1815762639
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("kjhanjee/autotrain-code_classification-1815762639", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kjhanjee/autotrain-code_classification-1815762639", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
TestZee/t5-small-baseline_summary_zee_v1.0 | TestZee | 2022-10-19T10:39:31Z | 9 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-19T10:25:30Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TestZee/t5-small-baseline_summary_zee_v1.0
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TestZee/t5-small-baseline_summary_zee_v1.0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.3722
- Validation Loss: 2.1596
- Train Rouge1: 21.6350
- Train Rouge2: 8.9453
- Train Rougel: 17.8649
- Train Rougelsum: 19.9099
- Train Gen Len: 19.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
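No usage example is given in the card; a hedged sketch with the summarization pipeline, loading the TensorFlow weights this repo ships:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="TestZee/t5-small-baseline_summary_zee_v1.0",
    framework="tf",  # this repo ships TensorFlow weights
)

article = "Long article text goes here..."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```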
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.3722 | 2.1596 | 21.6350 | 8.9453 | 17.8649 | 19.9099 | 19.0 | 0 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
gaioNL/LunarLander-v2 | gaioNL | 2022-10-19T09:43:37Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-19T09:10:05Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.56 +/- 28.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
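A minimal sketch of the missing usage code (the checkpoint filename is an assumption):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("gaioNL/LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)

# re-evaluate the agent over a few episodes
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```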
|
amichailidis/greek_legal_bert_v2-finetuned-ner-V2 | amichailidis | 2022-10-19T09:27:25Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-11T09:10:51Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: greek_legal_bert_v2-finetuned-ner-V3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# greek_legal_bert_v2-finetuned-ner-V3
This model is a fine-tuned version of [alexaapo/greek_legal_bert_v2](https://huggingface.co/alexaapo/greek_legal_bert_v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0907
- Precision: 0.9023
- Recall: 0.9265
- F1: 0.9142
- Accuracy: 0.9828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.19 | 25 | 0.0661 | 0.8895 | 0.9229 | 0.9059 | 0.9813 |
| No log | 2.38 | 50 | 0.0820 | 0.9091 | 0.9319 | 0.9204 | 0.9838 |
| No log | 3.57 | 75 | 0.0791 | 0.8924 | 0.9211 | 0.9065 | 0.9825 |
| No log | 4.76 | 100 | 0.0824 | 0.8950 | 0.9319 | 0.9131 | 0.9841 |
| No log | 5.95 | 125 | 0.0820 | 0.8830 | 0.9194 | 0.9008 | 0.9812 |
| No log | 7.14 | 150 | 0.0862 | 0.9059 | 0.9319 | 0.9187 | 0.9817 |
| No log | 8.33 | 175 | 0.0915 | 0.9021 | 0.9247 | 0.9133 | 0.9826 |
| No log | 9.52 | 200 | 0.0905 | 0.9023 | 0.9265 | 0.9142 | 0.9828 |
### Framework versions
- Transformers 4.23.0
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1 |
amichailidis/greek_legal_bert_v2-finetuned-ner | amichailidis | 2022-10-19T09:21:07Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-09-08T09:21:14Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: greek_legal_bert_v2-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# greek_legal_bert_v2-finetuned-ner
This model is a fine-tuned version of [alexaapo/greek_legal_bert_v2](https://huggingface.co/alexaapo/greek_legal_bert_v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0900
- Precision: 0.8424
- Recall: 0.8638
- F1: 0.8530
- Accuracy: 0.9775
## Model description
More information needed
## Intended uses & limitations
More information needed
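Although the card gives no usage snippet, the model should be usable with the standard token-classification pipeline; a hedged sketch (the Greek sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="amichailidis/greek_legal_bert_v2-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

text = "Με απόφαση του Υπουργού Οικονομικών συγκροτείται επιτροπή στην Αθήνα."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```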
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.64 | 250 | 0.0839 | 0.7859 | 0.8539 | 0.8185 | 0.9737 |
| 0.1127 | 1.29 | 500 | 0.0783 | 0.8092 | 0.8569 | 0.8324 | 0.9759 |
| 0.1127 | 1.93 | 750 | 0.0743 | 0.8284 | 0.8446 | 0.8364 | 0.9766 |
| 0.0538 | 2.58 | 1000 | 0.0816 | 0.8243 | 0.8597 | 0.8416 | 0.9774 |
| 0.0538 | 3.22 | 1250 | 0.0900 | 0.8424 | 0.8638 | 0.8530 | 0.9776 |
| 0.0346 | 3.87 | 1500 | 0.0890 | 0.8401 | 0.8597 | 0.8498 | 0.9770 |
| 0.0346 | 4.51 | 1750 | 0.0964 | 0.8342 | 0.8576 | 0.8457 | 0.9768 |
| 0.0233 | 5.15 | 2000 | 0.1094 | 0.8336 | 0.8645 | 0.8488 | 0.9768 |
| 0.0233 | 5.8 | 2250 | 0.1110 | 0.8456 | 0.8549 | 0.8502 | 0.9777 |
| 0.0161 | 6.44 | 2500 | 0.1224 | 0.8408 | 0.8535 | 0.8471 | 0.9769 |
| 0.0161 | 7.09 | 2750 | 0.1281 | 0.8347 | 0.8624 | 0.8483 | 0.9770 |
| 0.0114 | 7.73 | 3000 | 0.1268 | 0.8397 | 0.8573 | 0.8484 | 0.9773 |
| 0.0114 | 8.38 | 3250 | 0.1308 | 0.8388 | 0.8549 | 0.8468 | 0.9771 |
| 0.0088 | 9.02 | 3500 | 0.1301 | 0.8412 | 0.8559 | 0.8485 | 0.9772 |
| 0.0088 | 9.66 | 3750 | 0.1368 | 0.8396 | 0.8604 | 0.8499 | 0.9772 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pcoloc/autotrain-only-rssi-1813762559 | pcoloc | 2022-10-19T08:57:26Z | 7 | 0 | transformers | [
"transformers",
"joblib",
"autotrain",
"tabular",
"regression",
"tabular-regression",
"dataset:pcoloc/autotrain-data-only-rssi",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| tabular-regression | 2022-10-19T08:55:40Z | ---
tags:
- autotrain
- tabular
- regression
- tabular-regression
datasets:
- pcoloc/autotrain-data-only-rssi
co2_eq_emissions:
emissions: 1.3554114117578944
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 1813762559
- CO2 Emissions (in grams): 1.3554
## Validation Metrics
- Loss: 83.432
- R2: 0.312
- MSE: 6960.888
- MAE: 60.449
- RMSLE: 0.532
## Usage
```python
import json
import joblib
import pandas as pd
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']
data = pd.read_csv("data.csv")  # load the rows to score
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]
predictions = model.predict(data)  # regression model: predict() returns the estimated target values
``` |
mriggs/byt5-small-finetuned-1epoch-batch16-opus_books-en-to-it | mriggs | 2022-10-19T08:42:40Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-19T07:20:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
model-index:
- name: byt5-small-finetuned-1epoch-batch16-opus_books-en-to-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-small-finetuned-1epoch-batch16-opus_books-en-to-it
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9848
## Model description
More information needed
## Intended uses & limitations
More information needed
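The card has no inference example; a hedged sketch using the text2text pipeline (ByT5 works directly on raw UTF-8 bytes, and whether a "translate English to Italian:" prefix was used during fine-tuning isn't stated, so the raw-input call below is an assumption):
```python
from transformers import pipeline

translator = pipeline(
    "text2text-generation",
    model="mriggs/byt5-small-finetuned-1epoch-batch16-opus_books-en-to-it",
)

print(translator("What a beautiful morning!", max_length=128)[0]["generated_text"])
```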
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3771 | 1.0 | 1819 | 0.9848 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
amichailidis/bert-base-greek-uncased-v1-finetuned-ner | amichailidis | 2022-10-19T08:32:01Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-19T08:00:16Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-greek-uncased-v1-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-greek-uncased-v1-finetuned-ner
This model is a fine-tuned version of [nlpaueb/bert-base-greek-uncased-v1](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1052
- Precision: 0.8440
- Recall: 0.8566
- F1: 0.8503
- Accuracy: 0.9768
## Model description
More information needed
## Intended uses & limitations
More information needed
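The card lacks a usage example; a minimal sketch using the model classes directly (the Greek sentence is illustrative):
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "amichailidis/bert-base-greek-uncased-v1-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = "Η εταιρεία υπέγραψε σύμβαση με το ελληνικό Δημόσιο."  # illustrative sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# print the predicted tag for each word piece
for token_id, pred in zip(inputs["input_ids"][0], logits.argmax(dim=-1)[0]):
    token = tokenizer.convert_ids_to_tokens(token_id.item())
    print(token, model.config.id2label[pred.item()])
```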
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.64 | 250 | 0.0913 | 0.7814 | 0.8208 | 0.8073 | 0.9728 |
| 0.1144 | 1.29 | 500 | 0.0823 | 0.7940 | 0.8448 | 0.8342 | 0.9755 |
| 0.1144 | 1.93 | 750 | 0.0812 | 0.8057 | 0.8212 | 0.8328 | 0.9751 |
| 0.0570 | 2.58 | 1000 | 0.0855 | 0.8244 | 0.8514 | 0.8292 | 0.9744 |
| 0.0570 | 3.22 | 1250 | 0.0926 | 0.8329 | 0.8441 | 0.8397 | 0.9760 |
| 0.0393 | 3.87 | 1500 | 0.0869 | 0.8256 | 0.8633 | 0.8440 | 0.9774 |
| 0.0393 | 4.51 | 1750 | 0.1049 | 0.8290 | 0.8636 | 0.8459 | 0.9766 |
| 0.026 | 5.15 | 2000 | 0.1093 | 0.8440 | 0.8566 | 0.8503 | 0.9768 |
| 0.026 | 5.8 | 2250 | 0.1172 | 0.8301 | 0.8514 | 0.8406 | 0.9760 |
| 0.0189 | 6.44 | 2500 | 0.1273 | 0.8238 | 0.8688 | 0.8457 | 0.9766 |
| 0.0189 | 7.09 | 2750 | 0.1246 | 0.8350 | 0.8539 | 0.8443 | 0.9764 |
| 0.0148 | 7.73 | 3000 | 0.1262 | 0.8333 | 0.8608 | 0.8468 | 0.9764 |
| 0.0148 | 8.38 | 3250 | 0.1347 | 0.8319 | 0.8591 | 0.8453 | 0.9762 |
| 0.0010 | 9.02 | 3500 | 0.1325 | 0.8376 | 0.8504 | 0.8439 | 0.9766 |
| 0.0010 | 9.66 | 3750 | 0.1362 | 0.8371 | 0.8563 | 0.8466 | 0.9765 |
### Framework versions
- Transformers 4.22.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
crumb/eva-model-ckpt | crumb | 2022-10-19T08:11:22Z | 0 | 2 | null | [
"region:us"
]
| null | 2022-10-19T04:45:12Z | Storage for eva models; it holds intermediate, low-performing checkpoints. |