modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-15 06:27:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 521 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-15 06:27:26) | card (string, length 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---
jordyvl/vit-small_tobacco3482_simkd_CEKD_tNone_aNone_tNone_gNone | jordyvl | 2023-07-11T21:05:03Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-07-10T22:41:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_simkd_CEKD_tNone_aNone_tNone_gNone
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_simkd_CEKD_tNone_aNone_tNone_gNone
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0379
- Accuracy: 0.8
- Brier Loss: 0.6938
- Nll: 1.3290
- F1 Micro: 0.8000
- F1 Macro: 0.7859
- Ece: 0.5869
- Aurc: 0.0931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
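For readers who want to reproduce this setup with the `Trainer` API, the hyperparameters above map roughly onto the following `TrainingArguments` (a sketch; `output_dir` is a placeholder, and anything not listed above is left at its default):
```python
from transformers import TrainingArguments

# Sketch mirroring the listed hyperparameters; Adam betas/epsilon match the Trainer defaults.
training_args = TrainingArguments(
    output_dir="vit-small_tobacco3482_simkd_CEKD",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
)
```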
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 0.0506 | 0.09 | 0.8991 | 6.5155 | 0.09 | 0.0484 | 0.1622 | 0.8986 |
| No log | 2.0 | 50 | 0.0468 | 0.22 | 0.8982 | 4.6950 | 0.22 | 0.1025 | 0.2491 | 0.7656 |
| No log | 3.0 | 75 | 0.0463 | 0.29 | 0.8969 | 3.3099 | 0.29 | 0.1676 | 0.2924 | 0.6888 |
| No log | 4.0 | 100 | 0.0459 | 0.37 | 0.8954 | 3.2920 | 0.37 | 0.1891 | 0.3517 | 0.4208 |
| No log | 5.0 | 125 | 0.0455 | 0.395 | 0.8929 | 3.2550 | 0.395 | 0.2299 | 0.3759 | 0.3617 |
| No log | 6.0 | 150 | 0.0449 | 0.49 | 0.8885 | 2.9109 | 0.49 | 0.3135 | 0.4396 | 0.2804 |
| No log | 7.0 | 175 | 0.0441 | 0.495 | 0.8796 | 2.8950 | 0.495 | 0.3248 | 0.4360 | 0.2721 |
| No log | 8.0 | 200 | 0.0430 | 0.545 | 0.8619 | 2.5199 | 0.545 | 0.3771 | 0.4777 | 0.2129 |
| No log | 9.0 | 225 | 0.0418 | 0.62 | 0.8382 | 2.2126 | 0.62 | 0.4291 | 0.5298 | 0.1659 |
| No log | 10.0 | 250 | 0.0409 | 0.645 | 0.8137 | 2.2525 | 0.645 | 0.4947 | 0.5293 | 0.1552 |
| No log | 11.0 | 275 | 0.0401 | 0.68 | 0.7863 | 2.4423 | 0.68 | 0.5145 | 0.5433 | 0.1215 |
| No log | 12.0 | 300 | 0.0392 | 0.68 | 0.7628 | 1.9779 | 0.68 | 0.5373 | 0.5402 | 0.1172 |
| No log | 13.0 | 325 | 0.0385 | 0.745 | 0.7350 | 1.8986 | 0.745 | 0.6126 | 0.5806 | 0.0843 |
| No log | 14.0 | 350 | 0.0384 | 0.735 | 0.7268 | 1.9922 | 0.735 | 0.6451 | 0.5466 | 0.0997 |
| No log | 15.0 | 375 | 0.0381 | 0.745 | 0.7180 | 1.6965 | 0.745 | 0.6627 | 0.5586 | 0.0761 |
| No log | 16.0 | 400 | 0.0377 | 0.805 | 0.7031 | 1.2564 | 0.805 | 0.7353 | 0.6034 | 0.0713 |
| No log | 17.0 | 425 | 0.0389 | 0.745 | 0.7303 | 1.5063 | 0.745 | 0.7192 | 0.5779 | 0.0705 |
| No log | 18.0 | 450 | 0.0387 | 0.765 | 0.7219 | 1.5776 | 0.765 | 0.7703 | 0.5815 | 0.0923 |
| No log | 19.0 | 475 | 0.0383 | 0.805 | 0.7213 | 1.3953 | 0.805 | 0.7906 | 0.6159 | 0.0667 |
| 0.0432 | 20.0 | 500 | 0.0377 | 0.835 | 0.6952 | 1.3075 | 0.835 | 0.8271 | 0.6116 | 0.0799 |
| 0.0432 | 21.0 | 525 | 0.0381 | 0.795 | 0.7018 | 1.6184 | 0.795 | 0.7723 | 0.5851 | 0.0880 |
| 0.0432 | 22.0 | 550 | 0.0378 | 0.81 | 0.6984 | 1.4292 | 0.81 | 0.7950 | 0.6103 | 0.0673 |
| 0.0432 | 23.0 | 575 | 0.0380 | 0.805 | 0.6976 | 1.4852 | 0.805 | 0.7951 | 0.5942 | 0.0808 |
| 0.0432 | 24.0 | 600 | 0.0377 | 0.825 | 0.6907 | 1.4501 | 0.825 | 0.8103 | 0.6020 | 0.0774 |
| 0.0432 | 25.0 | 625 | 0.0377 | 0.83 | 0.6920 | 1.4509 | 0.83 | 0.8148 | 0.6038 | 0.0759 |
| 0.0432 | 26.0 | 650 | 0.0377 | 0.825 | 0.6927 | 1.4113 | 0.825 | 0.8114 | 0.6072 | 0.0765 |
| 0.0432 | 27.0 | 675 | 0.0377 | 0.825 | 0.6924 | 1.4044 | 0.825 | 0.8114 | 0.6057 | 0.0757 |
| 0.0432 | 28.0 | 700 | 0.0377 | 0.82 | 0.6932 | 1.4521 | 0.82 | 0.8061 | 0.6017 | 0.0815 |
| 0.0432 | 29.0 | 725 | 0.0377 | 0.82 | 0.6932 | 1.3593 | 0.82 | 0.8080 | 0.5983 | 0.0794 |
| 0.0432 | 30.0 | 750 | 0.0377 | 0.82 | 0.6926 | 1.3437 | 0.82 | 0.8069 | 0.6042 | 0.0819 |
| 0.0432 | 31.0 | 775 | 0.0377 | 0.815 | 0.6932 | 1.3453 | 0.815 | 0.8027 | 0.5988 | 0.0815 |
| 0.0432 | 32.0 | 800 | 0.0377 | 0.82 | 0.6930 | 1.3384 | 0.82 | 0.8029 | 0.6044 | 0.0855 |
| 0.0432 | 33.0 | 825 | 0.0377 | 0.81 | 0.6928 | 1.3969 | 0.81 | 0.7927 | 0.5929 | 0.0835 |
| 0.0432 | 34.0 | 850 | 0.0378 | 0.805 | 0.6927 | 1.3995 | 0.805 | 0.7886 | 0.5961 | 0.0855 |
| 0.0432 | 35.0 | 875 | 0.0377 | 0.81 | 0.6927 | 1.3705 | 0.81 | 0.7979 | 0.5910 | 0.0887 |
| 0.0432 | 36.0 | 900 | 0.0378 | 0.805 | 0.6930 | 1.3566 | 0.805 | 0.7886 | 0.5850 | 0.0817 |
| 0.0432 | 37.0 | 925 | 0.0377 | 0.82 | 0.6927 | 1.3537 | 0.82 | 0.8022 | 0.5936 | 0.0847 |
| 0.0432 | 38.0 | 950 | 0.0377 | 0.815 | 0.6930 | 1.3574 | 0.815 | 0.7978 | 0.5976 | 0.0854 |
| 0.0432 | 39.0 | 975 | 0.0377 | 0.815 | 0.6932 | 1.4599 | 0.815 | 0.7978 | 0.5955 | 0.0864 |
| 0.035 | 40.0 | 1000 | 0.0377 | 0.815 | 0.6926 | 1.4147 | 0.815 | 0.7978 | 0.5990 | 0.0869 |
| 0.035 | 41.0 | 1025 | 0.0377 | 0.81 | 0.6931 | 1.4065 | 0.81 | 0.7943 | 0.5966 | 0.0844 |
| 0.035 | 42.0 | 1050 | 0.0378 | 0.81 | 0.6929 | 1.4678 | 0.81 | 0.7961 | 0.5902 | 0.0891 |
| 0.035 | 43.0 | 1075 | 0.0378 | 0.81 | 0.6927 | 1.4164 | 0.81 | 0.7971 | 0.5951 | 0.0897 |
| 0.035 | 44.0 | 1100 | 0.0378 | 0.81 | 0.6930 | 1.4646 | 0.81 | 0.7961 | 0.5948 | 0.0875 |
| 0.035 | 45.0 | 1125 | 0.0378 | 0.815 | 0.6921 | 1.4660 | 0.815 | 0.8004 | 0.6024 | 0.0895 |
| 0.035 | 46.0 | 1150 | 0.0378 | 0.81 | 0.6929 | 1.4098 | 0.81 | 0.7961 | 0.5987 | 0.0831 |
| 0.035 | 47.0 | 1175 | 0.0378 | 0.815 | 0.6928 | 1.4634 | 0.815 | 0.8004 | 0.5963 | 0.0911 |
| 0.035 | 48.0 | 1200 | 0.0378 | 0.81 | 0.6932 | 1.4648 | 0.81 | 0.7961 | 0.5841 | 0.0877 |
| 0.035 | 49.0 | 1225 | 0.0378 | 0.81 | 0.6928 | 1.4635 | 0.81 | 0.7961 | 0.5955 | 0.0898 |
| 0.035 | 50.0 | 1250 | 0.0378 | 0.805 | 0.6935 | 1.4688 | 0.805 | 0.7882 | 0.5795 | 0.0902 |
| 0.035 | 51.0 | 1275 | 0.0378 | 0.805 | 0.6928 | 1.4665 | 0.805 | 0.7882 | 0.5848 | 0.0916 |
| 0.035 | 52.0 | 1300 | 0.0378 | 0.81 | 0.6925 | 1.4249 | 0.81 | 0.7961 | 0.5869 | 0.0926 |
| 0.035 | 53.0 | 1325 | 0.0378 | 0.815 | 0.6926 | 1.4150 | 0.815 | 0.8021 | 0.5934 | 0.0913 |
| 0.035 | 54.0 | 1350 | 0.0378 | 0.81 | 0.6929 | 1.4155 | 0.81 | 0.7961 | 0.5943 | 0.0913 |
| 0.035 | 55.0 | 1375 | 0.0378 | 0.805 | 0.6928 | 1.4141 | 0.805 | 0.7882 | 0.5934 | 0.0964 |
| 0.035 | 56.0 | 1400 | 0.0378 | 0.805 | 0.6930 | 1.4124 | 0.805 | 0.7882 | 0.5926 | 0.0958 |
| 0.035 | 57.0 | 1425 | 0.0378 | 0.81 | 0.6935 | 1.4116 | 0.81 | 0.7934 | 0.6002 | 0.0895 |
| 0.035 | 58.0 | 1450 | 0.0378 | 0.805 | 0.6928 | 1.4059 | 0.805 | 0.7882 | 0.5890 | 0.0937 |
| 0.035 | 59.0 | 1475 | 0.0378 | 0.805 | 0.6929 | 1.4141 | 0.805 | 0.7882 | 0.5918 | 0.0967 |
| 0.0348 | 60.0 | 1500 | 0.0378 | 0.81 | 0.6935 | 1.4086 | 0.81 | 0.7934 | 0.5915 | 0.0934 |
| 0.0348 | 61.0 | 1525 | 0.0378 | 0.81 | 0.6930 | 1.4105 | 0.81 | 0.7941 | 0.5954 | 0.0961 |
| 0.0348 | 62.0 | 1550 | 0.0378 | 0.81 | 0.6933 | 1.4166 | 0.81 | 0.7941 | 0.5889 | 0.0954 |
| 0.0348 | 63.0 | 1575 | 0.0378 | 0.81 | 0.6933 | 1.4109 | 0.81 | 0.7934 | 0.5963 | 0.0975 |
| 0.0348 | 64.0 | 1600 | 0.0378 | 0.81 | 0.6932 | 1.4131 | 0.81 | 0.7934 | 0.5980 | 0.0953 |
| 0.0348 | 65.0 | 1625 | 0.0378 | 0.81 | 0.6937 | 1.4182 | 0.81 | 0.7934 | 0.5956 | 0.0970 |
| 0.0348 | 66.0 | 1650 | 0.0378 | 0.805 | 0.6933 | 1.4125 | 0.805 | 0.7893 | 0.5905 | 0.0966 |
| 0.0348 | 67.0 | 1675 | 0.0378 | 0.81 | 0.6937 | 1.4136 | 0.81 | 0.7934 | 0.5965 | 0.0975 |
| 0.0348 | 68.0 | 1700 | 0.0379 | 0.81 | 0.6935 | 1.4137 | 0.81 | 0.7934 | 0.5994 | 0.0971 |
| 0.0348 | 69.0 | 1725 | 0.0378 | 0.805 | 0.6935 | 1.4196 | 0.805 | 0.7893 | 0.5913 | 0.0946 |
| 0.0348 | 70.0 | 1750 | 0.0379 | 0.805 | 0.6933 | 1.4129 | 0.805 | 0.7893 | 0.5877 | 0.0945 |
| 0.0348 | 71.0 | 1775 | 0.0379 | 0.805 | 0.6933 | 1.4172 | 0.805 | 0.7893 | 0.5921 | 0.0951 |
| 0.0348 | 72.0 | 1800 | 0.0379 | 0.805 | 0.6931 | 1.4136 | 0.805 | 0.7893 | 0.5851 | 0.0953 |
| 0.0348 | 73.0 | 1825 | 0.0379 | 0.805 | 0.6929 | 1.4168 | 0.805 | 0.7893 | 0.5846 | 0.0971 |
| 0.0348 | 74.0 | 1850 | 0.0379 | 0.805 | 0.6939 | 1.4185 | 0.805 | 0.7893 | 0.5892 | 0.0950 |
| 0.0348 | 75.0 | 1875 | 0.0379 | 0.805 | 0.6935 | 1.4171 | 0.805 | 0.7893 | 0.5946 | 0.0938 |
| 0.0348 | 76.0 | 1900 | 0.0379 | 0.805 | 0.6934 | 1.4217 | 0.805 | 0.7893 | 0.5939 | 0.0959 |
| 0.0348 | 77.0 | 1925 | 0.0379 | 0.8 | 0.6932 | 1.4162 | 0.8000 | 0.7859 | 0.5826 | 0.0954 |
| 0.0348 | 78.0 | 1950 | 0.0379 | 0.8 | 0.6935 | 1.4172 | 0.8000 | 0.7859 | 0.5912 | 0.0950 |
| 0.0348 | 79.0 | 1975 | 0.0379 | 0.8 | 0.6933 | 1.4169 | 0.8000 | 0.7859 | 0.5885 | 0.0964 |
| 0.0348 | 80.0 | 2000 | 0.0379 | 0.8 | 0.6935 | 1.4196 | 0.8000 | 0.7859 | 0.5865 | 0.0957 |
| 0.0348 | 81.0 | 2025 | 0.0379 | 0.8 | 0.6937 | 1.4213 | 0.8000 | 0.7859 | 0.5880 | 0.0962 |
| 0.0348 | 82.0 | 2050 | 0.0379 | 0.8 | 0.6939 | 1.4201 | 0.8000 | 0.7859 | 0.5910 | 0.0962 |
| 0.0348 | 83.0 | 2075 | 0.0379 | 0.8 | 0.6938 | 1.3762 | 0.8000 | 0.7859 | 0.5883 | 0.0945 |
| 0.0348 | 84.0 | 2100 | 0.0379 | 0.8 | 0.6938 | 1.4218 | 0.8000 | 0.7859 | 0.5947 | 0.0950 |
| 0.0348 | 85.0 | 2125 | 0.0379 | 0.8 | 0.6935 | 1.3657 | 0.8000 | 0.7859 | 0.5857 | 0.0912 |
| 0.0348 | 86.0 | 2150 | 0.0379 | 0.8 | 0.6938 | 1.3278 | 0.8000 | 0.7859 | 0.5892 | 0.0929 |
| 0.0348 | 87.0 | 2175 | 0.0379 | 0.8 | 0.6938 | 1.3831 | 0.8000 | 0.7859 | 0.5856 | 0.0946 |
| 0.0348 | 88.0 | 2200 | 0.0379 | 0.8 | 0.6938 | 1.3761 | 0.8000 | 0.7859 | 0.5892 | 0.0955 |
| 0.0348 | 89.0 | 2225 | 0.0379 | 0.8 | 0.6938 | 1.3296 | 0.8000 | 0.7859 | 0.5870 | 0.0947 |
| 0.0348 | 90.0 | 2250 | 0.0379 | 0.8 | 0.6939 | 1.3667 | 0.8000 | 0.7859 | 0.5909 | 0.0926 |
| 0.0348 | 91.0 | 2275 | 0.0379 | 0.8 | 0.6940 | 1.3346 | 0.8000 | 0.7859 | 0.5906 | 0.0930 |
| 0.0348 | 92.0 | 2300 | 0.0379 | 0.8 | 0.6938 | 1.3268 | 0.8000 | 0.7859 | 0.5870 | 0.0936 |
| 0.0348 | 93.0 | 2325 | 0.0379 | 0.8 | 0.6937 | 1.3320 | 0.8000 | 0.7859 | 0.5919 | 0.0939 |
| 0.0348 | 94.0 | 2350 | 0.0379 | 0.8 | 0.6939 | 1.3324 | 0.8000 | 0.7859 | 0.5870 | 0.0928 |
| 0.0348 | 95.0 | 2375 | 0.0379 | 0.8 | 0.6937 | 1.3289 | 0.8000 | 0.7859 | 0.5869 | 0.0932 |
| 0.0348 | 96.0 | 2400 | 0.0379 | 0.8 | 0.6938 | 1.3264 | 0.8000 | 0.7859 | 0.5869 | 0.0931 |
| 0.0348 | 97.0 | 2425 | 0.0379 | 0.8 | 0.6938 | 1.3280 | 0.8000 | 0.7859 | 0.5870 | 0.0932 |
| 0.0348 | 98.0 | 2450 | 0.0379 | 0.8 | 0.6938 | 1.3297 | 0.8000 | 0.7859 | 0.5869 | 0.0930 |
| 0.0348 | 99.0 | 2475 | 0.0379 | 0.8 | 0.6938 | 1.3304 | 0.8000 | 0.7859 | 0.5869 | 0.0929 |
| 0.0347 | 100.0 | 2500 | 0.0379 | 0.8 | 0.6938 | 1.3290 | 0.8000 | 0.7859 | 0.5869 | 0.0931 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
davidmunechika/coreml-genshin-landscape-diffusion | davidmunechika | 2023-07-11T21:01:14Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-06-30T17:28:52Z | ---
license: creativeml-openrail-m
---
|
carova/crazytaxi | carova | 2023-07-11T21:00:55Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T21:00:53Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: crazytaxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="carova/crazytaxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
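`load_from_hub` above is not a library import; in the Deep RL course it is a small helper defined in the notebook. A minimal, self-contained sketch under that assumption (the pickle is expected to hold a dict with at least an `env_id` key, as the snippet above implies):
```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model from the Hub and return the stored dict."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="carova/crazytaxi", filename="q-learning.pkl")
env = gym.make(model["env_id"])  # Taxi-v3 here; pass extra kwargs (e.g. is_slippery=False) where relevant
```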
|
Finnfalter/ppo-LunarLander-v2 | Finnfalter | 2023-07-11T20:46:31Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T20:46:10Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.83 +/- 16.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
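The usage section is left as a TODO above; a minimal sketch for loading and evaluating this checkpoint follows. The `filename` is an assumption about how the zip is named inside the repo, and `gymnasium` assumes a Stable-Baselines3 version (2.x) that uses Gymnasium:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the saved model from the Hub (filename is an assumption; check the repo's file list).
checkpoint = load_from_hub(repo_id="Finnfalter/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```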
|
ALM-AHME/beit-large-patch16-224-finetuned-LungCancer-Classification-LC25000-AH-40-30-30 | ALM-AHME | 2023-07-11T20:46:08Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-07-11T12:39:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-large-patch16-224-finetuned-LungCancer-Classification-LC25000-AH-40-30-30
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Augmented-Final
split: train
args: Augmented-Final
metrics:
- name: Accuracy
type: accuracy
value: 0.9805094130675526
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-large-patch16-224-finetuned-LungCancer-Classification-LC25000-AH-40-30-30
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0474
- Accuracy: 0.9805
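A minimal inference sketch for this checkpoint (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ALM-AHME/beit-large-patch16-224-finetuned-LungCancer-Classification-LC25000-AH-40-30-30",
)
# Placeholder path; a URL or a PIL.Image also works.
print(classifier("path/to/histopathology_image.jpg"))
```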
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2312 | 0.99 | 93 | 0.1822 | 0.9453 |
| 0.3817 | 1.99 | 187 | 0.2106 | 0.9183 |
| 0.2217 | 3.0 | 281 | 0.1902 | 0.9285 |
| 0.1667 | 4.0 | 375 | 0.1127 | 0.9584 |
| 0.0572 | 4.96 | 465 | 0.0474 | 0.9805 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sl8425/troubleshooting_steps_mobility | sl8425 | 2023-07-11T20:29:16Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-11T18:10:40Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: sl8425/troubleshooting_steps_mobility
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sl8425/troubleshooting_steps_mobility
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4711
- Validation Loss: 0.5176
- Train Accuracy: 0.8332
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 537, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
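The serialized optimizer config above corresponds roughly to the following Keras objects (a sketch, not the exact training script):
```python
import tensorflow as tf

# Reconstruct the learning-rate schedule and optimizer described by the config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=537,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False
)
```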
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.2549 | 0.7250 | 0.7922 | 0 |
| 0.6109 | 0.5607 | 0.8284 | 1 |
| 0.4711 | 0.5176 | 0.8332 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Chickenfish/Daytechillvae | Chickenfish | 2023-07-11T20:27:55Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-07-11T20:26:26Z | ---
license: creativeml-openrail-m
---
|
daesok/distilbert-base-uncased-finetuned-emotion | daesok | 2023-07-11T20:19:00Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-11T15:56:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9254716845551784
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2141
- Accuracy: 0.925
- F1: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8137 | 1.0 | 250 | 0.3045 | 0.9045 | 0.9024 |
| 0.2458 | 2.0 | 500 | 0.2141 | 0.925 | 0.9255 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.9.0+cu111
- Datasets 2.12.0
- Tokenizers 0.11.0
|
torresflo/Poke-Model | torresflo | 2023-07-11T20:15:41Z | 240 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"vision",
"Pokémon",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-01-27T23:58:24Z | ---
license: gpl-3.0
tags:
- vision
- image-classification
- Pokémon
widget:
- src: https://huggingface.co/torresflo/Poke-Model/resolve/main/examples/1.jpg
example_title: Bulbasaur
- src: https://huggingface.co/torresflo/Poke-Model/resolve/main/examples/2.jpg
example_title: Charizard
- src: https://huggingface.co/torresflo/Poke-Model/resolve/main/examples/3.jpg
example_title: Blastoise
---
# Poké Model
Poké Model is a Pokémon classifier created to be used with [Pokédex AI](https://github.com/torresflo/Pokedex-AI). It is a fine-tuned version of google/vit-base-patch16-224 that classifies first-generation Pokémon.
More information on how to generate and use the model can be found in this [dedicated repository](https://github.com/torresflo/Poke-Model).
## License
Distributed under the GNU General Public License v3.0. See [here](https://www.gnu.org/licenses/gpl-3.0.en.html) for more information.
|
autopilot-ai/EthicalEye | autopilot-ai | 2023-07-11T20:11:30Z | 269 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"en",
"fr",
"hi",
"gu",
"bn",
"ml",
"mr",
"pa",
"it",
"es",
"kn",
"as",
"af",
"ru",
"ro",
"sq",
"ar",
"am",
"az",
"bs",
"bh",
"bg",
"bo",
"ca",
"ce",
"zh",
"cr",
"hr",
"cs",
"da",
"de",
"nl",
"el",
"et",
"eo",
"fi",
"fj",
"fa",
"gl",
"ga",
"ha",
"ht",
"he",
"hu",
"hy",
"id",
"is",
"ja",
"jv",
"ka",
"kk",
"km",
"ko",
"ks",
"ku",
"ky",
"la",
"lb",
"lt",
"lv",
"mk",
"mn",
"ms",
"mi",
"mt",
"ne",
"no",
"or",
"om",
"ps",
"pl",
"pt",
"qu",
"sa",
"sm",
"gd",
"sr",
"sn",
"sd",
"si",
"sk",
"sl",
"so",
"su",
"sw",
"sv",
"tg",
"ta",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-01T10:36:08Z | ---
license: apache-2.0
requirements:
- sentencepiece: >-
(if not installed install using `pip install sentencepiece`, and restart
runtime)
library_name: transformers
pipeline_tag: text-classification
language:
- en
- fr
- hi
- gu
- bn
- ml
- mr
- pa
- it
- es
- kn
- as
- af
- ru
- ro
- sq
- ar
- am
- az
- bs
- bh
- bg
- bo
- ca
- ce
- zh
- cr
- hr
- cs
- da
- de
- nl
- el
- et
- eo
- fi
- fj
- fa
- gl
- ga
- ha
- ht
- he
- hu
- hy
- id
- is
- ja
- jv
- ka
- kk
- km
- ko
- ks
- ku
- ky
- la
- lb
- lt
- lv
- mk
- mn
- ms
- mi
- mt
- ne
- 'no'
- or
- om
- ps
- pl
- pt
- qu
- sa
- sm
- gd
- sr
- sn
- sd
- si
- sk
- sl
- so
- su
- sw
- sv
- tg
- ta
---
## Details
- Model Name: Ethical Eye
- Description: Ethical Eye is an open-source AI model developed by AutopilotAI. It is designed to flag and analyze user-generated content for harmful or unethical behavior, providing a last layer of decision-making to assist AI systems in promoting ethical and moral actions. The model leverages advanced techniques such as text classification, toxicity analysis, and cross-lingual NLP to detect abuse, obscene language, and harmful or unethical comments in multiple languages.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("autopilot-ai/EthicalEye")
model = AutoModelForSequenceClassification.from_pretrained("autopilot-ai/EthicalEye")
```
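A minimal inference sketch building on the snippet above (the label names returned come from the model's own config and are not documented in this card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="autopilot-ai/EthicalEye")
print(classifier(["You are amazing!", "I will make your life miserable."]))
# Each result is a dict such as {'label': ..., 'score': ...}; label names come from the model config.
```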
## Intended Use
- Primary Use Case: The Ethical Eye model is primarily intended to be used as a tool to flag or block users exhibiting harmful or unethical behavior on various platforms. It aims to assist developers, especially those with limited experience in NLP, in enforcing ethical standards and creating a safer environment for users.
- User Expertise: The model is designed to be accessible to developers with various levels of NLP expertise, including those with limited experience in the field.
- Limitations: While Ethical Eye provides valuable insights and analysis, it is essential to note that it should be used as an aid and not as the sole determinant of ethical decision-making. It may have limitations in understanding context-specific nuances and may require continuous improvement and customization for specific domains or languages.
## Model Details
- Architecture: Ethical Eye is built using PyTorch and utilizes the Transformers library. It employs the XLM-Roberta architecture, which enables cross-lingual understanding and transfer learning.
- Developed by: [Khush Patel](https://www.linkedin.com/in/khush-patel-kp/), [Jayveersinh Raj](https://www.linkedin.com/in/jayveersinh-raj-67694222a/)
- License: The Ethical Eye model is released under the Apache 2.0 license, granting users the freedom to use, modify, and distribute the model according to the terms of the license.
## Use Cases
- Content Moderation: Ethical Eye can be integrated into content moderation systems to automatically flag and block user-generated content that contains abusive language, hate speech, or other forms of harmful or unethical behavior. It helps platforms maintain a safe and respectful environment for their users.
- Social Media Platforms: Social media platforms can utilize Ethical Eye to automatically detect and filter out toxic comments, obscenities, and offensive content in multiple languages. This helps to create a more positive and inclusive online community.
- Chatbots and Virtual Assistants: By incorporating Ethical Eye into chatbots and virtual assistants, AI systems can ensure that their responses align with ethical guidelines. It helps prevent AI agents from engaging in inappropriate or offensive conversations with users.
- Online Forums and Discussion Boards: Ethical Eye can be applied to online forums and discussion boards to monitor user interactions and identify potential instances of harassment, bullying, or unethical behavior. This allows moderators to take appropriate actions to maintain a healthy and respectful environment.
- E-commerce Platforms: E-commerce platforms can utilize Ethical Eye to automatically identify and block reviews or comments that contain false information, spam, or unethical practices. This helps maintain the integrity of the platform and ensures honest and reliable user feedback.
- Educational Platforms: Ethical Eye can be used in educational platforms to flag and address instances of cyberbullying, inappropriate language, or offensive content in student discussions and comments. It supports the creation of a safe and respectful learning environment.
- AI Reinforcement Learning: The Ethical Eye model can serve as a critic in reinforcement learning scenarios, providing feedback on the ethical implications of actions taken by AI agents. It assists in developing AI systems that not only optimize for task performance but also align with ethical guidelines and societal norms.
## Considerations for Deployment
- Hardware Requirements: The Ethical Eye model can be deployed on hardware configurations suitable for running deep learning models. Specific requirements may depend on the scale of deployment and the desired performance.
- Dependencies: The model relies on PyTorch, Transformers, and XLM-Roberta libraries. Refer to the model documentation for specific version requirements.
- Integration: Ethical Eye can be integrated into existing AI systems and platforms using the provided APIs and guidelines. Additional customization may be necessary to adapt the model to specific requirements.
- Ethical and Legal Considerations: While Ethical Eye aims to promote ethical behavior, it is important to acknowledge that it may have limitations and biases inherent in its training data. Developers should exercise caution and consider the legal and ethical implications of relying solely on the model's outputs without human oversight. |
carova/ppo-Huggy | carova | 2023-07-11T20:06:31Z | 27 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-07-11T19:17:52Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: carova/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
BlueAvenir/model_it_recruit_V_0_1 | BlueAvenir | 2023-07-11T20:00:17Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-07-11T19:59:50Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 100 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 100,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
BlueAvenir/model_operations_V_0_2 | BlueAvenir | 2023-07-11T19:33:56Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-07-11T19:33:15Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 100 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 100,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
niquito/falcon-7b-instruct-ft-adapters | niquito | 2023-07-11T18:47:44Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-11T18:47:43Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
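A loading sketch consistent with the configuration above. The base model name is an assumption inferred from the adapter repo name; the 4-bit settings mirror the listed values:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "tiiuae/falcon-7b-instruct" is an assumption based on the adapter repo name.
base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base_model, "niquito/falcon-7b-instruct-ft-adapters")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct", trust_remote_code=True)
```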
### Framework versions
- PEFT 0.4.0.dev0
- PEFT 0.4.0.dev0
|
Winmodel/a2c-PandaReachDense-v2 | Winmodel | 2023-07-11T18:38:54Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T18:37:34Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.49 +/- 0.17
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
gerulata/slovakbert | gerulata | 2023-07-11T18:36:33Z | 4,830 | 19 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"fill-mask",
"SlovakBERT",
"sk",
"dataset:wikipedia",
"dataset:opensubtitles",
"dataset:oscar",
"dataset:gerulatawebcrawl",
"dataset:gerulatamonitoring",
"dataset:blbec.online",
"arxiv:2109.15254",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: sk
tags:
- SlovakBERT
license: mit
datasets:
- wikipedia
- opensubtitles
- oscar
- gerulatawebcrawl
- gerulatamonitoring
- blbec.online
---
# SlovakBERT (base-sized model)
SlovakBERT pretrained model on Slovak language using a masked language modeling (MLM) objective. This model is case-sensitive: it makes a difference between slovensko and Slovensko.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
**IMPORTANT**: The model was not trained on the “ and ” (curly quote) characters, so before tokenizing the text it is advised to replace all “ and ” with a plain " (straight double quote).
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='gerulata/slovakbert')
unmasker("Deti sa <mask> na ihrisku.")
[{'sequence': 'Deti sa hrali na ihrisku.',
'score': 0.6355380415916443,
'token': 5949,
'token_str': ' hrali'},
{'sequence': 'Deti sa hrajú na ihrisku.',
'score': 0.14731724560260773,
'token': 9081,
'token_str': ' hrajú'},
{'sequence': 'Deti sa zahrali na ihrisku.',
'score': 0.05016357824206352,
'token': 32553,
'token_str': ' zahrali'},
{'sequence': 'Deti sa stretli na ihrisku.',
'score': 0.041727423667907715,
'token': 5964,
'token_str': ' stretli'},
{'sequence': 'Deti sa učia na ihrisku.',
'score': 0.01886524073779583,
'token': 18099,
'token_str': ' učia'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('gerulata/slovakbert')
model = RobertaModel.from_pretrained('gerulata/slovakbert')
text = "Text ktorý sa má embedovať."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('gerulata/slovakbert')
model = TFRobertaModel.from_pretrained('gerulata/slovakbert')
text = "Text ktorý sa má embedovať."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
Or extract information from the model like this:
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='gerulata/slovakbert')
unmasker("Slovenské národne povstanie sa uskutočnilo v roku <mask>.")
[{'sequence': 'Slovenske narodne povstanie sa uskutočnilo v roku 1944.',
'score': 0.7383289933204651,
'token': 16621,
'token_str': ' 1944'},...]
```
# Training data
The SlovakBERT model was pretrained on these datasets:
- Wikipedia (326MB of text),
- OpenSubtitles (415MB of text),
- Oscar (4.6GB of text),
- Gerulata WebCrawl (12.7GB of text) ,
- Gerulata Monitoring (214 MB of text),
- blbec.online (4.5GB of text)
The text was then processed with the following steps:
- URL and email addresses were replaced with special tokens ("url", "email").
- Elongated punctuation was reduced (e.g. -- to -).
- Markdown syntax was deleted.
- All text content in braces (e.g. {...}) was eliminated to reduce the amount of markup and programming-language text.
We segmented the resulting corpus into sentences and removed duplicates to get 181.6M unique sentences. In total, the final corpus has 19.35GB of text.
# Pretraining
The model was trained in **fairseq** on 4 x Nvidia A100 GPUs for 300K steps with a batch size of 512 and a sequence length of 512. The optimizer used is Adam with a learning rate of 5e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), a weight decay of 0.01, dropout rate 0.1, learning rate warmup for 10k steps and linear decay of the learning rate after. We used 16-bit float precision.
## About us
<a href="https://www.gerulata.com/">
<img width="300px" src="https://www.gerulata.com/assets/images/Logo_Blue.svg">
</a>
Gerulata Technologies is a tech company on a mission to provide tools for fighting disinformation and hostile propaganda.
At Gerulata, we focus on providing state-of-the-art AI-powered tools that empower human analysts and provide them with the ability to make informed decisions.
Our tools allow for the monitoring and analysis of online activity, as well as the detection and tracking of disinformation and hostile propaganda campaigns. With our products, our clients are better equipped to identify and respond to threats in real-time.
### BibTeX entry and citation info
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2109.15254
```
@misc{pikuliak2021slovakbert,
title={SlovakBERT: Slovak Masked Language Model},
author={Matúš Pikuliak and Štefan Grivalský and Martin Konôpka and Miroslav Blšták and Martin Tamajka and Viktor Bachratý and Marián Šimko and Pavol Balážik and Michal Trnka and Filip Uhlárik},
year={2021},
eprint={2109.15254},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
grace-pro/xlmr-base-finetuned-hausa | grace-pro | 2023-07-11T18:35:51Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-07-11T17:10:41Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlmr-base-finetuned-hausa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-base-finetuned-hausa
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1457
- Precision: 0.7098
- Recall: 0.5546
- F1: 0.6227
- Accuracy: 0.9581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1659 | 1.0 | 2624 | 0.1544 | 0.6311 | 0.4655 | 0.5358 | 0.9480 |
| 0.1403 | 2.0 | 5248 | 0.1402 | 0.6728 | 0.5248 | 0.5896 | 0.9534 |
| 0.1145 | 3.0 | 7872 | 0.1429 | 0.7280 | 0.5130 | 0.6018 | 0.9570 |
| 0.1017 | 4.0 | 10496 | 0.1413 | 0.6952 | 0.5543 | 0.6168 | 0.9569 |
| 0.0862 | 5.0 | 13120 | 0.1457 | 0.7098 | 0.5546 | 0.6227 | 0.9581 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
earentilt/taxi-driver | earentilt | 2023-07-11T18:17:52Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T18:17:49Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-driver
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="earentilt/taxi-driver", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Winmodel/a2c-AntBulletEnv-v0 | Winmodel | 2023-07-11T18:15:33Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T17:15:22Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 863.15 +/- 36.34
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
RogerB/afro-xlmr-large-finetuned-kintweets | RogerB | 2023-07-11T18:13:40Z | 98 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-07-11T18:07:30Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-large-finetuned-kintweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-large-finetuned-kintweets
This model is a fine-tuned version of [Davlan/afro-xlmr-large](https://huggingface.co/Davlan/afro-xlmr-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7777
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9995 | 1.0 | 90 | 1.6774 |
| 1.9176 | 2.0 | 180 | 1.6880 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Belphegor/q-FrozenLake-v1-4x4-noSlippery | Belphegor | 2023-07-11T18:08:55Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T18:08:53Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Belphegor/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
markjosims/wav2vec2-large-xls-r-300m-tr-colab | markjosims | 2023-07-11T18:00:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-07-11T00:17:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-tr-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: tr
split: test
args: tr
metrics:
- name: Wer
type: wer
value: 0.37473189663977124
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4346
- Wer: 0.3747
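A minimal transcription sketch for this checkpoint (the audio path is a placeholder; ffmpeg is needed to decode arbitrary audio formats):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="markjosims/wav2vec2-large-xls-r-300m-tr-colab")
# Placeholder path; 16 kHz mono audio is the usual input format for wav2vec2-style models.
print(asr("path/to/turkish_speech.wav"))
```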
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9005 | 4.26 | 400 | 0.6917 | 0.7251 |
| 0.4032 | 8.51 | 800 | 0.4781 | 0.5286 |
| 0.1863 | 12.77 | 1200 | 0.4682 | 0.4690 |
| 0.1323 | 17.02 | 1600 | 0.4664 | 0.4483 |
| 0.1014 | 21.28 | 2000 | 0.4500 | 0.4124 |
| 0.0749 | 25.53 | 2400 | 0.4510 | 0.3909 |
| 0.0568 | 29.79 | 2800 | 0.4346 | 0.3747 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
earentilt/q-FrozenLake-v1-4x4-noSlippery | earentilt | 2023-07-11T17:55:19Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T17:52:35Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="earentilt/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Fixedbot/ppo-LunarLander-v2 | Fixedbot | 2023-07-11T17:54:05Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T17:46:15Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.11 +/- 54.37
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
RogerB/KinyaBERT-large-finetuned-kintweets | RogerB | 2023-07-11T17:53:41Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-07-11T17:52:38Z | ---
tags:
- generated_from_trainer
model-index:
- name: KinyaBERT-large-finetuned-kintweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KinyaBERT-large-finetuned-kintweets
This model is a fine-tuned version of [jean-paul/KinyaBERT-large](https://huggingface.co/jean-paul/KinyaBERT-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3823
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.444 | 1.0 | 90 | 4.2183 |
| 4.1477 | 2.0 | 180 | 4.1509 |
| 4.0191 | 3.0 | 270 | 4.1733 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
swacks/q-FrozenLake-v1-4x4-noSlippery | swacks | 2023-07-11T17:39:19Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T17:39:16Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="swacks/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ashnrk/style_textual_inversion_sat | ashnrk | 2023-07-11T17:34:14Z | 16 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-11T16:27:45Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - ashnrk/style_textual_inversion_sat
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images in the following.
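A minimal inference sketch with 🧨 Diffusers; the placeholder token in the prompt is an assumption, so check this repo's learned embedding for the actual token string:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
# Registers the learned concept from this repo; "<sat-style>" below is an assumed token name
pipe.load_textual_inversion("ashnrk/style_textual_inversion_sat")
image = pipe("an aerial photograph in <sat-style> style").images[0]
image.save("example.png")
```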
|
chandan9t8/dqn-SpaceInvaders | chandan9t8 | 2023-07-11T17:32:33Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T17:31:54Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 701.50 +/- 335.42
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga chandan9t8 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga chandan9t8 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga chandan9t8
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Luke537/videomae-base-finetuned-ucf101-subset | Luke537 | 2023-07-11T17:30:17Z | 59 | 0 | transformers | [
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| video-classification | 2023-07-11T14:12:56Z | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 74
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.0
- Tokenizers 0.13.3
|
MaitreHibou/a2c-AntBulletEnv-v0 | MaitreHibou | 2023-07-11T17:25:59Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T17:24:54Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 732.68 +/- 43.01
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
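A hedged loading sketch (the checkpoint filename is an assumption; creating AntBulletEnv-v0 itself additionally requires PyBullet):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption -- adjust it to the actual .zip stored in this repo
checkpoint = load_from_hub(repo_id="MaitreHibou/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```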
|
gokuls/sa_bert_12_layer_modified_complete_training_72 | gokuls | 2023-07-11T17:19:06Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-07-10T16:40:56Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sa_bert_12_layer_modified_complete_training_72
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_bert_12_layer_modified_complete_training_72
This model is a fine-tuned version of [gokuls/sa_bert_12_layer_modified_complete_training_48](https://huggingface.co/gokuls/sa_bert_12_layer_modified_complete_training_48) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6236
- Accuracy: 0.5322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.0311 | 0.05 | 10000 | 2.8263 | 0.5069 |
| 2.8816 | 0.11 | 20000 | 2.7833 | 0.5126 |
| 2.7734 | 0.16 | 30000 | 2.7565 | 0.5158 |
| 2.7612 | 0.22 | 40000 | 2.7284 | 0.5196 |
| 2.8843 | 0.27 | 50000 | 2.7006 | 0.5229 |
| 2.7809 | 0.33 | 60000 | 2.6765 | 0.5254 |
| 2.6683 | 0.38 | 70000 | 2.6580 | 0.5276 |
| 2.7175 | 0.44 | 80000 | 2.6270 | 0.5316 |
| 2.8903 | 0.49 | 90000 | 2.6236 | 0.5322 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
muhtasham/TajGPT | muhtasham | 2023-07-11T17:13:11Z | 159 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-09T19:00:00Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-tajik
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-tajik
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.1107 | 1.0 | 2405 | 6.9547 |
| 6.7012 | 2.0 | 4810 | 6.6086 |
| 6.5467 | 3.0 | 7215 | 6.5076 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mark-oppenheim/q-FrozenLake-v1-4x4-Slippery | mark-oppenheim | 2023-07-11T16:59:57Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-10T20:08:34Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.72 +/- 0.45
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="mark-oppenheim/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
luzflavio/distilbert-base-uncased-finetuned-cola | luzflavio | 2023-07-11T16:50:38Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-11T16:45:34Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: luzflavio/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# luzflavio/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1975
- Validation Loss: 0.5266
- Train Matthews Correlation: 0.5279
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5218 | 0.4601 | 0.4776 | 0 |
| 0.3330 | 0.4767 | 0.5113 | 1 |
| 0.1975 | 0.5266 | 0.5279 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
d9021001/mms-1b-l1107-nan | d9021001 | 2023-07-11T16:49:35Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-07-11T15:49:29Z | ---
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: mms-1b-l1107-nan
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: nan-tw
split: test
args: nan-tw
metrics:
- name: Wer
type: wer
value: 1.005720823798627
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-l1107-nan
This model was trained from scratch on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5084
- Wer: 1.0057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.5725 | 2.0 | 100 | 1.8002 | 1.0 |
| 1.5002 | 4.0 | 200 | 1.5084 | 1.0057 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SHENMU007/neunit_BASE_V11.2 | SHENMU007 | 2023-07-11T16:45:51Z | 74 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-to-audio | 2023-07-11T14:03:10Z | ---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
vluz/MiniAsirraONNX | vluz | 2023-07-11T16:42:29Z | 0 | 0 | null | [
"onnx",
"license:cc0-1.0",
"region:us"
]
| null | 2023-07-11T16:38:04Z | ---
license: cc0-1.0
---
Very small ONNX model, trained on the Asirra 150 dataset, and intended as an example of Lobe beta.
It classifies input images as "Cat" or "Dog".
Untested; do not use in production. |
MaitreHibou/ppo-Pyramids | MaitreHibou | 2023-07-11T16:28:23Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-07-11T16:28:19Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: MaitreHibou/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Desainakut/YPKuatsi | Desainakut | 2023-07-11T16:09:02Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-07-11T06:33:39Z | ---
license: creativeml-openrail-m
---
|
1aurent/ppo-PyramidsRND | 1aurent | 2023-07-11T16:08:35Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-07-11T16:07:10Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: 1aurent/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
SANTIAGo2005/ppo-Huggy | SANTIAGo2005 | 2023-07-11T16:07:41Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-07-11T16:07:36Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: SANTIAGo2005/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
oliverwang15/FinGPT_ChatGLM2_Sentiment_Instruction_LoRA_FT | oliverwang15 | 2023-07-11T16:06:12Z | 0 | 28 | null | [
"ChatGLM2",
"LoRA",
"en",
"dataset:oliverwang15/fingpt_chatglm2_sentiment_instruction_lora_ft_dataset",
"license:mit",
"region:us"
]
| null | 2023-07-10T18:09:35Z | ---
license: mit
datasets:
- oliverwang15/fingpt_chatglm2_sentiment_instruction_lora_ft_dataset
language:
- en
metrics:
- accuracy
- f1
tags:
- ChatGLM2
- LoRA
---
## [FinGPT_ChatGLM2_Sentiment_Instruction_LoRA_FT (FinGPT v3)](https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/FinGPT-v3) is an LLM finetuned with the LoRA method on news and tweets sentiment analysis data, and it achieves the best scores on most of the financial sentiment analysis datasets.
## Ⅰ. Try our model
``` python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel
# Load Models
base_model = "THUDM/chatglm2-6b"
peft_model = "oliverwang15/FinGPT_ChatGLM2_Sentiment_Instruction_LoRA_FT"
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModel.from_pretrained(base_model, trust_remote_code=True, device_map = "auto")
model = PeftModel.from_pretrained(model, peft_model)
# Make prompts
prompt = [
'''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
Input: FINANCING OF ASPOCOMP 'S GROWTH Aspocomp is aggressively pursuing its growth strategy by increasingly focusing on technologically more demanding HDI printed circuit boards PCBs .
Answer: ''',
'''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
Input: According to Gran , the company has no plans to move all production to Russia , although that is where the company is growing .
Answer: ''',
'''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
Input: A tinyurl link takes users to a scamming site promising that users can earn thousands of dollars by becoming a Google ( NASDAQ : GOOG ) Cash advertiser .
Answer: ''',
]
# Generate results
tokens = tokenizer(prompt, return_tensors='pt', padding=True, max_length=512)
res = model.generate(**tokens, max_length=512)
res_sentences = [tokenizer.decode(i) for i in res]
out_text = [o.split("Answer: ")[1] for o in res_sentences]
# show results
for sentiment in out_text:
print(sentiment)
# Output:
# positive
# neutral
# negative
```
## Ⅱ. Benchmark Results
| ACC/F1 Micro | BloombergGPT | ChatGLM2 | ChatGLM2 (8-bit*) | FinGPT v3 | FinGPT v3 (8-bit*) |
| ---------------------- | ------------ | -------- | ---------------- | --------- | ----------------- |
| FPB [1] | - | 0.464 | 0.476 | **0.8** | 0.784 |
| FiQA-SA [2] | - | 0.822 | **0.833** | 0.815 | 0.818 |
| TFNS [3] | - | 0.331 | 0.332 | **0.738** | 0.721 |
| NWGI [4] | - | 0.560 | 0.561 | **0.588** | **0.588** |
| **Macro F1** | | | | | |
| FPB [1] | - | 0.487 | 0.5 | **0.774** | 0.754 |
| FiQA-SA [2] | - | 0.56 | 0.57 | **0.665** | 0.645 |
| TFNS [3] | - | 0.34 | 0.34 | **0.681** | 0.652 |
| NWGI [4] | - | 0.489 | 0.492 | **0.579** | 0.576 |
| **Weighted F1** | | | | | |
| FPB [1] | 0.511 | 0.381 | 0.398 | **0.795** | 0.778 |
| FiQA-SA [2] | 0.751 | 0.79 | 0.801 | **0.806** | 0.801 |
| TFNS [3] | - | 0.189 | 0.19 | **0.74** | 0.721 |
| NWGI [4] | - | 0.449 | 0.452 | **0.578** | **0.578** |
* '8-bit' doesn't refer to finetuning in 8-bit; it refers to loading the trained model and running inference in 8-bit mode.
[[1] Financial_Phrasebank (FPB)](https://huggingface.co/datasets/financial_phrasebank) is a financial news sentiment analysis benchmark whose labels are "positive", "negative" and "neutral". We use the same split as BloombergGPT. BloombergGPT only uses 5 shots in the test to show its model's outstanding performance without further finetuning. In our task, however, all data in the 'train' split were used for finetuning, so our results are far better than Bloomberg's.
[[2] FiQA SA](https://huggingface.co/datasets/pauri32/fiqa-2018) consists of 17k sentences from microblog headlines and financial news. Its labels were mapped to "positive", "negative" and "neutral" following BloombergGPT's paper. We tried to use the same split as BloombergGPT's paper; however, the label counts do not match exactly when the seed is set to 42.
[[3] Twitter Financial News Sentiment (TFNS)](https://huggingface.co/datasets/zeroshot/twitter-financial-news-sentiment) dataset is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is used to classify finance-related tweets for their sentiment. The dataset holds 11,932 documents annotated with 3 labels: "Bearish" ("negative"), "Bullish" ("positive"), and "Neutral".
[[4] News With GPT Instruction (NWGI)](https://huggingface.co/datasets/oliverwang15/news_with_gpt_instructions) is a dataset whose labels were generated by ChatGPT. The train set has 16.2k samples and the test set has 4.05k samples. The dataset not only contains 7 classification labels ("strong negative", "moderately negative", "mildly negative", "neutral", "mildly positive", "moderately positive", "strong positive"), but also includes the reasons for each label, which might be helpful for instruction finetuning.
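For reference, the "8-bit" columns above correspond to loading the model quantized at inference time. A minimal sketch, assuming `bitsandbytes` is installed; the rest of the inference code from Section Ⅰ is unchanged:
```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_model = "THUDM/chatglm2-6b"
peft_model = "oliverwang15/FinGPT_ChatGLM2_Sentiment_Instruction_LoRA_FT"

tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
# load_in_8bit quantizes the base model at load time (requires bitsandbytes)
model = AutoModel.from_pretrained(base_model, trust_remote_code=True, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(model, peft_model)
```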
## Ⅲ. How to Train
Coming Soon. |
PisoF/ppo-Huggy | PisoF | 2023-07-11T16:05:20Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-07-11T16:05:10Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: PisoF/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HiHowareyou2353/ppo-Huggy | HiHowareyou2353 | 2023-07-11T16:04:11Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-07-11T16:03:01Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: HiHowareyou2353/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
hgc34/ppo-Huggy | hgc34 | 2023-07-11T16:03:04Z | 10 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-07-11T16:02:53Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: hgc34/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
MUNDOJU/ppo-Huggy | MUNDOJU | 2023-07-11T16:02:28Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-07-11T16:02:25Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: MUNDOJU/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Samuel1234/ppo-Huggy | Samuel1234 | 2023-07-11T15:58:02Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-07-11T15:57:59Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Samuel1234/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Aceituna0813/ppo-Huggy | Aceituna0813 | 2023-07-11T15:57:29Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-07-11T15:57:16Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Aceituna0813/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
vnktrmnb/bert-base-multilingual-cased-finetuned-SQuAD2_SM_Te | vnktrmnb | 2023-07-11T15:53:37Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-07-11T14:36:57Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: vnktrmnb/bert-base-multilingual-cased-finetuned-SQuAD2_SM_Te
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vnktrmnb/bert-base-multilingual-cased-finetuned-SQuAD2_SM_Te
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4740
- Train End Logits Accuracy: 0.5953
- Train Start Logits Accuracy: 0.6214
- Validation Loss: 1.4757
- Validation End Logits Accuracy: 0.5947
- Validation Start Logits Accuracy: 0.6291
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3661, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.4740 | 0.5953 | 0.6214 | 1.4757 | 0.5947 | 0.6291 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
kfahn/whisper-tiny-minds14-v1 | kfahn | 2023-07-11T15:49:44Z | 87 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-07-11T13:43:35Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: Whisper-tiny-minds14-v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Poly/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.32605820105820105
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds14-v1
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7543
- Wer Ortho: 0.3473
- Wer: 0.3261
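A minimal transcription sketch with the 🤗 `pipeline` API (the audio file path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="kfahn/whisper-tiny-minds14-v1")
result = asr("audio_sample.wav")  # path to an audio file; ffmpeg is used for decoding
print(result["text"])
```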
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.75e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0084 | 17.86 | 500 | 0.7543 | 0.3473 | 0.3261 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
BlueAvenir/model_operations_V_0_1 | BlueAvenir | 2023-07-11T15:31:41Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-07-11T15:31:13Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 205 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 205,
"warmup_steps": 21,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
grace-pro/afriberta-base-finetuned-igbo | grace-pro | 2023-07-11T15:18:59Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-07-11T14:32:20Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-base-finetuned-igbo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-base-finetuned-igbo
This model is a fine-tuned version of [castorini/afriberta_base](https://huggingface.co/castorini/afriberta_base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2159
- Precision: 0.7242
- Recall: 0.5039
- F1: 0.5943
- Accuracy: 0.9367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1989 | 1.0 | 2514 | 0.2020 | 0.7134 | 0.4098 | 0.5206 | 0.9285 |
| 0.1759 | 2.0 | 5028 | 0.2125 | 0.7383 | 0.4263 | 0.5405 | 0.9315 |
| 0.1417 | 3.0 | 7542 | 0.2044 | 0.7320 | 0.4736 | 0.5751 | 0.9352 |
| 0.1279 | 4.0 | 10056 | 0.2066 | 0.7341 | 0.4884 | 0.5866 | 0.9363 |
| 0.1132 | 5.0 | 12570 | 0.2159 | 0.7242 | 0.5039 | 0.5943 | 0.9367 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
projecte-aina/distilroberta-base-ca-v2 | projecte-aina | 2023-07-11T15:11:08Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"catalan",
"masked-lm",
"distilroberta",
"fill-mask",
"ca",
"arxiv:1910.01108",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-02T11:39:15Z | ---
language: ca
license: apache-2.0
tags:
- catalan
- masked-lm
- distilroberta
widget:
- text: El Català és una llengua molt <mask>.
- text: Salvador Dalí va viure a <mask>.
- text: La Costa Brava té les millors <mask> d'Espanya.
- text: El cacaolat és un batut de <mask>.
- text: <mask> és la capital de la Garrotxa.
- text: Vaig al <mask> a buscar bolets.
- text: Antoni Gaudí vas ser un <mask> molt important per la ciutat.
- text: Catalunya és una referència en <mask> a nivell europeu.
pipeline_tag: fill-mask
---
# DistilRoBERTa-base-ca-v2
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [CLUB benchmark](#club-benchmark)
- [Evaluation results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
This model is a distilled version of [projecte-aina/roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2). It follows the same training procedure as [DistilBERT](https://arxiv.org/abs/1910.01108), using the implementation of Knowledge Distillation from the paper's [official repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation).
The resulting architecture consists of 6 layers, 768 dimensional embeddings and 12 attention heads. This adds up to a total of 82M parameters, which is considerably less than the 125M of standard RoBERTa-base models. This makes the model lighter and faster than the original, at the cost of slightly lower performance.
We encourage users of this model to check out the [projecte-aina/roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model card to learn more details about the teacher model.
## Intended uses and limitations
This model is ready-to-use only for masked language modeling (MLM) to perform the Fill-Mask task. However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification or Named Entity Recognition.
## How to use
Usage example where the model is passed to a fill-mask pipeline to predict the masked word (`<mask>`) from a given text.
```python
from pprint import pprint
from transformers import pipeline
pipe = pipeline("fill-mask", model="projecte-aina/distilroberta-base-ca-v2")
text = "El <mask> és el meu dia preferit de la setmana."
pprint(pipe(text))
```
```
[{'score': 0.2531125545501709,
'sequence': ' El dilluns és el meu dia preferit de la setmana.',
'token': 2885,
'token_str': ' dilluns'},
{'score': 0.13626143336296082,
'sequence': ' El divendres és el meu dia preferit de la setmana.',
'token': 2539,
'token_str': ' divendres'},
{'score': 0.11026635020971298,
'sequence': ' El dijous és el meu dia preferit de la setmana.',
'token': 2868,
'token_str': ' dijous'},
{'score': 0.10040736198425293,
'sequence': ' El dissabte és el meu dia preferit de la setmana.',
'token': 2480,
'token_str': ' dissabte'},
{'score': 0.09762872755527496,
'sequence': ' El diumenge és el meu dia preferit de la setmana.',
'token': 2587,
'token_str': ' diumenge'}]
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
The training corpus consists of several corpora gathered from web crawling and public corpora, as shown in the table below:
| Corpus | Size (GB) |
|--------------------------|------------|
| Catalan Crawling | 13.00 |
| RacoCatalá | 8.10 |
| Catalan Oscar | 4.00 |
| CaWaC | 3.60 |
| Cat. General Crawling | 2.50 |
| Wikipedia | 1.10 |
| DOGC | 0.78 |
| Padicat | 0.63 |
| ACN | 0.42 |
| Nació Digital | 0.42 |
| Cat. Government Crawling | 0.24 |
| Vilaweb | 0.06 |
| Catalan Open Subtitles | 0.02 |
| Tweets | 0.02 |
### Training procedure
This model has been trained using a technique known as Knowledge Distillation, which is used to shrink networks to a reasonable size while minimizing the loss in performance.
It basically consists of distilling a large language model (the teacher) into a more lightweight, energy-efficient, and production-friendly model (the student).
So, in a “teacher-student learning” setup, a relatively small student model is trained to mimic the behavior of a larger teacher model. As a result, the student has lower inference time and the ability to run on commodity hardware.
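As an illustration only (not the actual training script), the core of this setup is a loss that mixes the hard MLM objective with a soft term pulling the student's output distribution toward the teacher's; the temperature and weighting below are assumptions:
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard masked-LM cross-entropy (non-masked positions labeled -100)
    hard = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1), ignore_index=-100
    )
    return alpha * soft + (1 - alpha) * hard
```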
## Evaluation
### CLUB benchmark
This model has been fine-tuned on the downstream tasks of the [Catalan Language Understanding Evaluation benchmark (CLUB)](https://club.aina.bsc.es/), which includes the following datasets:
| Dataset | Task| Total | Train | Dev | Test |
|:----------|:----|:--------|:-------|:------|:------|
| AnCora | NER | 13,581 | 10,628 | 1,427 | 1,526 |
| AnCora | POS | 16,678 | 13,123 | 1,709 | 1,846 |
| STS-ca | STS | 3,073 | 2,073 | 500 | 500 |
| TeCla | TC | 137,775 | 110,203| 13,786| 13,786|
| TE-ca | RTE | 21,163 | 16,930 | 2,116 | 2,117 |
| CatalanQA | QA | 21,427 | 17,135 | 2,157 | 2,135 |
| XQuAD-ca | QA | - | - | - | 1,189 |
### Evaluation results
This is how it compares to its teacher when fine-tuned on the aforementioned downstream tasks:
| Model \ Task |NER (F1)|POS (F1)|STS-ca (Comb.)|TeCla (Acc.)|TEca (Acc.)|CatalanQA (F1/EM)| XQuAD-ca <sup>1</sup> (F1/EM) |
| ------------------------|:-------|:-------|:-------------|:-----------|:----------|:----------------|:------------------------------|
| RoBERTa-base-ca-v2 | **89.29** | **98.96** | **79.07** | **74.26** | **83.14** | **89.50**/**76.63** | **73.64**/**55.42** |
| DistilRoBERTa-base-ca | 87.88 | 98.83 | 77.26 | 73.20 | 76.00 | 84.07/70.77 | 62.93/45.08 |
<sup>1</sup> : Trained on CatalanQA, tested on XQuAD-ca.
## Additional information
### Authors
Language Technologies Unit at Barcelona Supercomputing Center ([[email protected]]([email protected])).
### Contact information
For further information, send an email to [[email protected]]([email protected]).
### Copyright
Copyright by the Language Technologies Unit at Barcelona Supercomputing Center.
### Licensing information
This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Citation information
There is no publication for this specific model, but you can cite the paper where the teacher model was presented:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC) be liable for any results arising from the use made by third parties of these models.
</details> |
banden/Taxi-v1 | banden | 2023-07-11T15:07:26Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T15:07:24Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.65
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # or gymnasium, depending on your local setup

# load_from_hub is the helper defined in the Deep RL course notebook:
# it downloads the pickle from the Hub and deserializes it
model = load_from_hub(repo_id="banden/Taxi-v1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
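To roll out the greedy policy yourself, a short sketch (it assumes the pickled dict stores the table under a `qtable` key, as in the course notebook, and uses the newer Gymnasium-style reset/step signatures; older gym versions return fewer values):
```python
import numpy as np

state, info = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    total_reward += reward
print(f"Episode return: {total_reward}")
```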
|
banden/q-FrozenLake-v1-4x4-noSlippery | banden | 2023-07-11T14:53:26Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T14:47:32Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # or gymnasium, depending on your local setup

# load_from_hub is the helper defined in the Deep RL course notebook:
# it downloads the pickle from the Hub and deserializes it
model = load_from_hub(repo_id="banden/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
biodatlab/SciBERT-Neuro-Contrastive | biodatlab | 2023-07-11T14:50:44Z | 15 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-07-11T14:49:45Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# biodatlab/SciBERT-Neuro-Contrastive
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('biodatlab/SciBERT-Neuro-Contrastive')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('biodatlab/SciBERT-Neuro-Contrastive')
model = AutoModel.from_pretrained('biodatlab/SciBERT-Neuro-Contrastive')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=biodatlab/SciBERT-Neuro-Contrastive)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15616 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 8,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
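Put together, these settings correspond roughly to the sketch below (the base checkpoint name and the triplet texts are placeholders; the actual training data is not included in this card):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder names: substitute the real base checkpoint and the actual triplet data
model = SentenceTransformer("base-checkpoint-placeholder")
train_examples = [
    InputExample(texts=["anchor abstract", "related abstract", "unrelated abstract"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)

train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=8,
    warmup_steps=10000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```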
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
sgugger/bert-finetuned-mrpc | sgugger | 2023-07-11T14:47:28Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8602941176470589
- name: F1
type: f1
value: 0.9032258064516129
- task:
type: natural-language-inference
name: Natural Language Inference
dataset:
name: glue
type: glue
config: mrpc
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.8602941176470589
verified: true
- name: Precision
type: precision
value: 0.8580645161290322
verified: true
- name: Recall
type: recall
value: 0.953405017921147
verified: true
- name: AUC
type: auc
value: 0.9257731099441527
verified: true
- name: F1
type: f1
value: 0.9032258064516129
verified: true
- name: loss
type: loss
value: 0.5150377154350281
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5152
- Accuracy: 0.8603
- F1: 0.9032
- Combined Score: 0.8818
## Model description
More information needed
## Intended uses & limitations
More information needed
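That said, a rough inference sketch (it assumes the checkpoint loads with the standard text-classification pipeline; MRPC is a sentence-pair task, so the two sentences go in as `text` / `text_pair`, and the label names depend on the checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sgugger/bert-finetuned-mrpc")

# MRPC is a sentence-pair task: pass the two sentences as text / text_pair
result = classifier({
    "text": "The company reported strong quarterly earnings.",
    "text_pair": "Quarterly earnings at the firm were strong.",
})
print(result)  # label names depend on the checkpoint's id2label mapping
```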
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| No log | 1.0 | 230 | 0.3668 | 0.8431 | 0.8881 | 0.8656 |
| No log | 2.0 | 460 | 0.3751 | 0.8578 | 0.9017 | 0.8798 |
| 0.4264 | 3.0 | 690 | 0.5152 | 0.8603 | 0.9032 | 0.8818 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.10.3.dev0
- Tokenizers 0.10.3
|
FacebookAI/xlm-mlm-ende-1024 | FacebookAI | 2023-07-11T14:46:38Z | 366 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm",
"fill-mask",
"multilingual",
"en",
"de",
"arxiv:1901.07291",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:04Z | ---
language:
- multilingual
- en
- de
license: cc-by-nc-4.0
---
# xlm-mlm-ende-1024
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-mlm-ende-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-German. This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details.
## Model Description
- **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291)
- **Model type:** Language model
- **Language(s) (NLP):** English-German
- **License:** CC-BY-NC-4.0
- **Related Models:** [xlm-clm-enfr-1024](https://huggingface.co/xlm-clm-enfr-1024), [xlm-clm-ende-1024](https://huggingface.co/xlm-clm-ende-1024), [xlm-mlm-enfr-1024](https://huggingface.co/xlm-mlm-enfr-1024), [xlm-mlm-enro-1024](https://huggingface.co/xlm-mlm-enro-1024)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
The model developers write:
> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam op- timizer (Kingma and Ba, 2014), a linear warm- up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for links, citations, and further details on the training data and training procedure.
The model developers also write that:
> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.
See the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the [WMT'16 English-German](https://huggingface.co/datasets/wmt16) dataset using the [BLEU metric](https://huggingface.co/spaces/evaluate-metric/bleu). See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details on the testing data, factors and metrics.
## Results
For xlm-mlm-ende-1024 results, see Table 1 and Table 2 of the [associated paper](https://arxiv.org/pdf/1901.07291.pdf).
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
The model developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details.
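A minimal masked-language-modeling sketch adapted from those docs (the example sentence is illustrative; the `lang2id` lookup relies on the mapping exposed by the XLM tokenizer in Transformers):
```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-ende-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-mlm-ende-1024")

# Encode an English sentence (batch size 1)
input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])

# One language id per token tells the model which language embedding to use
language_id = tokenizer.lang2id["en"]
langs = torch.tensor([language_id] * input_ids.shape[1]).view(1, -1)

outputs = model(input_ids, langs=langs)
```
|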
lysandre/ctrl-clone-2 | lysandre | 2023-07-11T14:45:34Z | 21 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"ctrl",
"text-generation",
"en",
"arxiv:1909.05858",
"arxiv:1910.09700",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-12T15:57:14Z | ---
language: en
license: bsd-3-clause
pipeline_tag: text-generation
---
# ctrl
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The CTRL model was proposed in [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia etc.). The model developers released a model card for CTRL, available [here](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf).
In their [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf), the developers write:
> The CTRL Language Model analyzed in this card generates text conditioned on control codes that specify domain, style, topics, dates, entities, relationships between entities, plot points, and task-related behavior.
- **Developed by:** See [associated paper](https://arxiv.org/abs/1909.05858) from Salesforce Research
- **Model type:** Transformer-based language model
- **Language(s) (NLP):** Primarily English, some German, Spanish, French
- **License:** [BSD 3-Clause](https://github.com/salesforce/ctrl/blob/master/LICENSE.txt); also see [Code of Conduct](https://github.com/salesforce/ctrl)
- **Related Models:** More information needed
- **Parent Model:** More information needed
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1909.05858)
- [GitHub repo](https://github.com/salesforce/ctrl)
- [Developer Model Card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf)
- [Blog post](https://blog.salesforceairesearch.com/introducing-a-conditional-transformer-language-model-for-controllable-generation/)
# Uses
## Direct Use
The model is a language model. The model can be used for text generation.
## Downstream Use
In their [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf), the developers write that the primary intended users are general audiences and NLP Researchers, and that the primary intended uses are:
> 1. Generating artificial text in collaboration with a human, including but not limited to:
> - Creative writing
> - Automating repetitive writing tasks
> - Formatting specific text types
> - Creating contextualized marketing materials
> 2. Improvement of other NLP applications through fine-tuning (on another task or other data, e.g. fine-tuning CTRL to learn new kinds of language like product descriptions)
> 3. Enhancement in the field of natural language understanding to push towards a better understanding of artificial text generation, including how to detect it and work toward control, understanding, and potentially combating potentially negative consequences of such models.
## Out-of-Scope Use
In their [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf), the developers write:
> - CTRL should not be used for generating artificial text without collaboration with a human.
> - It should not be used to make normative or prescriptive claims.
> - This software should not be used to promote or profit from:
> - violence, hate, and division;
> - environmental destruction;
> - abuse of human rights; or
> - the destruction of people's physical and mental health.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
In their [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf), the developers write:
> We recognize the potential for misuse or abuse, including use by bad actors who could manipulate the system to act maliciously and generate text to influence decision-making in political, economic, and social settings. False attribution could also harm individuals, organizations, or other entities. To address these concerns, the model was evaluated internally as well as externally by third parties, including the Partnership on AI, prior to release.
> To mitigate potential misuse to the extent possible, we stripped out all detectable training data from undesirable sources. We then redteamed the model and found that negative utterances were often placed in contexts that made them identifiable as such. For example, when using the ‘News’ control code, hate speech could be embedded as part of an apology (e.g. “the politician apologized for saying [insert hateful statement]”), implying that this type of speech was negative. By pre-selecting the available control codes (omitting, for example, Instagram and Twitter from the available domains), we are able to limit the potential for misuse.
> In releasing our model, we hope to put it into the hands of researchers and prosocial actors so that they can work to control, understand, and potentially combat the negative consequences of such models. We hope that research into detecting fake news and model-generated content of all kinds will be pushed forward by CTRL. It is our belief that these models should become a common tool so researchers can design methods to guard against malicious use and so the public becomes familiar with their existence and patterns of behavior.
See the [associated paper](https://arxiv.org/pdf/1909.05858.pdf) for further discussions about the ethics of LLMs.
## Recommendations
In their [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf), the developers write:
> - A recommendation to monitor and detect use will be implemented through the development of a model that will identify CTRL-generated text.
> - A second recommendation to further screen the input into and output from the model will be implemented through the addition of a check in the CTRL interface to prohibit the insertion into the model of certain negative inputs, which will help control the output that can be generated.
> - The model is trained on a limited number of languages: primarily English and some German, Spanish, French. A recommendation for a future area of research is to train the model on more languages.
See the [CTRL-detector GitHub repo](https://github.com/salesforce/ctrl-detector) for more on the detector model.
# Training
## Training Data
In their [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf), the developers write:
> This model is trained on 140 GB of text drawn from a variety of domains: Wikipedia (English, German, Spanish, and French), Project Gutenberg, submissions from 45 subreddits, OpenWebText, a large collection of news data, Amazon Reviews, Europarl and UN data from WMT (En-De, En-Es, En-Fr), question-answer pairs (no context documents) from ELI5, and the MRQA shared task, which includes Stanford Question Answering Dataset, NewsQA, TriviaQA, SearchQA, HotpotQA, and Natural Questions. See the paper for the full list of training data.
## Training Procedure
### Preprocessing
In the [associated paper](https://arxiv.org/pdf/1909.05858.pdf) the developers write:
> We learn BPE (Sennrich et al., 2015) codes and tokenize the data using fastBPE4, but we use a large vocabulary of roughly 250K tokens. This includes the sub-word tokens necessary to mitigate problems with rare words, but it also reduces the average number of tokens required to generate long text by including most common words. We use English Wikipedia and a 5% split of our collected OpenWebText data for learning BPE codes. We also introduce an unknown token so that during preprocessing we can filter out sequences that contain more than 2 unknown tokens. This, along with the compressed storage for efficient training (TFRecords) (Abadi et al., 2016), reduces our training data to 140 GB from the total 180 GB collected.
See the paper for links, references, and further details.
### Training
In the [associated paper](https://arxiv.org/pdf/1909.05858.pdf) the developers write:
> CTRL has model dimension d = 1280, inner dimension f = 8192, 48 layers, and 16 heads per layer. Dropout with probability 0.1 follows the residual connections in each layer. Token embeddings were tied with the final output embedding layer (Inan et al., 2016; Press & Wolf, 2016).
See the paper for links, references, and further details.
# Evaluation
## Testing Data, Factors & Metrics
In their [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf), the developers write that model performance measures are:
> Performance evaluated on qualitative judgments by humans as to whether the control codes lead to text generated in the desired domain
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). Details are pulled from the [associated paper](https://arxiv.org/pdf/1909.05858.pdf).
- **Hardware Type:** TPU v3 Pod
- **Hours used:** Approximately 336 hours (2 weeks)
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
In the [associated paper](https://arxiv.org/pdf/1909.05858.pdf) the developers write:
> CTRL was implemented in TensorFlow (Abadi et al., 2016) and trained with a global batch size of 1024 distributed across 256 cores of a Cloud TPU v3 Pod for 800k iterations. Training took approximately 2 weeks using Adagrad (Duchi et al., 2011) with a linear warmup from 0 to 0.05 over 25k steps. The norm of gradients were clipped to 0.25 as in (Merity et al., 2017). Learning rate decay was not necessary due to the monotonic nature of the Adagrad accumulator. We compared to the Adam optimizer (Kingma & Ba, 2014) while training smaller models, but we noticed comparable convergence rates and significant memory savings with Adagrad. We also experimented with explicit memory-saving optimizers including SM3 (Anil et al., 2019), Adafactor (Shazeer & Stern, 2018), and NovoGrad (Ginsburg et al., 2019) with mixed results.
See the paper for links, references, and further details.
# Citation
**BibTeX:**
```bibtex
@article{keskarCTRL2019,
title={{CTRL - A Conditional Transformer Language Model for Controllable Generation}},
author={Keskar, Nitish Shirish and McCann, Bryan and Varshney, Lav and Xiong, Caiming and Socher, Richard},
journal={arXiv preprint arXiv:1909.05858},
year={2019}
}
```
**APA:**
- Keskar, N. S., McCann, B., Varshney, L. R., Xiong, C., & Socher, R. (2019). Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.
# Model Card Authors
This model card was written by the team at Hugging Face, referencing the [model card](https://github.com/salesforce/ctrl/blob/master/ModelCard.pdf) released by the developers.
# How to Get Started with the Model
Use the code below to get started with the model. See the [Hugging Face ctrl docs](https://huggingface.co/docs/transformers/model_doc/ctrl) for more information.
<details>
<summary> Click to expand </summary>
```python
>>> from transformers import CTRLTokenizer, CTRLModel
>>> import torch
>>> tokenizer = CTRLTokenizer.from_pretrained("ctrl")
>>> model = CTRLModel.from_pretrained("ctrl")
>>> # CTRL was trained with control codes as the first token
>>> inputs = tokenizer("Opinion My dog is cute", return_tensors="pt")
>>> assert inputs["input_ids"][0, 0].item() in tokenizer.control_codes.values()
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
```
</details> |
turkish-nlp-suite/tr_vectors_web_lg | turkish-nlp-suite | 2023-07-11T14:42:54Z | 0 | 0 | spacy | [
"spacy",
"floret",
"fasttext",
"feature-extraction",
"token-classification",
"tr",
"arxiv:1910.10683",
"doi:10.57967/hf/0087",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
]
| token-classification | 2022-11-02T17:30:31Z | ---
tags:
- spacy
- floret
- fasttext
- feature-extraction
- token-classification
language:
- tr
license: cc-by-sa-4.0
model-index:
- name: tr_vectors_web_lg
results:
- task:
name: NMT
type: token-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.1112
---
Large sized Turkish Floret word vectors for spaCy.
The vectors are trained on the MC4 corpus using Floret with the following hyperparameters:
```
floret cbow -dim 300 --mode floret --bucket 200000 -minn 4 -maxn 5 -minCount 100
-neg 10 -hashCount 2 -thread 12 -epoch 5
```
Vectors are published in Floret format.
| Feature | Description |
| --- | --- |
| **Name** | `tr_vectors_web_lg` |
| **Version** | `1.0` |
| **Vectors** | 200000 keys (300 dimensions) |
| **Sources** | [MC4](https://arxiv.org/abs/1910.10683) |
| **License** | `cc-by-sa-4.0` |
| **Author** | [Duygu Altinok](https://www.onlyduygu.com/) |
---
If you'd like to use the vectors in your own work, please kindly cite the paper [A Diverse Set of Freely Available Linguistic Resources for Turkish](https://aclanthology.org/2023.acl-long.768/):
```
@inproceedings{altinok-2023-diverse,
title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish",
author = "Altinok, Duygu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.768",
pages = "13739--13750",
abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.",
}
```
|
turkish-nlp-suite/tr_vectors_web_md | turkish-nlp-suite | 2023-07-11T14:42:20Z | 0 | 0 | spacy | [
"spacy",
"floret",
"fasttext",
"feature-extraction",
"token-classification",
"tr",
"arxiv:1910.10683",
"doi:10.57967/hf/0085",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
]
| token-classification | 2022-11-02T17:22:50Z | ---
tags:
- spacy
- floret
- fasttext
- feature-extraction
- token-classification
language:
- tr
license: cc-by-sa-4.0
model-index:
- name: tr_vectors_web_md
results:
- task:
name: NMT
type: token-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.1112
---
Medium sized Turkish Floret word vectors for spaCy.
The vectors are trained on the MC4 corpus using Floret with the following hyperparameters:
```
floret cbow -dim 300 --mode floret --bucket 50000 -minn 4 -maxn 5 -minCount 100
-neg 10 -hashCount 2 -thread 12 -epoch 5
```
Vectors are published in Floret format.
| Feature | Description |
| --- | --- |
| **Name** | `tr_vectors_web_md` |
| **Version** | `1.0` |
| **Vectors** | 50000 keys (300 dimensions) |
| **Sources** | [MC4](https://arxiv.org/abs/1910.10683) |
| **License** | `cc-by-sa-4.0` |
| **Author** | [Duygu Altinok](https://www.onlyduygu.com/) |
---
If you'd like to use the vectors in your own work, please kindly cite the paper [A Diverse Set of Freely Available Linguistic Resources for Turkish](https://aclanthology.org/2023.acl-long.768/):
```
@inproceedings{altinok-2023-diverse,
title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish",
author = "Altinok, Duygu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.768",
pages = "13739--13750",
abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.",
}
```
|
tyavika/lr1e5-layer2-bs16-Distil-CNN256LSTM128NoBi | tyavika | 2023-07-11T14:42:14Z | 76 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-07-11T11:07:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: lr1e5-layer2-bs16-Distil-CNN256LSTM128NoBi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lr1e5-layer2-bs16-Distil-CNN256LSTM128NoBi
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.928 | 1.0 | 3290 | 1.5478 |
| 1.1617 | 2.0 | 6580 | 1.1964 |
| 0.8463 | 3.0 | 9870 | 1.2061 |
| 0.6165 | 4.0 | 13160 | 1.2859 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
CaliPanni/natcopeter | CaliPanni | 2023-07-11T14:38:30Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-07-11T14:02:57Z | NAT CO PETER OFFICIAL MODEL!!!!! (1.0) |
gbellamy/rl_course_vizdoom_health_gathering_supreme | gbellamy | 2023-07-11T14:31:42Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T14:31:32Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.87 +/- 4.95
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r gbellamy/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
edbeeching/atari_2B_atari_surround_1111 | edbeeching | 2023-07-11T14:26:10Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T14:25:47Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_surround
type: atari_surround
metrics:
- type: mean_reward
value: nan +/- nan
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **atari_surround** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r edbeeching/atari_2B_atari_surround_1111
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=atari_surround --train_dir=./train_dir --experiment=atari_2B_atari_surround_1111
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=atari_surround --train_dir=./train_dir --experiment=atari_2B_atari_surround_1111 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
matuane/distilbert-base-uncased-finetuned-cola | matuane | 2023-07-11T14:22:57Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-11T03:58:34Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: matuane/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# matuane/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1968
- Validation Loss: 0.5472
- Train Matthews Correlation: 0.5059
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5136 | 0.4554 | 0.4712 | 0 |
| 0.3229 | 0.4651 | 0.5136 | 1 |
| 0.1968 | 0.5472 | 0.5059 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Jorgeutd/finetunepeftmodel | Jorgeutd | 2023-07-11T14:21:56Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-11T14:20:39Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
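For reference, the quantization setup above can be reconstructed at load time roughly as follows (the base model id is not stated in this card and is a placeholder):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "base-model-id" is a placeholder: the adapter must be loaded on top of the same base it was trained from
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Jorgeutd/finetunepeftmodel")
```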
### Framework versions
- PEFT 0.4.0.dev0
|
ericNguyen0132/roberta-large-Dep | ericNguyen0132 | 2023-07-11T14:20:36Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-02T12:57:45Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-large-Dep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-Dep
This model is a fine-tuned version of [rafalposwiata/deproberta-large-depression](https://huggingface.co/rafalposwiata/deproberta-large-depression) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8107
- Accuracy: 0.8517
- F1: 0.9118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 469 | 0.3701 | 0.87 | 0.9264 |
| 0.4293 | 2.0 | 938 | 0.4385 | 0.865 | 0.9219 |
| 0.3302 | 3.0 | 1407 | 0.5293 | 0.85 | 0.9109 |
| 0.2784 | 4.0 | 1876 | 0.7077 | 0.8517 | 0.9118 |
| 0.1914 | 5.0 | 2345 | 0.8107 | 0.8517 | 0.9118 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vk21/a2c-AntBulletEnv-v0-unit6 | vk21 | 2023-07-11T14:05:57Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-10T23:04:56Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1513.13 +/- 249.07
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; the filename is assumed, not confirmed by this card
checkpoint = load_from_hub(repo_id="vk21/a2c-AntBulletEnv-v0-unit6", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Sympan/Reinforce-Cart | Sympan | 2023-07-11T13:53:32Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T13:53:23Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cart
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 485.30 +/- 44.10
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
h2o-llmstudio/falcon-7b-fix | h2o-llmstudio | 2023-07-11T13:46:34Z | 17 | 1 | transformers | [
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2101.00027",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-07-06T09:48:00Z | ---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---
⚠️ **This is an unofficial fork of the original [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) model.**
The following changes have been made:
- Fixing generation configuration setting
- Model now properly uses the specified ```attention_mask``` when calling ```scaled_dot_product_attention```; this also allows specifying custom attention masks and working with left-padded input (see the sketch below). However, this disables the additional memory and flash optimizations.
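A small sketch of the batched, left-padded generation this enables (illustrative only; it assumes the pad token can be set from the EOS token, which Falcon lacks by default):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h2o-llmstudio/falcon-7b-fix"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = "left"            # left padding for batched generation
tokenizer.pad_token = tokenizer.eos_token  # Falcon has no dedicated pad token

model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto"
)

prompts = ["The capital of France is", "Large language models are"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```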
# 🚀 Falcon-7B
**Falcon-7B is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**
*Paper coming soon* 😊.
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B?
* **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
* **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
⚠️ **This is a raw, pretrained model, which should be further finetuned for most usecases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B.
# Model Card for Falcon-7B
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0.
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific usecases (e.g., summarization, text generation, chatbot, etc.)
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B to consider finetuning it for the specific set of tasks of interest, and for guardrails and appropriate precautions to be taken for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B was trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)).
| **Data source** | **Fraction** | **Tokens** | **Sources** |
|--------------------|--------------|------------|-----------------------------------|
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 79% | 1,185B | massive web crawl |
| Books | 7% | 110B | |
| Conversations | 6% | 85B | Reddit, StackOverflow, HackerNews |
| Code | 3% | 45B | |
| RefinedWeb-French | 3% | 45B | massive web crawl |
| Technical | 2% | 30B | arXiv, PubMed, USPTO, etc. |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
### Training Procedure
Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO.
#### Training Hyperparameters
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Learning rate | 6e-4 | 4B tokens warm-up, cosine decay to 1.2e-5 |
| Weight decay | 1e-1 | |
| Z-loss | 1e-4 | |
| Batch size | 2304 | 30B tokens ramp-up |
#### Speeds, Sizes, Times
Training happened in early March 2023 and took about two weeks.
## Evaluation
*Paper coming soon*.
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
## Technical Specifications
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B was trained using a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.)
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B is made available under the Apache 2.0 license.
## Contact
[email protected] |
AdiOO7/gpt-neox-bank-complaints | AdiOO7 | 2023-07-11T13:41:20Z | 3 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-11T13:41:18Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
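For reference, here is a minimal sketch of how the configuration above maps onto `transformers`' `BitsAndBytesConfig` when reloading the adapter. The base model id below is an assumption inferred from the repo name; substitute the GPT-NeoX checkpoint the adapter was actually trained from.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "EleutherAI/gpt-neox-20b"  # assumption: replace with the actual base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "AdiOO7/gpt-neox-bank-complaints")
```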
### Framework versions
- PEFT 0.4.0.dev0
|
sjdata/speecht5_finetuned_voxpopuli_nl | sjdata | 2023-07-11T13:37:35Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-to-audio | 2023-07-11T11:50:21Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4626
## Model description
More information needed
## Intended uses & limitations
More information needed
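As a rough starting point, here is a minimal text-to-speech inference sketch. It assumes the processor is bundled with this checkpoint and uses a zero vector as a placeholder speaker embedding; a real x-vector embedding will sound considerably better.
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "sjdata/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder speaker embedding
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```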
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5217 | 4.3 | 1000 | 0.4827 |
| 0.4955 | 8.61 | 2000 | 0.4678 |
| 0.4936 | 12.91 | 3000 | 0.4666 |
| 0.4936 | 17.21 | 4000 | 0.4626 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
BlueAvenir/model_growth_restructuring_V_0_1 | BlueAvenir | 2023-07-11T13:20:12Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-07-11T13:19:50Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 258 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 258,
"warmup_steps": 26,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Trong-Nghia/roberta-large-detect-dep | Trong-Nghia | 2023-07-11T13:19:12Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-01T07:55:47Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-large-detect-dep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-detect-dep
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6575
- Accuracy: 0.751
- F1: 0.8184
## Model description
More information needed
## Intended uses & limitations
More information needed
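Usage is not documented; a minimal classification sketch (the example sentence is illustrative, and the returned label names should be inspected rather than assumed) could look like this:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Trong-Nghia/roberta-large-detect-dep")
print(clf("I haven't been able to sleep or enjoy anything for weeks."))
```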
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6351 | 1.0 | 1502 | 0.4975 | 0.783 | 0.8360 |
| 0.6114 | 2.0 | 3004 | 0.5374 | 0.744 | 0.7949 |
| 0.5377 | 3.0 | 4506 | 0.6575 | 0.751 | 0.8184 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-cocnat-mod-datasets3-rarity-all | NasimB | 2023-07-11T13:13:07Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-11T11:20:45Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cocnat-mod-datasets3-rarity-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-cocnat-mod-datasets3-rarity-all
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3779
## Model description
More information needed
## Intended uses & limitations
More information needed
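Usage is not documented; a minimal generation sketch (the prompt and sampling settings are illustrative) could look like this:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-cocnat-mod-datasets3-rarity-all")
print(generator("Once upon a time", max_new_tokens=50, do_sample=True, top_k=50)[0]["generated_text"])
```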
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7201 | 0.3 | 500 | 5.6554 |
| 5.3777 | 0.6 | 1000 | 5.2100 |
| 5.0257 | 0.91 | 1500 | 4.9662 |
| 4.7428 | 1.21 | 2000 | 4.8246 |
| 4.5916 | 1.51 | 2500 | 4.6972 |
| 4.4886 | 1.81 | 3000 | 4.5927 |
| 4.3213 | 2.12 | 3500 | 4.5355 |
| 4.173 | 2.42 | 4000 | 4.4840 |
| 4.1402 | 2.72 | 4500 | 4.4195 |
| 4.0833 | 3.02 | 5000 | 4.3844 |
| 3.8496 | 3.33 | 5500 | 4.3743 |
| 3.8398 | 3.63 | 6000 | 4.3421 |
| 3.8193 | 3.93 | 6500 | 4.3113 |
| 3.6103 | 4.23 | 7000 | 4.3294 |
| 3.5592 | 4.53 | 7500 | 4.3199 |
| 3.5442 | 4.84 | 8000 | 4.3041 |
| 3.4575 | 5.14 | 8500 | 4.3158 |
| 3.3572 | 5.44 | 9000 | 4.3191 |
| 3.3595 | 5.74 | 9500 | 4.3171 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Bhanu9Prakash/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan | Bhanu9Prakash | 2023-07-11T13:05:14Z | 222 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-07-11T12:44:34Z | ---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.92
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3966
- Accuracy: 0.92
## Model description
More information needed
## Intended uses & limitations
More information needed
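Usage is not documented; a minimal genre-classification sketch (the audio path is a placeholder for a local clip) could look like this:
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Bhanu9Prakash/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)
print(classifier("song.wav"))  # path to a local audio clip; returns genre scores
```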
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0687 | 1.0 | 113 | 0.6197 | 0.84 |
| 0.299 | 2.0 | 226 | 0.5065 | 0.86 |
| 0.2634 | 3.0 | 339 | 0.5042 | 0.88 |
| 0.0473 | 4.0 | 452 | 0.5413 | 0.88 |
| 0.0033 | 5.0 | 565 | 0.3706 | 0.91 |
| 0.0003 | 6.0 | 678 | 0.4485 | 0.9 |
| 0.2538 | 7.0 | 791 | 0.4006 | 0.9 |
| 0.0002 | 8.0 | 904 | 0.3985 | 0.9 |
| 0.003 | 9.0 | 1017 | 0.3952 | 0.91 |
| 0.0001 | 10.0 | 1130 | 0.3966 | 0.92 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sumitrsch/multiconer2_muril_large_bn | sumitrsch | 2023-07-11T12:41:30Z | 109 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
 | token-classification | 2023-02-02T13:05:45Z | For prediction on test data, use this notebook: https://colab.research.google.com/drive/1K-ED0yvMsdciNo52rluauQBEAg-DBomC?usp=sharing
In the notebook, update `best_model_path = "sumitrsch/multiconer2_muril_large_bn"`.
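Independently of the notebook, a minimal token-classification sketch could look like this (the input string is a placeholder; the model targets Bengali, MultiCoNER II):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sumitrsch/multiconer2_muril_large_bn",
    aggregation_strategy="simple",
)
text = "Replace this with a Bengali sentence."  # placeholder input
print(ner(text))
```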
If you use this code, please cite the paper "silp_nlp at SemEval-2023 Task 2: Cross-lingual Knowledge Transfer for Mono-lingual Learning":
https://aclanthology.org/2023.semeval-1.164 |
duwuonline/mymodel-generation | duwuonline | 2023-07-11T12:38:20Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-07-11T12:20:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mymodel-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mymodel-generation
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4959
- Rouge1: 15.814
- Rouge2: 6.0889
- Rougel: 13.524
- Rougelsum: 13.6797
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
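Usage is not documented (the ROUGE metrics suggest a summarization-style task, and the expected input format is unknown); a minimal text-to-text sketch with an illustrative input could look like this:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="duwuonline/mymodel-generation")
print(generator("The meeting covered the quarterly results and the hiring plan.", max_new_tokens=40))
```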
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 100 | 0.6815 | 14.8968 | 4.9117 | 12.5655 | 12.7826 | 19.0 |
| No log | 2.0 | 200 | 0.6100 | 14.9404 | 4.9974 | 12.8103 | 13.0953 | 19.0 |
| No log | 3.0 | 300 | 0.5827 | 14.991 | 5.2082 | 12.9564 | 13.1979 | 19.0 |
| No log | 4.0 | 400 | 0.5568 | 14.9205 | 5.1634 | 12.6664 | 12.8388 | 19.0 |
| 0.8938 | 5.0 | 500 | 0.5352 | 15.2597 | 5.6541 | 13.0388 | 13.1956 | 19.0 |
| 0.8938 | 6.0 | 600 | 0.5212 | 15.4645 | 5.7723 | 13.2198 | 13.3698 | 19.0 |
| 0.8938 | 7.0 | 700 | 0.5098 | 15.4663 | 5.8769 | 13.2799 | 13.403 | 19.0 |
| 0.8938 | 8.0 | 800 | 0.5015 | 16.0013 | 6.2874 | 13.7037 | 13.8538 | 19.0 |
| 0.8938 | 9.0 | 900 | 0.4957 | 15.8722 | 6.1918 | 13.6299 | 13.7783 | 19.0 |
| 0.6764 | 10.0 | 1000 | 0.4959 | 15.814 | 6.0889 | 13.524 | 13.6797 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
digiplay/NewMarsMix_R11 | digiplay | 2023-07-11T12:33:05Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2023-07-11T00:59:45Z | ---
license: other
---
Model info:
https://civitai.com/models/19321/newmarsmix

|
Tritanium/VG-loras | Tritanium | 2023-07-11T12:29:50Z | 0 | 0 | null | [
"region:us"
]
 | null | 2023-07-11T12:28:36Z | This is a repo of anime video game character LoRAs. I didn't bother sorting them, to make the git clone easier. |
srirammadduri-ts/roberta-base-squad2-finetuned-roberta | srirammadduri-ts | 2023-07-11T12:26:34Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-07-11T12:06:31Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-squad2-finetuned-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-finetuned-roberta
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0003
## Model description
More information needed
## Intended uses & limitations
More information needed
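Usage is not documented; a minimal extractive question-answering sketch (the question and context are illustrative) could look like this:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="srirammadduri-ts/roberta-base-squad2-finetuned-roberta")
print(qa(question="Who wrote the report?", context="The quarterly report was written by the finance team."))
```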
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 0.0008 |
| No log | 2.0 | 4 | 0.0004 |
| No log | 3.0 | 6 | 0.0003 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nolanspecter/Reinforce-Cart-Pole | nolanspecter | 2023-07-11T12:17:32Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T12:16:48Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cart-Pole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
SachinKaushik/docGPT | SachinKaushik | 2023-07-11T12:14:03Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
 | text-generation | 2023-07-11T11:37:03Z | Instruction model trained on code documentation. |
komo-dono/risataneda | komo-dono | 2023-07-11T12:03:18Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-07-11T12:01:51Z | ---
license: openrail
language:
- ja
tags:
- music
---
Risa Taneda, 600 epochs. |
NYTK/sentence-transformers-experimental-hubert-hungarian | NYTK | 2023-07-11T12:02:08Z | 456 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"sentence-similarity",
"hu",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-07-11T11:27:40Z | ---
license: apache-2.0
language:
- hu
library_name: sentence-transformers
tags:
- sentence-similarity
widget:
- source_sentence: "Szép napunk van."
sentences:
- "Jó az idő."
- "Szép az autó."
- "Elutazok egy napra."
example_title: "Példa"
---
# Hungarian Experimental Sentence-BERT
The pre-trained [huBERT](https://huggingface.co/SZTAKI-HLT/hubert-base-cc) was fine-tuned on the [Hunglish 2.0](http://mokk.bme.hu/resources/hunglishcorpus) parallel corpus to mimic the [bert-base-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/bert-base-nli-stsb-mean-tokens) model provided by UKPLab. Sentence embeddings were obtained by applying mean pooling to the huBERT output. The data was split into training (98%) and validation (2%) sets. By the end of the training, a mean squared error of 0.106 was computed on the validation set. Our code was based on the [Sentence-Transformers](https://www.sbert.net) library. Our model was trained for 2 epochs on a single GTX 1080Ti GPU card with the batch size set to 32. The training took approximately 15 hours.
## Limitations
- max_seq_length = 128
## Usage
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('NYTK/sentence-transformers-experimental-hubert-hungarian')
embeddings = model.encode(sentences)
print(embeddings)
```
## Citation
If you use this model, please cite the following paper:
```
@article {bertopic,
title = {Analyzing Narratives of Patient Experiences: A BERT Topic Modeling Approach},
journal = {Acta Polytechnica Hungarica},
year = {2023},
author = {Osváth, Mátyás and Yang, Zijian Győző and Kósa, Karolina},
pages = {153--171},
volume = {20},
number = {7}
}
``` |
ashnrk/textual_inversion_perm_crop | ashnrk | 2023-07-11T11:57:23Z | 5 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-11T10:54:55Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - ashnrk/textual_inversion_perm_crop
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images below.
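A minimal loading sketch with `diffusers` (the placeholder token in the prompt is an assumption; check the repo files for the actual learned token name):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("ashnrk/textual_inversion_perm_crop")
# "<textual_inversion_perm_crop>" is an assumed token name; use the token stored in the repo.
image = pipe("an aerial photo of <textual_inversion_perm_crop> farmland").images[0]
image.save("example.png")
```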
|
ootes/wwtb | ootes | 2023-07-11T11:52:34Z | 0 | 1 | null | [
"arxiv:2211.09800",
"region:us"
]
| null | 2023-07-04T10:03:04Z | # InstructPix2Pix: Learning to Follow Image Editing Instructions
### [Project Page](https://www.timothybrooks.com/instruct-pix2pix/) | [Paper](https://arxiv.org/abs/2211.09800) | [Data](http://instruct-pix2pix.eecs.berkeley.edu/)
PyTorch implementation of InstructPix2Pix, an instruction-based image editing model, based on the original [CompVis/stable_diffusion](https://github.com/CompVis/stable-diffusion) repo. <br>
[InstructPix2Pix: Learning to Follow Image Editing Instructions](https://www.timothybrooks.com/instruct-pix2pix/)
[Tim Brooks](https://www.timothybrooks.com/)\*,
[Aleksander Holynski](https://holynski.org/)\*,
[Alexei A. Efros](https://people.eecs.berkeley.edu/~efros/) <br>
UC Berkeley <br>
\*denotes equal contribution
<img src='https://instruct-pix2pix.timothybrooks.com/teaser.jpg'/>
## TL;DR: quickstart
Follow the instructions below to download and run InstructPix2Pix on your own images. These instructions have been tested on a GPU with >18GB VRAM. If you don't have a GPU, you may need to change the default configuration, or check out [other ways of using the model](https://github.com/timothybrooks/instruct-pix2pix#other-ways-of-using-instructpix2pix).
### Set up a conda environment, and download a pretrained model:
```
conda env create -f environment.yaml
conda activate ip2p
bash scripts/download_checkpoints.sh
```
### Edit a single image:
```
python edit_cli.py --input imgs/example.jpg --output imgs/output.jpg --edit "turn him into a cyborg"
# Optionally, you can specify parameters to tune your result:
# python edit_cli.py --steps 100 --resolution 512 --seed 1371 --cfg-text 7.5 --cfg-image 1.2 --input imgs/example.jpg --output imgs/output.jpg --edit "turn him into a cyborg"
```
### Or launch your own interactive editing Gradio app:
```
python edit_app.py
```

_(For advice on how to get the best results by tuning parameters, see the [Tips](https://github.com/timothybrooks/instruct-pix2pix#tips) section)._
## Setup
Install all dependencies with:
```
conda env create -f environment.yaml
```
Download the pretrained models by running:
```
bash scripts/download_checkpoints.sh
```
## Generated Dataset
Our image editing model is trained on a generated dataset consisting of 454,445 examples. Each example contains (1) an input image, (2) an editing instruction, and (3) an output edited image. We provide two versions of the dataset, one in which each pair of edited images is generated 100 times, and the best examples are chosen based on CLIP metrics (Section 3.1.2 in the paper) (`clip-filtered-dataset`), and one in which examples are randomly chosen (`random-sample-dataset`).
For the released version of this dataset, we've additionally filtered prompts and images for NSFW content. After NSFW filtering, the GPT-3 generated dataset contains 451,990 examples. The final image-pair datasets contain:
| | # of image editing examples | Dataset size |
|--|-----------------------|----------------------- |
| `random-sample-dataset` |451990|727GB|
| `clip-filtered-dataset` |313010|436GB|
To download one of these datasets, along with the entire NSFW-filtered text data, run the following command with the appropriate dataset name:
```
bash scripts/download_data.sh clip-filtered-dataset
```
## Training InstructPix2Pix
InstructPix2Pix is trained by fine-tuning from an initial StableDiffusion checkpoint. The first step is to download a Stable Diffusion checkpoint. For our trained models, we used the v1.5 checkpoint as the starting point. To download the same ones we used, you can run the following script:
```
bash scripts/download_pretrained_sd.sh
```
If you'd like to use a different checkpoint, point to it in the config file `configs/train.yaml`, on line 8, after `ckpt_path:`.
Next, we need to change the config to point to our downloaded (or generated) dataset. If you're using the `clip-filtered-dataset` from above, you can skip this. Otherwise, you may need to edit lines 85 and 94 of the config (`data.params.train.params.path`, `data.params.validation.params.path`).
Finally, start a training job with the following command:
```
python main.py --name default --base configs/train.yaml --train --gpus 0,1,2,3,4,5,6,7
```
## Creating your own dataset
Our generated dataset of paired images and editing instructions is made in two phases: First, we use GPT-3 to generate text triplets: (a) a caption describing an image, (b) an edit instruction, (c) a caption describing the image after the edit. Then, we turn pairs of captions (before/after the edit) into pairs of images using Stable Diffusion and Prompt-to-Prompt.
### (1) Generate a dataset of captions and instructions
We provide our generated dataset of captions and edit instructions [here](https://instruct-pix2pix.eecs.berkeley.edu/gpt-generated-prompts.jsonl). If you plan to use our captions+instructions, skip to step (2). Otherwise, if you would like to create your own text dataset, please follow steps (1.1-1.3) below. Note that generating very large datasets using GPT-3 can be expensive.
#### (1.1) Manually write a dataset of instructions and captions
The first step of the process is fine-tuning GPT-3. To do this, we made a dataset of 700 examples broadly covering the kinds of edits that we might want our model to be able to perform. Our examples are available [here](https://instruct-pix2pix.eecs.berkeley.edu/human-written-prompts.jsonl). These should be diverse and cover a wide range of possible captions and types of edits. Ideally, they should avoid duplication or significant overlap of captions and instructions. It is also important to be mindful of limitations of Stable Diffusion and Prompt-to-Prompt in writing these examples, such as the inability to perform large spatial transformations (e.g., moving the camera, zooming in, swapping object locations).
Input prompts should closely match the distribution of input prompts used to generate the larger dataset. We sampled the 700 input prompts from the _LAION Improved Aesthetics 6.5+_ dataset and also use this dataset for generating examples. We found this dataset is quite noisy (many of the captions are overly long and contain irrelevant text). For this reason, we also considered MSCOCO and LAION-COCO datasets, but ultimately chose _LAION Improved Aesthetics 6.5+_ due to its diversity of content, proper nouns, and artistic mediums. If you choose to use another dataset or combination of datasets as input to GPT-3 when generating examples, we recommend you sample the input prompts from the same distribution when manually writing training examples.
#### (1.2) Finetune GPT-3
The next step is to finetune a large language model on the manually written instructions/outputs to generate edit instructions and edited caption from a new input caption. For this, we finetune GPT-3's Davinci model via the OpenAI API, although other language models could be used.
To prepare training data for GPT-3, one must first create an OpenAI developer account to access the needed APIs, and [set up the API keys on your local device](https://beta.openai.com/docs/api-reference/introduction). Also, run the `prompts/prepare_for_gpt.py` script, which forms the prompts into the correct format by concatenating instructions and captions and adding delimiters and stop sequences.
```bash
python dataset_creation/prepare_for_gpt.py --input-path data/human-written-prompts.jsonl --output-path data/human-written-prompts-for-gpt.jsonl
```
Next, finetune GPT-3 via the OpenAI CLI. We provide an example below, although please refer to OpenAI's official documentation for this, as best practices may change. We trained the Davinci model for a single epoch. You can experiment with smaller less expensive GPT-3 variants or with open source language models, although this may negatively affect performance.
```bash
openai api fine_tunes.create -t data/human-written-prompts-for-gpt.jsonl -m davinci --n_epochs 1 --suffix "instruct-pix2pix"
```
You can test out the finetuned GPT-3 model by launching the provided Gradio app:
```bash
python prompt_app.py --openai-api-key OPENAI_KEY --openai-model OPENAI_MODEL_NAME
```

#### (1.3) Generate a large dataset of captions and instructions
We now use the finetuned GPT-3 model to generate a large dataset. Our dataset cost thousands of dollars to create. See `prompts/gen_instructions_and_captions.py` for the script which generates these examples. We recommend first generating a small number of examples (by setting a low value of `--num-samples`) and gradually increasing the scale to ensure the results are working as desired before increasing scale.
```bash
python dataset_creation/generate_txt_dataset.py --openai-api-key OPENAI_KEY --openai-model OPENAI_MODEL_NAME
```
If you are generating at a very large scale (e.g., 100K+), it will be notably faster to generate the dataset with multiple processes running in parallel. This can be accomplished by setting `--partitions=N` to a higher number and running multiple processes, setting each `--partition` to the corresponding value.
```bash
python dataset_creation/generate_txt_dataset.py --openai-api-key OPENAI_KEY --openai-model OPENAI_MODEL_NAME --partitions=10 --partition=0
```
### (2) Turn paired captions into paired images
The next step is to turn pairs of text captions into pairs of images. For this, we need to copy some pre-trained Stable Diffusion checkpoints to `stable_diffusion/models/ldm/stable-diffusion-v1/`. You may have already done this if you followed the instructions above for training with our provided data, but if not, you can do this by running:
```bash
bash scripts/download_pretrained_sd.sh
```
For our model, we used [checkpoint v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.ckpt), and the [new autoencoder](https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt), but other models may work as well. If you choose to use other models, make sure to point to the corresponding checkpoints by passing in the `--ckpt` and `--vae-ckpt` arguments. Once all checkpoints have been downloaded, we can generate the dataset with the following command:
```
python dataset_creation/generate_img_dataset.py --out_dir data/instruct-pix2pix-dataset-000 --prompts_file path/to/generated_prompts.jsonl
```
This command operates on a single GPU (typically a V100 or A100). To parallelize over many GPUs/machines, set `--n-partitions` to the total number of parallel jobs and `--partition` to the index of each job.
```
python dataset_creation/generate_img_dataset.py --out_dir data/instruct-pix2pix-dataset-000 --prompts_file path/to/generated_prompts.jsonl --n-partitions 100 --partition 0
```
The default parameters match that of our dataset, although in practice you can use a smaller number of steps (e.g., `--steps=25`) to generate high quality data faster. By default, we generate 100 samples per prompt and use CLIP filtering to keep a max of 4 per prompt. You can experiment with fewer samples by setting `--n-samples`. The command below turns off CLIP filtering entirely and is therefore faster:
```
python dataset_creation/generate_img_dataset.py --out_dir data/instruct-pix2pix-dataset-000 --prompts_file path/to/generated_prompts.jsonl --n-samples 4 --clip-threshold 0 --clip-dir-threshold 0 --clip-img-threshold 0 --n-partitions 100 --partition 0
```
After generating all of the dataset examples, run the command below to create a list of the examples. This is needed for the dataset object to be able to efficiently sample examples without needing to iterate over the entire dataset directory at the start of each training run.
```
python dataset_creation/prepare_dataset.py data/instruct-pix2pix-dataset-000
```
## Evaluation
To generate plots like the ones in Figures 8 and 10 in the paper, run the following command:
```
python metrics/compute_metrics.py --ckpt /path/to/your/model.ckpt
```
## Tips
If you're not getting the quality result you want, there may be a few reasons:
1. **Is the image not changing enough?** Your Image CFG weight may be too high. This value dictates how similar the output should be to the input. It's possible your edit requires larger changes from the original image, and your Image CFG weight isn't allowing that. Alternatively, your Text CFG weight may be too low. This value dictates how much to listen to the text instruction. The default Image CFG of 1.5 and Text CFG of 7.5 are a good starting point, but aren't necessarily optimal for each edit. Try:
* Decreasing the Image CFG weight, or
  * Increasing the Text CFG weight.
2. Conversely, **is the image changing too much**, such that the details in the original image aren't preserved? Try:
* Increasing the Image CFG weight, or
* Decreasing the Text CFG weight
3. Try generating results with different random seeds by setting "Randomize Seed" and running generation multiple times. You can also try setting "Randomize CFG" to sample new Text CFG and Image CFG values each time.
4. Rephrasing the instruction sometimes improves results (e.g., "turn him into a dog" vs. "make him a dog" vs. "as a dog").
5. Increasing the number of steps sometimes improves results.
6. Do faces look weird? The Stable Diffusion autoencoder has a hard time with faces that are small in the image. Try cropping the image so the face takes up a larger portion of the frame.
## Comments
- Our codebase is based on the [Stable Diffusion codebase](https://github.com/CompVis/stable-diffusion).
## BibTeX
```
@article{brooks2022instructpix2pix,
title={InstructPix2Pix: Learning to Follow Image Editing Instructions},
author={Brooks, Tim and Holynski, Aleksander and Efros, Alexei A},
journal={arXiv preprint arXiv:2211.09800},
year={2022}
}
```
## Other ways of using InstructPix2Pix
### InstructPix2Pix on [HuggingFace](https://huggingface.co/spaces/timbrooks/instruct-pix2pix):
> A browser-based version of the demo is available as a [HuggingFace space](https://huggingface.co/spaces/timbrooks/instruct-pix2pix). For this version, you only need a browser, a picture you want to edit, and an instruction! Note that this is a shared online demo, and processing time may be slower during peak utilization.
### InstructPix2Pix on [Replicate](https://replicate.com/timothybrooks/instruct-pix2pix):
> Replicate provides a production-ready cloud API for running the InstructPix2Pix model. You can run the model from any environment using a simple API call with cURL, Python, JavaScript, or your language of choice. Replicate also provides a web interface for running the model and sharing predictions.
### InstructPix2Pix in [Imaginairy](https://github.com/brycedrennan/imaginAIry#-edit-images-with-instructions-alone-by-instructpix2pix):
> Imaginairy offers another way of easily installing InstructPix2Pix with a single command. It can run on devices without GPUs (like a Macbook!).
> ```bash
> pip install imaginairy --upgrade
> aimg edit any-image.jpg --gif "turn him into a cyborg"
> ```
> It also offers an easy way to perform a bunch of edits on an image, and can save edits out to an animated GIF:
> ```
> aimg edit --gif --surprise-me pearl-earring.jpg
> ```
> <img src="https://raw.githubusercontent.com/brycedrennan/imaginAIry/7c05c3aae2740278978c5e84962b826e58201bac/assets/girl_with_a_pearl_earring_suprise.gif" width="512">
### InstructPix2Pix in [🧨 Diffusers](https://github.com/huggingface/diffusers):
> InstructPix2Pix in Diffusers is a bit more optimized, so it may be faster and more suitable for GPUs with less memory. Below are instructions for installing the library and editing an image:
> 1. Install diffusers and relevant dependencies:
>
> ```bash
> pip install transformers accelerate torch
>
> pip install git+https://github.com/huggingface/diffusers.git
> ```
>
> 2. Load the model and edit the image:
>
> ```python
>
> import torch
> from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteScheduler
>
> model_id = "timbrooks/instruct-pix2pix"
> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16, safety_checker=None)
> pipe.to("cuda")
> pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
> # `image` is an RGB PIL.Image
> images = pipe("turn him into cyborg", image=image).images
> images[0]
> ```
>
> For more information, check the docs [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix).
|
vineetsharma/ppo-LunarLander-v2 | vineetsharma | 2023-07-11T11:35:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T11:34:45Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.79 +/- 14.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed to follow the usual <algo>-<env>.zip convention.
checkpoint = load_from_hub("vineetsharma/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jasonyim2/distilbert-base-uncased-finetuned-emotion | jasonyim2 | 2023-07-11T11:22:55Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-26T06:45:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9215386837894378
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2227
- Accuracy: 0.9215
- F1: 0.9215
## Model description
More information needed
## Intended uses & limitations
More information needed
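Usage is not documented; a minimal sketch that returns per-emotion probabilities (the example sentence is illustrative) could look like this:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "jasonyim2/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I'm thrilled with how this turned out!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```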
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8265 | 1.0 | 250 | 0.3204 | 0.9 | 0.8963 |
| 0.2534 | 2.0 | 500 | 0.2227 | 0.9215 | 0.9215 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
antonioalvarado/text_analyzer_base_bert | antonioalvarado | 2023-07-11T11:21:46Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-11T10:55:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: text_analyzer_base_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_analyzer_base_bert
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0472
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3672 | 1.0 | 1728 | 0.1788 | 0.9469 |
| 0.1509 | 2.0 | 3456 | 0.1311 | 0.9769 |
| 0.0071 | 3.0 | 5184 | 0.0494 | 0.9861 |
| 0.0076 | 4.0 | 6912 | 0.0472 | 0.9861 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.12.0+cu102
- Datasets 2.13.1
- Tokenizers 0.13.3
|
1aurent/CartPole-v1 | 1aurent | 2023-07-11T11:15:03Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T10:42:02Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 498.08 +/- 19.05
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ashnrk/textual_inversion_pasture | ashnrk | 2023-07-11T10:54:44Z | 5 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-11T09:52:17Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - ashnrk/textual_inversion_pasture
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images below.
|
nickw9/ppo-LunarLander-v2 | nickw9 | 2023-07-11T10:48:56Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T10:48:37Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.15 +/- 10.89
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed to follow the usual <algo>-<env>.zip convention.
checkpoint = load_from_hub("nickw9/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
digiplay/RealEpicMajicRevolution_v1 | digiplay | 2023-07-11T10:42:18Z | 393 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-11T09:48:27Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/107185/real-epic-majic-revolution
Original author's demo images:


|
F-Haru/paraphrase-mpnet-base-v2_09-04-MarginMSELoss-finetuning-7-5 | F-Haru | 2023-07-11T10:29:25Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-07-11T09:35:14Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
This model was first fine-tuned only on negative ja-en / en-ja pairs whose cosine similarity is at least 0.9 or at most 0.4, and then knowledge-distilled using paraphrase-mpnet-base-v2 as the teacher model.
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1686 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 1000,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
miki-kawa/huggingdatavit-base-beans | miki-kawa | 2023-07-11T10:22:59Z | 193 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-07-11T09:55:51Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: huggingdatavit-base-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9924812030075187
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# huggingdatavit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0356
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
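Usage is not documented; a minimal image-classification sketch (the image path is a placeholder) could look like this:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="miki-kawa/huggingdatavit-base-beans")
print(classifier("leaf.jpg"))  # path or URL to a bean-leaf image
```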
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1059 | 1.54 | 100 | 0.0356 | 0.9925 |
| 0.0256 | 3.08 | 200 | 0.0663 | 0.9774 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.11.0
|
Krish23/Tujgc | Krish23 | 2023-07-11T10:22:51Z | 0 | 0 | null | [
"license:cc-by-nc-sa-2.0",
"region:us"
]
| null | 2023-07-11T10:22:51Z | ---
license: cc-by-nc-sa-2.0
---
|
thomsonreuters/budgetlongformer-diverse | thomsonreuters | 2023-07-11T10:09:14Z | 43 | 10 | transformers | [
"transformers",
"pytorch",
"longformer",
"en",
"dataset:pile-of-law/pile-of-law",
"arxiv:2211.17135",
"arxiv:2207.00220",
"arxiv:1910.09700",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2023-07-10T16:23:59Z | ---
datasets:
- pile-of-law/pile-of-law
language:
- en
library_name: transformers
license: other
---
# Model Card for budgetlongformer-diverse
<!-- Provide a quick summary of what the model is/does. [Optional] -->
A legal pretrained model trained with the Replaced Token Detection (RTD) task on the Pile of Law dataset, with a context window of 4,096 tokens.
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
A legal pretrained model trained with the ELECTRA-style replaced-token-detection objective on the Pile of Law dataset, with a context window of 4,096 tokens.
- **Developed by:** Joel Niklaus, Daniele Giofré
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** other
- **Resources for more information:**
- [Associated Paper](https://arxiv.org/abs/2211.17135)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
The model can be used directly to generate embeddings, for example for similarity search. It likely works best on US-focused legal data.
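As a rough illustration, here is a minimal embedding sketch under the assumption that mean pooling over the last hidden state is an acceptable document representation (the input documents are illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "thomsonreuters/budgetlongformer-diverse"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

docs = [
    "The court granted the defendant's motion to dismiss.",
    "Either party may terminate this agreement upon thirty days' written notice.",
]
inputs = tokenizer(docs, padding=True, truncation=True, max_length=4096, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling over valid tokens
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0).item())
```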
## Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
The model can be fine-tuned for any NLU task or, when coupled with a decoder, for generative tasks. In our experiments on summarization with the BillSum dataset, we found that random initialization of the decoder improved performance.
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
This model will likely perform worse on non-legal text, on text in languages other than English, and on text originating from outside the US.
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Considerations about the training dataset
### Social Impact of Dataset
As described in the dataset card, the internal variation of the data allows contextual privacy rules to be learned. If robust mechanisms for this are developed, they can be applied more broadly.
As discussed in "On the Opportunities and Risks of Foundation Models", legal language models can help improve access to justice in various ways.
But they can also be used in potentially harmful ways. While such models are not ready for most production environments and are the subject of significant research,
we ask that users and creators of this model, particularly when building generative models (e.g. by attaching a decoder), consider its impacts and make a good-faith effort to weigh the benefits against the harms of their method.
Like our license, the training dataset license also restricts commercial usage.
## Discussion of Biases
The data reflects the biases of governments and courts. As discussed in the [Pile of Law](https://arxiv.org/abs/2207.00220) paper, these biases can be significant, though more recent text will likely be less overtly toxic.
Please consult the statement above and keep it in mind when using or modifying this model, so that it is used responsibly.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
As with any large LM there is the risk of it producing biased or unfair output. Researchers using the model should put into place respective safeguards to identify biased and/or toxic language.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The diverse model was trained on caselaw (“Court Listener Opinions” & “Court Listener Docket Entry Documents”), legislation (“US Code”, “State Codes” & “EURLEX”) and contracts (“Atticus Contracts” & “EDGAR Contracts”) from the public Pile-of-Law dataset. To balance the training data, we limited the number of documents to 500K (this affects Court Listener Opinions, Court Listener Docket Entry Documents and EDGAR Contracts).
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
More information needed
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
More information needed
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
We tested the model on the BillSum and PubMed summarization datasets, achieving state-of-the-art (SotA) ROUGE scores for the respective parameter sizes as of August 2022.
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
More information needed
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
We followed standard practice in summarization research and used ROUGE-1, ROUGE-2 and ROUGE-L.
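For reference, a small sketch of how ROUGE-1/2/L can be computed with the `evaluate` library; the texts and resulting scores below are illustrative only.
```python
# Requires: pip install evaluate rouge_score
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the bill amends the tax code to extend the credit"]
references = ["this bill amends the internal revenue code to extend the tax credit"]
scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys include 'rouge1', 'rouge2', 'rougeL', 'rougeLsum'
```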
## Results
More information needed
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 4 x 16GB NVIDIA V100
- **Hours used:** 144
- **Cloud Provider:** AWS
- **Compute Region:** US East
- **Carbon Emitted:** 15.98
## Model Architecture and Objective
We used a Longformer attention window of 256 for both the generator and the discriminator. The generator model was three times smaller than the discriminator model; in particular, we reduced the generator’s depth (number of hidden layers) rather than its width (embedding size, hidden size and intermediate size). We used an MLM probability of 25% for the generator.
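A hypothetical configuration sketch illustrating this setup is shown below; the layer counts and sizes are assumptions for illustration, not the released configuration.
```python
from transformers import LongformerConfig

# Discriminator: full-size Longformer with a 256-token attention window and 4096-token inputs.
discriminator_config = LongformerConfig(
    attention_window=256,
    max_position_embeddings=4098,  # 4096 tokens plus special positions (assumed)
    num_hidden_layers=12,          # assumed
    hidden_size=768,               # assumed
    intermediate_size=3072,        # assumed
)

# Generator: same width, but roughly a third of the depth, as described above.
generator_config = LongformerConfig(
    attention_window=256,
    max_position_embeddings=4098,
    num_hidden_layers=4,           # depth reduced instead of width (assumed value)
    hidden_size=768,
    intermediate_size=3072,
)
```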
## Compute Infrastructure
Amazon SageMaker Notebooks.
### Hardware
4 x 16GB NVIDIA V100
### Software
transformers
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{niklaus2022budgetlongformer,
  title={BudgetLongformer: Can we Cheaply Pretrain a SotA Legal Language Model From Scratch?},
  author={Joel Niklaus and Daniele Giofré},
  year={2022},
  eprint={2211.17135},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
# Model Card Authors
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
Joel Niklaus, Daniele Giofré |
ivivnov/ppo-LunarLander-v2 | ivivnov | 2023-07-11T09:56:04Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T09:55:46Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.61 +/- 15.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch for loading the trained agent from the Hub (the checkpoint filename below is an assumption; adjust it to the file stored in this repository):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repository (filename assumed) and load the agent.
checkpoint = load_from_hub(repo_id="ivivnov/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
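The loaded agent can then be evaluated with Stable-Baselines3's built-in helper. The sketch below assumes a `LunarLander-v2` environment with Box2D installed; on Stable-Baselines3 versions before 2.0, use `gym` instead of `gymnasium`.
```python
import gymnasium as gym  # on SB3 < 2.0, use `import gym` instead
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```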
|