modelId (string, 5–138 chars) | author (string, 2–42 chars) | last_modified (date, 2020-02-15 11:33:14 to 2025-04-15 12:28:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 426 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-04-15 12:27:24) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
DiederikMartens/mBERT_sa_cv_10_fold7 | DiederikMartens | "2024-05-26T07:04:25Z" | 110 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-26T06:38:23Z" | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_10_fold7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_10_fold7
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4939
- F1: 0.6225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
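For readers who want to reproduce this configuration, here is a minimal sketch of how the values above map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder, and anything not listed in the card is left at its library default (the listed Adam betas and epsilon are those defaults).
```python
from transformers import TrainingArguments

# Minimal sketch of the listed hyperparameters as TrainingArguments.
# "output" is a placeholder directory, not from the original card;
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
training_args = TrainingArguments(
    output_dir="output",
    learning_rate=4.47e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```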
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 401 | 0.6054 | 0.4499 |
| 0.5283 | 2.0 | 802 | 0.4316 | 0.5629 |
| 0.3316 | 3.0 | 1203 | 0.4939 | 0.6225 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
MayBashendy/ArabicNewSplits4_FineTuningAraBERT_run3_AugV5_k11_task3_organization | MayBashendy | "2024-12-09T18:56:27Z" | 183 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-09T18:52:53Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits4_FineTuningAraBERT_run3_AugV5_k11_task3_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits4_FineTuningAraBERT_run3_AugV5_k11_task3_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6027
- Qwk: 0.3929
- Mse: 0.6027
- Rmse: 0.7763
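(For reference: Rmse is the square root of Mse, √0.6027 ≈ 0.7763, and Qwk denotes the quadratic weighted kappa between predicted and reference labels.)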
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0444 | 2 | 3.3201 | -0.0078 | 3.3201 | 1.8221 |
| No log | 0.0889 | 4 | 2.1564 | -0.0182 | 2.1564 | 1.4685 |
| No log | 0.1333 | 6 | 1.2681 | 0.0255 | 1.2681 | 1.1261 |
| No log | 0.1778 | 8 | 1.0806 | 0.0418 | 1.0806 | 1.0395 |
| No log | 0.2222 | 10 | 0.6667 | 0.1373 | 0.6667 | 0.8165 |
| No log | 0.2667 | 12 | 0.6557 | 0.0569 | 0.6557 | 0.8098 |
| No log | 0.3111 | 14 | 0.9597 | 0.0222 | 0.9597 | 0.9797 |
| No log | 0.3556 | 16 | 1.1741 | -0.0268 | 1.1741 | 1.0836 |
| No log | 0.4 | 18 | 0.8984 | 0.0 | 0.8984 | 0.9479 |
| No log | 0.4444 | 20 | 0.6897 | 0.2653 | 0.6897 | 0.8305 |
| No log | 0.4889 | 22 | 0.6600 | 0.3185 | 0.6600 | 0.8124 |
| No log | 0.5333 | 24 | 0.6442 | 0.2653 | 0.6442 | 0.8026 |
| No log | 0.5778 | 26 | 0.5881 | 0.2381 | 0.5881 | 0.7669 |
| No log | 0.6222 | 28 | 0.5561 | 0.0 | 0.5561 | 0.7457 |
| No log | 0.6667 | 30 | 0.5814 | 0.0 | 0.5814 | 0.7625 |
| No log | 0.7111 | 32 | 0.5574 | 0.0 | 0.5574 | 0.7466 |
| No log | 0.7556 | 34 | 0.5284 | 0.0 | 0.5284 | 0.7269 |
| No log | 0.8 | 36 | 0.5304 | 0.1220 | 0.5304 | 0.7283 |
| No log | 0.8444 | 38 | 0.5359 | 0.1220 | 0.5359 | 0.7320 |
| No log | 0.8889 | 40 | 0.5469 | 0.1220 | 0.5469 | 0.7396 |
| No log | 0.9333 | 42 | 0.6874 | 0.2381 | 0.6874 | 0.8291 |
| No log | 0.9778 | 44 | 0.9653 | 0.0345 | 0.9653 | 0.9825 |
| No log | 1.0222 | 46 | 0.8084 | 0.0769 | 0.8084 | 0.8991 |
| No log | 1.0667 | 48 | 0.5957 | 0.0 | 0.5957 | 0.7718 |
| No log | 1.1111 | 50 | 0.5821 | 0.0 | 0.5821 | 0.7630 |
| No log | 1.1556 | 52 | 0.5643 | 0.0 | 0.5643 | 0.7512 |
| No log | 1.2 | 54 | 0.5666 | 0.1220 | 0.5666 | 0.7527 |
| No log | 1.2444 | 56 | 0.6357 | 0.2109 | 0.6357 | 0.7973 |
| No log | 1.2889 | 58 | 0.6520 | 0.1795 | 0.6520 | 0.8075 |
| No log | 1.3333 | 60 | 0.6698 | 0.2000 | 0.6698 | 0.8184 |
| No log | 1.3778 | 62 | 0.6362 | -0.0081 | 0.6362 | 0.7976 |
| No log | 1.4222 | 64 | 0.6229 | 0.0 | 0.6229 | 0.7892 |
| No log | 1.4667 | 66 | 0.6367 | 0.0 | 0.6367 | 0.7979 |
| No log | 1.5111 | 68 | 0.6236 | 0.0 | 0.6236 | 0.7897 |
| No log | 1.5556 | 70 | 0.5986 | 0.0 | 0.5986 | 0.7737 |
| No log | 1.6 | 72 | 0.6432 | 0.2485 | 0.6432 | 0.8020 |
| No log | 1.6444 | 74 | 0.6963 | 0.2157 | 0.6963 | 0.8344 |
| No log | 1.6889 | 76 | 0.5737 | 0.2464 | 0.5737 | 0.7574 |
| No log | 1.7333 | 78 | 0.5570 | 0.0 | 0.5570 | 0.7463 |
| No log | 1.7778 | 80 | 0.5651 | 0.0 | 0.5651 | 0.7518 |
| No log | 1.8222 | 82 | 0.5795 | 0.0 | 0.5795 | 0.7613 |
| No log | 1.8667 | 84 | 0.5827 | 0.0 | 0.5827 | 0.7633 |
| No log | 1.9111 | 86 | 0.5771 | -0.0159 | 0.5771 | 0.7597 |
| No log | 1.9556 | 88 | 0.5697 | -0.0159 | 0.5697 | 0.7548 |
| No log | 2.0 | 90 | 0.5483 | 0.3333 | 0.5483 | 0.7404 |
| No log | 2.0444 | 92 | 0.6888 | 0.2410 | 0.6888 | 0.8299 |
| No log | 2.0889 | 94 | 1.6949 | 0.1708 | 1.6949 | 1.3019 |
| No log | 2.1333 | 96 | 1.6558 | 0.1963 | 1.6558 | 1.2868 |
| No log | 2.1778 | 98 | 1.2240 | 0.2516 | 1.2240 | 1.1064 |
| No log | 2.2222 | 100 | 0.6313 | 0.1429 | 0.6313 | 0.7946 |
| No log | 2.2667 | 102 | 0.5937 | 0.1467 | 0.5937 | 0.7705 |
| No log | 2.3111 | 104 | 0.5742 | 0.1206 | 0.5742 | 0.7578 |
| No log | 2.3556 | 106 | 0.5761 | 0.0638 | 0.5761 | 0.7590 |
| No log | 2.4 | 108 | 0.6054 | 0.1282 | 0.6054 | 0.7781 |
| No log | 2.4444 | 110 | 0.6580 | 0.1724 | 0.6580 | 0.8112 |
| No log | 2.4889 | 112 | 0.6733 | 0.25 | 0.6733 | 0.8206 |
| No log | 2.5333 | 114 | 0.5495 | 0.1884 | 0.5495 | 0.7413 |
| No log | 2.5778 | 116 | 0.6285 | 0.0222 | 0.6285 | 0.7928 |
| No log | 2.6222 | 118 | 0.6476 | 0.0222 | 0.6476 | 0.8048 |
| No log | 2.6667 | 120 | 0.5985 | 0.0210 | 0.5985 | 0.7736 |
| No log | 2.7111 | 122 | 0.5902 | 0.0949 | 0.5902 | 0.7682 |
| No log | 2.7556 | 124 | 0.6928 | 0.1648 | 0.6928 | 0.8323 |
| No log | 2.8 | 126 | 0.9395 | 0.1193 | 0.9395 | 0.9693 |
| No log | 2.8444 | 128 | 0.9241 | 0.1174 | 0.9241 | 0.9613 |
| No log | 2.8889 | 130 | 1.1123 | 0.0427 | 1.1123 | 1.0546 |
| No log | 2.9333 | 132 | 0.9993 | 0.0042 | 0.9993 | 0.9996 |
| No log | 2.9778 | 134 | 0.7745 | 0.1238 | 0.7745 | 0.8800 |
| No log | 3.0222 | 136 | 0.6917 | 0.2381 | 0.6917 | 0.8317 |
| No log | 3.0667 | 138 | 0.5782 | 0.1628 | 0.5782 | 0.7604 |
| No log | 3.1111 | 140 | 0.5753 | 0.2366 | 0.5753 | 0.7585 |
| No log | 3.1556 | 142 | 0.5825 | 0.2727 | 0.5825 | 0.7632 |
| No log | 3.2 | 144 | 0.6604 | 0.2090 | 0.6604 | 0.8127 |
| No log | 3.2444 | 146 | 0.8224 | 0.1770 | 0.8224 | 0.9069 |
| No log | 3.2889 | 148 | 0.7121 | 0.3035 | 0.7121 | 0.8439 |
| No log | 3.3333 | 150 | 0.6004 | 0.3535 | 0.6004 | 0.7748 |
| No log | 3.3778 | 152 | 0.6427 | 0.3427 | 0.6427 | 0.8017 |
| No log | 3.4222 | 154 | 0.6224 | 0.3623 | 0.6224 | 0.7889 |
| No log | 3.4667 | 156 | 0.7279 | 0.3833 | 0.7279 | 0.8532 |
| No log | 3.5111 | 158 | 0.7266 | 0.3091 | 0.7266 | 0.8524 |
| No log | 3.5556 | 160 | 0.6224 | 0.2688 | 0.6224 | 0.7889 |
| No log | 3.6 | 162 | 0.6484 | 0.25 | 0.6484 | 0.8052 |
| No log | 3.6444 | 164 | 0.5975 | 0.1823 | 0.5975 | 0.7730 |
| No log | 3.6889 | 166 | 0.5780 | 0.3446 | 0.5780 | 0.7603 |
| No log | 3.7333 | 168 | 0.6624 | 0.2990 | 0.6624 | 0.8139 |
| No log | 3.7778 | 170 | 0.5558 | 0.3548 | 0.5558 | 0.7455 |
| No log | 3.8222 | 172 | 0.5712 | 0.3016 | 0.5712 | 0.7558 |
| No log | 3.8667 | 174 | 0.6471 | 0.3180 | 0.6471 | 0.8044 |
| No log | 3.9111 | 176 | 0.6560 | 0.3153 | 0.6560 | 0.8099 |
| No log | 3.9556 | 178 | 0.6749 | 0.3214 | 0.6749 | 0.8215 |
| No log | 4.0 | 180 | 0.6877 | 0.3128 | 0.6877 | 0.8293 |
| No log | 4.0444 | 182 | 0.6013 | 0.3237 | 0.6013 | 0.7754 |
| No log | 4.0889 | 184 | 0.5923 | 0.3077 | 0.5923 | 0.7696 |
| No log | 4.1333 | 186 | 0.6032 | 0.3208 | 0.6032 | 0.7767 |
| No log | 4.1778 | 188 | 0.6746 | 0.3648 | 0.6746 | 0.8213 |
| No log | 4.2222 | 190 | 0.8272 | 0.3386 | 0.8272 | 0.9095 |
| No log | 4.2667 | 192 | 0.8794 | 0.2253 | 0.8794 | 0.9378 |
| No log | 4.3111 | 194 | 0.6860 | 0.3080 | 0.6860 | 0.8282 |
| No log | 4.3556 | 196 | 0.6884 | 0.3739 | 0.6884 | 0.8297 |
| No log | 4.4 | 198 | 0.6829 | 0.3548 | 0.6829 | 0.8264 |
| No log | 4.4444 | 200 | 0.8576 | 0.2558 | 0.8576 | 0.9261 |
| No log | 4.4889 | 202 | 0.9509 | 0.2481 | 0.9509 | 0.9751 |
| No log | 4.5333 | 204 | 0.7228 | 0.3684 | 0.7228 | 0.8502 |
| No log | 4.5778 | 206 | 0.7320 | 0.3214 | 0.7320 | 0.8556 |
| No log | 4.6222 | 208 | 0.8290 | 0.2374 | 0.8290 | 0.9105 |
| No log | 4.6667 | 210 | 0.7269 | 0.3116 | 0.7269 | 0.8526 |
| No log | 4.7111 | 212 | 0.8206 | 0.2881 | 0.8206 | 0.9059 |
| No log | 4.7556 | 214 | 0.9599 | 0.2180 | 0.9599 | 0.9798 |
| No log | 4.8 | 216 | 0.9889 | 0.1882 | 0.9889 | 0.9945 |
| No log | 4.8444 | 218 | 0.8963 | 0.2308 | 0.8963 | 0.9467 |
| No log | 4.8889 | 220 | 0.6866 | 0.2676 | 0.6866 | 0.8286 |
| No log | 4.9333 | 222 | 0.9032 | 0.2558 | 0.9032 | 0.9504 |
| No log | 4.9778 | 224 | 1.1110 | 0.1742 | 1.1110 | 1.0541 |
| No log | 5.0222 | 226 | 0.9424 | 0.1692 | 0.9424 | 0.9708 |
| No log | 5.0667 | 228 | 0.6378 | 0.3433 | 0.6378 | 0.7986 |
| No log | 5.1111 | 230 | 0.7031 | 0.2920 | 0.7031 | 0.8385 |
| No log | 5.1556 | 232 | 0.9208 | 0.2558 | 0.9208 | 0.9596 |
| No log | 5.2 | 234 | 0.9407 | 0.2558 | 0.9407 | 0.9699 |
| No log | 5.2444 | 236 | 0.7574 | 0.3739 | 0.7574 | 0.8703 |
| No log | 5.2889 | 238 | 0.6693 | 0.3188 | 0.6693 | 0.8181 |
| No log | 5.3333 | 240 | 0.6288 | 0.3365 | 0.6288 | 0.7929 |
| No log | 5.3778 | 242 | 0.6408 | 0.3271 | 0.6408 | 0.8005 |
| No log | 5.4222 | 244 | 0.6498 | 0.2676 | 0.6498 | 0.8061 |
| No log | 5.4667 | 246 | 0.6840 | 0.2793 | 0.6840 | 0.8271 |
| No log | 5.5111 | 248 | 0.7501 | 0.3220 | 0.7501 | 0.8661 |
| No log | 5.5556 | 250 | 0.7037 | 0.2793 | 0.7037 | 0.8389 |
| No log | 5.6 | 252 | 0.6799 | 0.3153 | 0.6799 | 0.8246 |
| No log | 5.6444 | 254 | 0.6919 | 0.3422 | 0.6919 | 0.8318 |
| No log | 5.6889 | 256 | 0.7391 | 0.3171 | 0.7391 | 0.8597 |
| No log | 5.7333 | 258 | 0.8537 | 0.3147 | 0.8537 | 0.9240 |
| No log | 5.7778 | 260 | 0.8377 | 0.3785 | 0.8377 | 0.9153 |
| No log | 5.8222 | 262 | 0.8723 | 0.3103 | 0.8723 | 0.9340 |
| No log | 5.8667 | 264 | 0.7842 | 0.2771 | 0.7842 | 0.8856 |
| No log | 5.9111 | 266 | 0.6816 | 0.2986 | 0.6816 | 0.8256 |
| No log | 5.9556 | 268 | 0.6536 | 0.3427 | 0.6536 | 0.8085 |
| No log | 6.0 | 270 | 0.6851 | 0.4 | 0.6851 | 0.8277 |
| No log | 6.0444 | 272 | 0.6335 | 0.3200 | 0.6335 | 0.7959 |
| No log | 6.0889 | 274 | 0.6691 | 0.3793 | 0.6691 | 0.8180 |
| No log | 6.1333 | 276 | 0.7393 | 0.3414 | 0.7393 | 0.8598 |
| No log | 6.1778 | 278 | 0.6756 | 0.4043 | 0.6756 | 0.8220 |
| No log | 6.2222 | 280 | 0.6341 | 0.3301 | 0.6341 | 0.7963 |
| No log | 6.2667 | 282 | 0.7684 | 0.4062 | 0.7684 | 0.8766 |
| No log | 6.3111 | 284 | 0.7593 | 0.4150 | 0.7593 | 0.8714 |
| No log | 6.3556 | 286 | 0.6309 | 0.3028 | 0.6309 | 0.7943 |
| No log | 6.4 | 288 | 0.6847 | 0.3667 | 0.6847 | 0.8275 |
| No log | 6.4444 | 290 | 0.8873 | 0.2353 | 0.8873 | 0.9420 |
| No log | 6.4889 | 292 | 1.0935 | 0.1892 | 1.0935 | 1.0457 |
| No log | 6.5333 | 294 | 1.0178 | 0.1886 | 1.0178 | 1.0089 |
| No log | 6.5778 | 296 | 0.7707 | 0.3909 | 0.7707 | 0.8779 |
| No log | 6.6222 | 298 | 0.6013 | 0.3684 | 0.6013 | 0.7754 |
| No log | 6.6667 | 300 | 0.6337 | 0.3455 | 0.6337 | 0.7961 |
| No log | 6.7111 | 302 | 0.6239 | 0.3103 | 0.6239 | 0.7899 |
| No log | 6.7556 | 304 | 0.6079 | 0.3575 | 0.6079 | 0.7797 |
| No log | 6.8 | 306 | 0.6619 | 0.3333 | 0.6619 | 0.8136 |
| No log | 6.8444 | 308 | 0.8139 | 0.392 | 0.8139 | 0.9022 |
| No log | 6.8889 | 310 | 0.9559 | 0.2334 | 0.9559 | 0.9777 |
| No log | 6.9333 | 312 | 0.8921 | 0.3156 | 0.8921 | 0.9445 |
| No log | 6.9778 | 314 | 0.7626 | 0.36 | 0.7626 | 0.8732 |
| No log | 7.0222 | 316 | 0.6937 | 0.2593 | 0.6937 | 0.8329 |
| No log | 7.0667 | 318 | 0.7096 | 0.3394 | 0.7096 | 0.8423 |
| No log | 7.1111 | 320 | 0.7189 | 0.3363 | 0.7189 | 0.8479 |
| No log | 7.1556 | 322 | 0.7423 | 0.3668 | 0.7423 | 0.8615 |
| No log | 7.2 | 324 | 0.8266 | 0.3307 | 0.8266 | 0.9092 |
| No log | 7.2444 | 326 | 0.8677 | 0.3716 | 0.8677 | 0.9315 |
| No log | 7.2889 | 328 | 0.8469 | 0.3716 | 0.8469 | 0.9203 |
| No log | 7.3333 | 330 | 0.7603 | 0.3735 | 0.7603 | 0.8720 |
| No log | 7.3778 | 332 | 0.7135 | 0.2667 | 0.7135 | 0.8447 |
| No log | 7.4222 | 334 | 0.7157 | 0.2667 | 0.7157 | 0.8460 |
| No log | 7.4667 | 336 | 0.7646 | 0.3821 | 0.7646 | 0.8744 |
| No log | 7.5111 | 338 | 0.8346 | 0.3569 | 0.8346 | 0.9136 |
| No log | 7.5556 | 340 | 0.7970 | 0.3871 | 0.7970 | 0.8927 |
| No log | 7.6 | 342 | 0.7575 | 0.3613 | 0.7575 | 0.8704 |
| No log | 7.6444 | 344 | 0.7888 | 0.3613 | 0.7888 | 0.8881 |
| No log | 7.6889 | 346 | 0.7303 | 0.3247 | 0.7303 | 0.8546 |
| No log | 7.7333 | 348 | 0.6637 | 0.3043 | 0.6637 | 0.8147 |
| No log | 7.7778 | 350 | 0.6358 | 0.3043 | 0.6358 | 0.7974 |
| No log | 7.8222 | 352 | 0.6388 | 0.3043 | 0.6388 | 0.7992 |
| No log | 7.8667 | 354 | 0.7111 | 0.3333 | 0.7111 | 0.8432 |
| No log | 7.9111 | 356 | 0.8050 | 0.3504 | 0.8050 | 0.8972 |
| No log | 7.9556 | 358 | 0.9536 | 0.2510 | 0.9536 | 0.9765 |
| No log | 8.0 | 360 | 0.9613 | 0.2829 | 0.9613 | 0.9805 |
| No log | 8.0444 | 362 | 0.9166 | 0.3092 | 0.9166 | 0.9574 |
| No log | 8.0889 | 364 | 0.8510 | 0.3360 | 0.8510 | 0.9225 |
| No log | 8.1333 | 366 | 0.7434 | 0.3613 | 0.7434 | 0.8622 |
| No log | 8.1778 | 368 | 0.7212 | 0.3559 | 0.7212 | 0.8492 |
| No log | 8.2222 | 370 | 0.6744 | 0.3362 | 0.6744 | 0.8212 |
| No log | 8.2667 | 372 | 0.6466 | 0.3305 | 0.6466 | 0.8041 |
| No log | 8.3111 | 374 | 0.6599 | 0.3613 | 0.6599 | 0.8123 |
| No log | 8.3556 | 376 | 0.7053 | 0.3251 | 0.7053 | 0.8398 |
| No log | 8.4 | 378 | 0.7657 | 0.3770 | 0.7657 | 0.8751 |
| No log | 8.4444 | 380 | 0.7936 | 0.4016 | 0.7936 | 0.8909 |
| No log | 8.4889 | 382 | 0.8430 | 0.3871 | 0.8430 | 0.9181 |
| No log | 8.5333 | 384 | 0.8397 | 0.3871 | 0.8397 | 0.9163 |
| No log | 8.5778 | 386 | 0.7706 | 0.4104 | 0.7706 | 0.8778 |
| No log | 8.6222 | 388 | 0.7063 | 0.3080 | 0.7063 | 0.8404 |
| No log | 8.6667 | 390 | 0.6784 | 0.2759 | 0.6784 | 0.8236 |
| No log | 8.7111 | 392 | 0.6797 | 0.2759 | 0.6797 | 0.8244 |
| No log | 8.7556 | 394 | 0.6910 | 0.3504 | 0.6910 | 0.8313 |
| No log | 8.8 | 396 | 0.6741 | 0.3188 | 0.6741 | 0.8210 |
| No log | 8.8444 | 398 | 0.6435 | 0.2793 | 0.6435 | 0.8022 |
| No log | 8.8889 | 400 | 0.6098 | 0.3394 | 0.6098 | 0.7809 |
| No log | 8.9333 | 402 | 0.5859 | 0.3365 | 0.5859 | 0.7655 |
| No log | 8.9778 | 404 | 0.5797 | 0.3786 | 0.5797 | 0.7614 |
| No log | 9.0222 | 406 | 0.5770 | 0.3365 | 0.5770 | 0.7596 |
| No log | 9.0667 | 408 | 0.5743 | 0.3462 | 0.5743 | 0.7578 |
| No log | 9.1111 | 410 | 0.5792 | 0.3394 | 0.5792 | 0.7611 |
| No log | 9.1556 | 412 | 0.6050 | 0.3929 | 0.6050 | 0.7778 |
| No log | 9.2 | 414 | 0.6338 | 0.3593 | 0.6338 | 0.7961 |
| No log | 9.2444 | 416 | 0.6668 | 0.3898 | 0.6668 | 0.8166 |
| No log | 9.2889 | 418 | 0.6834 | 0.4191 | 0.6834 | 0.8267 |
| No log | 9.3333 | 420 | 0.6871 | 0.4191 | 0.6871 | 0.8289 |
| No log | 9.3778 | 422 | 0.6875 | 0.4191 | 0.6875 | 0.8291 |
| No log | 9.4222 | 424 | 0.6735 | 0.4191 | 0.6735 | 0.8207 |
| No log | 9.4667 | 426 | 0.6484 | 0.3593 | 0.6484 | 0.8052 |
| No log | 9.5111 | 428 | 0.6208 | 0.3929 | 0.6208 | 0.7879 |
| No log | 9.5556 | 430 | 0.5994 | 0.3929 | 0.5994 | 0.7742 |
| No log | 9.6 | 432 | 0.5959 | 0.3929 | 0.5959 | 0.7719 |
| No log | 9.6444 | 434 | 0.5895 | 0.3929 | 0.5895 | 0.7678 |
| No log | 9.6889 | 436 | 0.5845 | 0.3153 | 0.5845 | 0.7645 |
| No log | 9.7333 | 438 | 0.5845 | 0.3153 | 0.5845 | 0.7646 |
| No log | 9.7778 | 440 | 0.5889 | 0.3929 | 0.5889 | 0.7674 |
| No log | 9.8222 | 442 | 0.5919 | 0.3929 | 0.5919 | 0.7693 |
| No log | 9.8667 | 444 | 0.5944 | 0.3929 | 0.5944 | 0.7709 |
| No log | 9.9111 | 446 | 0.5984 | 0.3929 | 0.5984 | 0.7736 |
| No log | 9.9556 | 448 | 0.6014 | 0.3929 | 0.6014 | 0.7755 |
| No log | 10.0 | 450 | 0.6027 | 0.3929 | 0.6027 | 0.7763 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Skylaude/WizardLM-2-4x7B-MoE-exl2-3_5bpw | Skylaude | "2024-04-18T02:28:29Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"MoE",
"merge",
"mergekit",
"Mistral",
"Microsoft/WizardLM-2-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | "2024-04-17T20:49:22Z" | ---
license: apache-2.0
tags:
- MoE
- merge
- mergekit
- Mistral
- Microsoft/WizardLM-2-7B
---
# WizardLM-2-4x7B-MoE-exl2-3_5bpw
This is a quantized version of [WizardLM-2-4x7B-MoE](https://huggingface.co/Skylaude/WizardLM-2-4x7B-MoE), an experimental MoE model made with [Mergekit](https://github.com/arcee-ai/mergekit). Quantization was done using version 0.0.18 of [ExLlamaV2](https://github.com/turboderp/exllamav2).
Please be sure to set experts per token to 4 for the best results! Context length should be the same as Mistral-7B-Instruct-v0.1 (8k tokens). For instruction templates, Vicuna-v1.1 is recommended.
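As a concrete illustration of the recommended template, here is a minimal sketch of a Vicuna-v1.1 prompt builder; the system preamble is the standard Vicuna one and is an assumption, not something shipped with this repository.
```python
def vicuna_v11_prompt(user_message: str) -> str:
    """Build a single-turn prompt in the Vicuna-v1.1 format."""
    # Standard Vicuna system preamble (an assumption; replace as needed).
    system = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions."
    )
    return f"{system} USER: {user_message} ASSISTANT:"
```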
For more information see the [original repository](https://huggingface.co/Skylaude/WizardLM-2-4x7B-MoE).
|
JeswinMS4/distilbert-finetuned | JeswinMS4 | "2023-09-29T05:12:08Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:JeswinMS4/code_text_classifier",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-27T08:05:04Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-finetuned
results: []
datasets:
- JeswinMS4/code_text_classifier
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a code-text-classifier dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0033
- Accuracy: 1.0
- F1: 1.0
## Model description
A fine-tuned DistilBERT model for intent classification.
## Intended uses & limitations
Classifies a question/user prompt as requiring either code generation or text generation.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.2209 | 1.0 | 52 | 0.0111 | 1.0 | 1.0 |
| 0.0114 | 2.0 | 104 | 0.0041 | 1.0 | 1.0 |
| 0.0048 | 3.0 | 156 | 0.0033 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3 |
AdapterHub/xlm-roberta-large_mlki_ts_pfeiffer | AdapterHub | "2024-05-26T12:23:53Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"xlm-roberta",
"adapterhub:mlki/ts",
"arxiv:2210.13617",
"license:apache-2.0",
"region:us"
] | null | "2024-05-26T12:23:50Z" | ---
tags:
- xlm-roberta
- adapter-transformers
- adapterhub:mlki/ts
license: "apache-2.0"
---
# Adapter `xlm-roberta-large_mlki_ts_pfeiffer` for xlm-roberta-large
**Note: This adapter was not trained by the AdapterHub team, but by the following author: Yifan Hou.
See author details below.**
Knowledge adapter set for multilingual knowledge graph integration. This adapter is for factual triple enhancement (sentence-level). We trained it with triples from T-REx across 84 languages.
**This adapter was created for usage with the [Adapters](https://github.com/Adapter-Hub/adapters) library.**
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel

# Load the base model, then download and activate this adapter.
model = AutoAdapterModel.from_pretrained("xlm-roberta-large")
adapter_name = model.load_adapter("AdapterHub/xlm-roberta-large_mlki_ts_pfeiffer")
model.set_active_adapters(adapter_name)
```
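As a hedged follow-up sketch: since the adapter ships without a prediction head (see below), a forward pass returns base-model hidden states that you can pool for downstream use; the example sentence is arbitrary.
```python
from transformers import AutoTokenizer

# Encode a sentence and run it through the adapter-enhanced model.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
inputs = tokenizer("Paris is the capital of France.", return_tensors="pt")
outputs = model(**inputs)
hidden_states = outputs[0]  # token-level hidden states, shape (1, seq_len, 1024)
```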
## Architecture & Training
- Adapter architecture: pfeiffer
- Prediction head: None
- Dataset: [MLKI_TS](https://github.com/yifan-h/Multilingual_Space)
## Author Information
- Author name(s): Yifan Hou
- Author email: [email protected]
- Author links: [Website](https://yifan-h.github.io/), [GitHub](https://github.com/eth-nlped), [Twitter](https://twitter.com/yyyyyyyyifan)
## Citation
```bibtex
@article{hou2022adapters,
  title={Adapters for Enhanced Modeling of Multilingual Knowledge and Text},
  author={Hou, Yifan and Jiao, Wenxiang and Liu, Meizhen and Allen, Carl and Tu, Zhaopeng and Sachan, Mrinmaya},
  journal={arXiv preprint arXiv:2210.13617},
  year={2022}
}
```
*This adapter has been auto-imported from https://github.com/Adapter-Hub/Hub/blob/master/adapters/mlki/xlm-roberta-large_mlki_ts_pfeiffer.yaml*. |
seongwoon/labor_space_firm | seongwoon | "2023-03-15T08:08:47Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-03-15T05:24:01Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: labor_space_firm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# labor_space_firm
This model is a fine-tuned version of [seongwoon/labor_space_firm_part1](https://huggingface.co/seongwoon/labor_space_firm_part1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
lesso08/48a9d02a-e7c1-483a-8dfa-dfa09284849a | lesso08 | "2025-02-24T02:15:28Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Hermes-3-Llama-3.1-8B",
"base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B",
"region:us"
] | null | "2025-02-24T01:16:15Z" | ---
library_name: peft
base_model: unsloth/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 48a9d02a-e7c1-483a-8dfa-dfa09284849a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: unsloth/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6efa2767c16f7971_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6efa2767c16f7971_train_data.json
type:
field_instruction: sentence1
field_output: sentence2
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso08/48a9d02a-e7c1-483a-8dfa-dfa09284849a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000208
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/6efa2767c16f7971_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 80
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e0fef767-569b-4ecd-964a-5fa510437ddc
wandb_project: 08a
wandb_run: your_name
wandb_runid: e0fef767-569b-4ecd-964a-5fa510437ddc
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 48a9d02a-e7c1-483a-8dfa-dfa09284849a
This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000208
- train_batch_size: 4
- eval_batch_size: 4
- seed: 80
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (ADAMW_BNB, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
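(The total train batch size above follows from the per-device batch size times the gradient-accumulation steps: 4 × 2 = 8.)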
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 4.1358 |
| 0.8608 | 0.0048 | 50 | 0.9246 |
| 0.9602 | 0.0095 | 100 | 0.9201 |
| 0.8835 | 0.0143 | 150 | 0.8982 |
| 0.9705 | 0.0190 | 200 | 0.8628 |
| 0.8843 | 0.0238 | 250 | 0.8680 |
| 0.9284 | 0.0285 | 300 | 0.8304 |
| 0.6333 | 0.0333 | 350 | 0.8029 |
| 0.8128 | 0.0380 | 400 | 0.7913 |
| 0.7367 | 0.0428 | 450 | 0.7829 |
| 0.7954 | 0.0475 | 500 | 0.7815 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
NithinKamavaram/textified-llama-3-1-8b-instruct | NithinKamavaram | "2024-09-09T02:27:19Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-09-09T02:16:30Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** NithinKamavaram
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DGurgurov/maltese-wiki-lang-adapter | DGurgurov | "2024-03-01T17:18:49Z" | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"region:us"
] | null | "2024-03-01T16:10:51Z" | ---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0345
- Accuracy: 0.7947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 50000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.8769 | 0.39 | 500 | 2.3415 | 0.5941 |
| 2.3422 | 0.78 | 1000 | 2.0567 | 0.6324 |
| 2.1192 | 1.17 | 1500 | 1.8960 | 0.6535 |
| 1.9894 | 1.57 | 2000 | 1.7869 | 0.6695 |
| 1.8961 | 1.96 | 2500 | 1.7181 | 0.6796 |
| 1.8258 | 2.35 | 3000 | 1.6541 | 0.6893 |
| 1.7648 | 2.74 | 3500 | 1.5881 | 0.6996 |
| 1.7159 | 3.13 | 4000 | 1.5544 | 0.7065 |
| 1.6763 | 3.52 | 4500 | 1.5229 | 0.7101 |
| 1.6338 | 3.92 | 5000 | 1.4794 | 0.7166 |
| 1.6 | 4.31 | 5500 | 1.4452 | 0.7222 |
| 1.5832 | 4.7 | 6000 | 1.4302 | 0.7250 |
| 1.5532 | 5.09 | 6500 | 1.4013 | 0.7308 |
| 1.5247 | 5.48 | 7000 | 1.3956 | 0.7325 |
| 1.5103 | 5.87 | 7500 | 1.3598 | 0.7367 |
| 1.4866 | 6.26 | 8000 | 1.3331 | 0.7401 |
| 1.468 | 6.66 | 8500 | 1.3266 | 0.7428 |
| 1.4402 | 7.05 | 9000 | 1.3119 | 0.7457 |
| 1.4255 | 7.44 | 9500 | 1.2967 | 0.7481 |
| 1.4236 | 7.83 | 10000 | 1.2779 | 0.7516 |
| 1.41 | 8.22 | 10500 | 1.2598 | 0.7544 |
| 1.3994 | 8.61 | 11000 | 1.2677 | 0.7539 |
| 1.3809 | 9.01 | 11500 | 1.2334 | 0.7579 |
| 1.3689 | 9.4 | 12000 | 1.2468 | 0.7581 |
| 1.3637 | 9.79 | 12500 | 1.2349 | 0.7588 |
| 1.3587 | 10.18 | 13000 | 1.2157 | 0.7625 |
| 1.3397 | 10.57 | 13500 | 1.2055 | 0.7630 |
| 1.3347 | 10.96 | 14000 | 1.1968 | 0.7654 |
| 1.315 | 11.35 | 14500 | 1.1955 | 0.7652 |
| 1.3246 | 11.75 | 15000 | 1.1886 | 0.7674 |
| 1.3078 | 12.14 | 15500 | 1.1942 | 0.7660 |
| 1.2925 | 12.53 | 16000 | 1.1850 | 0.7678 |
| 1.3004 | 12.92 | 16500 | 1.1747 | 0.7692 |
| 1.2911 | 13.31 | 17000 | 1.1591 | 0.7719 |
| 1.2786 | 13.7 | 17500 | 1.1602 | 0.7734 |
| 1.2771 | 14.1 | 18000 | 1.1597 | 0.7717 |
| 1.2774 | 14.49 | 18500 | 1.1547 | 0.7724 |
| 1.2652 | 14.88 | 19000 | 1.1403 | 0.7751 |
| 1.262 | 15.27 | 19500 | 1.1397 | 0.7754 |
| 1.2595 | 15.66 | 20000 | 1.1325 | 0.7778 |
| 1.2544 | 16.05 | 20500 | 1.1385 | 0.7759 |
| 1.2424 | 16.44 | 21000 | 1.1291 | 0.7774 |
| 1.2361 | 16.84 | 21500 | 1.1338 | 0.7782 |
| 1.2325 | 17.23 | 22000 | 1.1081 | 0.7818 |
| 1.236 | 17.62 | 22500 | 1.1161 | 0.7789 |
| 1.2284 | 18.01 | 23000 | 1.1150 | 0.7809 |
| 1.2267 | 18.4 | 23500 | 1.1001 | 0.7831 |
| 1.2151 | 18.79 | 24000 | 1.1054 | 0.7829 |
| 1.2197 | 19.19 | 24500 | 1.1096 | 0.7814 |
| 1.2226 | 19.58 | 25000 | 1.1098 | 0.7815 |
| 1.2101 | 19.97 | 25500 | 1.0962 | 0.7840 |
| 1.2102 | 20.36 | 26000 | 1.0920 | 0.7847 |
| 1.2003 | 20.75 | 26500 | 1.0828 | 0.7863 |
| 1.1912 | 21.14 | 27000 | 1.0886 | 0.7854 |
| 1.1987 | 21.53 | 27500 | 1.0860 | 0.7860 |
| 1.2072 | 21.93 | 28000 | 1.0812 | 0.7859 |
| 1.1894 | 22.32 | 28500 | 1.0816 | 0.7858 |
| 1.2031 | 22.71 | 29000 | 1.0771 | 0.7874 |
| 1.1819 | 23.1 | 29500 | 1.0674 | 0.7881 |
| 1.185 | 23.49 | 30000 | 1.0761 | 0.7879 |
| 1.1873 | 23.88 | 30500 | 1.0697 | 0.7892 |
| 1.1793 | 24.28 | 31000 | 1.0706 | 0.7884 |
| 1.1793 | 24.67 | 31500 | 1.0622 | 0.7899 |
| 1.1748 | 25.06 | 32000 | 1.0630 | 0.7894 |
| 1.1701 | 25.45 | 32500 | 1.0643 | 0.7889 |
| 1.1678 | 25.84 | 33000 | 1.0567 | 0.7906 |
| 1.177 | 26.23 | 33500 | 1.0660 | 0.7886 |
| 1.1749 | 26.62 | 34000 | 1.0652 | 0.7911 |
| 1.1623 | 27.02 | 34500 | 1.0436 | 0.7924 |
| 1.1647 | 27.41 | 35000 | 1.0769 | 0.7873 |
| 1.1692 | 27.8 | 35500 | 1.0474 | 0.7918 |
| 1.1572 | 28.19 | 36000 | 1.0454 | 0.7922 |
| 1.1612 | 28.58 | 36500 | 1.0554 | 0.7916 |
| 1.1626 | 28.97 | 37000 | 1.0492 | 0.7918 |
| 1.1613 | 29.37 | 37500 | 1.0586 | 0.7909 |
| 1.146 | 29.76 | 38000 | 1.0470 | 0.7918 |
| 1.1558 | 30.15 | 38500 | 1.0530 | 0.7921 |
| 1.1553 | 30.54 | 39000 | 1.0474 | 0.7910 |
| 1.1543 | 30.93 | 39500 | 1.0446 | 0.7920 |
| 1.1523 | 31.32 | 40000 | 1.0521 | 0.7916 |
| 1.1529 | 31.71 | 40500 | 1.0489 | 0.7923 |
| 1.1528 | 32.11 | 41000 | 1.0407 | 0.7930 |
| 1.1532 | 32.5 | 41500 | 1.0386 | 0.7943 |
| 1.1415 | 32.89 | 42000 | 1.0489 | 0.7913 |
| 1.1509 | 33.28 | 42500 | 1.0355 | 0.7940 |
| 1.1484 | 33.67 | 43000 | 1.0375 | 0.7931 |
| 1.1434 | 34.06 | 43500 | 1.0431 | 0.7928 |
| 1.1464 | 34.46 | 44000 | 1.0348 | 0.7949 |
| 1.1394 | 34.85 | 44500 | 1.0514 | 0.7927 |
| 1.1418 | 35.24 | 45000 | 1.0429 | 0.7933 |
| 1.1453 | 35.63 | 45500 | 1.0423 | 0.7942 |
| 1.1411 | 36.02 | 46000 | 1.0358 | 0.7949 |
| 1.1434 | 36.41 | 46500 | 1.0308 | 0.7954 |
| 1.1392 | 36.81 | 47000 | 1.0326 | 0.7950 |
| 1.137 | 37.2 | 47500 | 1.0315 | 0.7948 |
| 1.14 | 37.59 | 48000 | 1.0406 | 0.7937 |
| 1.142 | 37.98 | 48500 | 1.0464 | 0.7933 |
| 1.1404 | 38.37 | 49000 | 1.0423 | 0.7933 |
| 1.1412 | 38.76 | 49500 | 1.0363 | 0.7950 |
| 1.143 | 39.15 | 50000 | 1.0355 | 0.7950 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
saransh03sharma/mintrec2-mistral-2-7b-50 | saransh03sharma | "2024-04-22T04:59:28Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-22T04:54:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
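The card leaves this section empty; the following is a hypothetical quick-start sketch assuming the standard `transformers` text-generation API, and nothing in it is documented by the model authors.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage sketch; the model authors document no official usage.
model_id = "saransh03sharma/mintrec2-mistral-2-7b-50"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```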
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/nemo70v2-i1-GGUF | mradermacher | "2025-03-31T08:52:16Z" | 18 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Zaynoid/nemo70v2",
"base_model:quantized:Zaynoid/nemo70v2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-30T10:50:26Z" | ---
base_model: Zaynoid/nemo70v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Zaynoid/nemo70v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/nemo70v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
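For the multi-part Q6_K quant listed below, joining the parts is a plain byte-level concatenation; here is a minimal Python sketch, assuming both parts have already been downloaded to the working directory.
```python
import shutil

# Join the two-part Q6_K download back into a single GGUF file.
# Filenames follow the "Provided Quants" table; adjust paths as needed.
parts = [
    "nemo70v2.i1-Q6_K.gguf.part1of2",
    "nemo70v2.i1-Q6_K.gguf.part2of2",
]
with open("nemo70v2.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```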
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/nemo70v2-i1-GGUF/resolve/main/nemo70v2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated | mlabonne | "2024-09-14T22:17:18Z" | 252,224 | 150 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"abliterated",
"uncensored",
"conversational",
"base_model:meta-llama/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-24T21:42:32Z" | ---
license: llama3.1
library_name: transformers
tags:
- abliterated
- uncensored
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
model-index:
- name: Meta-Llama-3.1-8B-Instruct-abliterated
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 73.29
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 27.13
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 6.42
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 0.89
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.21
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 27.81
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
name: Open LLM Leaderboard
---
# 🦙 Meta-Llama-3.1-8B-Instruct-abliterated

<center>🦙 <a href="https://huggingface.co/mlabonne/Llama-3.1-70B-Instruct-lorablated"><i>Llama 3.1 70B Instruct lorablated</i></a></center>
This is an uncensored version of Llama 3.1 8B Instruct created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about it).
Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
## ⚡️ Quantization
Thanks to ZeroWw and Apel-sin for the quants.
* **New GGUF**: https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF
* **ZeroWw GGUF**: https://huggingface.co/ZeroWw/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF
* **EXL2**: https://huggingface.co/Apel-sin/llama-3.1-8B-abliterated-exl2
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Meta-Llama-3.1-8B-Instruct-abliterated)
| Metric |Value|
|-------------------|----:|
|Avg. |23.13|
|IFEval (0-Shot) |73.29|
|BBH (3-Shot) |27.13|
|MATH Lvl 5 (4-Shot)| 6.42|
|GPQA (0-shot) | 0.89|
|MuSR (0-shot) | 3.21|
|MMLU-PRO (5-shot) |27.81|
|
aroot/eng-guj-simcse_random_usrl | aroot | "2023-07-08T00:29:59Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2023-07-08T00:08:29Z" | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_random_usrl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_random_usrl
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2803
- Bleu: 2.8935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
vijayakumaran92/unmodel | vijayakumaran92 | "2025-04-12T20:35:27Z" | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | "2025-04-12T19:53:31Z" | ---
license: cc-by-nc-sa-4.0
---
|
Edmurk/EngespIA | Edmurk | "2025-03-26T01:09:55Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-24T20:11:11Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
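This card also leaves the section empty; a hypothetical sketch using the high-level `transformers` pipeline API follows, none of it documented by the model author.
```python
from transformers import pipeline

# Hypothetical usage sketch; the model author documents no official usage.
generator = pipeline("text-generation", model="Edmurk/EngespIA")
print(generator("Hello!", max_new_tokens=40)[0]["generated_text"])
```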
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gtxygyzb/code-search-net-tokenizer | gtxygyzb | "2023-08-02T08:27:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-08-02T07:26:56Z" | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
From the Hugging Face NLP course, Chapter 6, Section 2: [Training a new tokenizer from an old one](https://huggingface.co/learn/nlp-course/chapter6/2?fw=pt)
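A minimal sketch of the approach that chapter describes — retraining an existing tokenizer on a new corpus. The base checkpoint, corpus, and vocabulary size below follow the course example and are assumptions about, not a record of, how this tokenizer was actually built:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load the Python subset of CodeSearchNet and batch it for tokenizer training
raw = load_dataset("code_search_net", "python", split="train")

def batch_iterator(batch_size=1000):
    for i in range(0, len(raw), batch_size):
        yield raw[i : i + batch_size]["whole_func_string"]

# Start from an existing fast tokenizer and learn a new vocabulary from the corpus
old_tokenizer = AutoTokenizer.from_pretrained("gpt2")
new_tokenizer = old_tokenizer.train_new_from_iterator(batch_iterator(), vocab_size=52000)
new_tokenizer.save_pretrained("code-search-net-tokenizer")
```
|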
Deisler/Pyramids-v1 | Deisler | "2023-02-03T19:29:58Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-02-03T19:29:22Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: Deisler/Pyramids-v1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
taldarim/setfit-pc-multilabel-example | taldarim | "2023-01-19T17:38:39Z" | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-01-19T17:38:30Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 114 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 114,
"warmup_steps": 12,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
kaliputra/q-Taxi-v3-v1 | kaliputra | "2023-02-11T06:57:18Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-11T06:57:14Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="kaliputra/q-Taxi-v3-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
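Once loaded, the agent can be evaluated with a simple greedy rollout. This sketch assumes the pickled dictionary exposes a `qtable` entry (as in the Deep RL course notebooks) and the older 4-tuple Gym step API:

```python
import numpy as np

state = env.reset()
done = False
total_reward = 0
while not done:
    action = np.argmax(model["qtable"][state])  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```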
|
osman93/q-FrozenLake-v1-4x4-noSlippery | osman93 | "2024-04-02T17:41:17Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-14T12:10:01Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="osman93/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
radxxum/eris | radxxum | "2025-02-10T06:23:09Z" | 78 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-10T06:22:33Z" | ---
base_model: unsloth/qwen2.5-coder-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** radxxum
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-coder-3b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hoang1123/llama3.1-8b-vi-rag | hoang1123 | "2024-07-28T06:27:28Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-28T06:21:49Z" | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** hoang1123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
oliverguhr/wav2vec2-base-german-cv9 | oliverguhr | "2023-03-20T08:58:50Z" | 21 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_9_0",
"generated_from_trainer",
"de",
"dataset:mozilla-foundation/common_voice_9_0",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-07-22T12:44:05Z" | ---
language:
- de
license: mit
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_9_0
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
model-index:
- name: wav2vec2-base-german-cv9
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: common_voice
args: de
metrics:
- name: Test WER
type: wer
value: 10.565782902002716
- name: Test CER
type: cer
value: 2.6226824852959657
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: common_voice
args: de
metrics:
- name: Test WER (+LM)
type: wer
value: 7.996088831362508
- name: Test CER (+LM)
type: cer
value: 2.1515717711623326
---
# wav2vec2-base-german-cv9
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1742
- Wer: 0.1209
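This card does not include a usage snippet; a minimal sketch with the 🤗 `pipeline` API (the audio file path is a placeholder) could look like:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for German speech recognition
asr = pipeline("automatic-speech-recognition", model="oliverguhr/wav2vec2-base-german-cv9")
print(asr("sample_german_audio.wav")["text"])
```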
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 0.6827 | 1.0 | 3557 | 0.6695 | 0.6247 |
| 0.3992 | 2.0 | 7114 | 0.3738 | 0.3936 |
| 0.2611 | 3.0 | 10671 | 0.3011 | 0.3177 |
| 0.2536 | 4.0 | 14228 | 0.2672 | 0.2749 |
| 0.1943 | 5.0 | 17785 | 0.2487 | 0.2480 |
| 0.2004 | 6.0 | 21342 | 0.2246 | 0.2268 |
| 0.1605 | 7.0 | 24899 | 0.2176 | 0.2120 |
| 0.1579 | 8.0 | 28456 | 0.2046 | 0.2024 |
| 0.1668 | 9.0 | 32013 | 0.2027 | 0.1944 |
| 0.1338 | 10.0 | 35570 | 0.1968 | 0.1854 |
| 0.1478 | 11.0 | 39127 | 0.1963 | 0.1823 |
| 0.1177 | 12.0 | 42684 | 0.1956 | 0.1800 |
| 0.1245 | 13.0 | 46241 | 0.1889 | 0.1732 |
| 0.1124 | 14.0 | 49798 | 0.1868 | 0.1714 |
| 0.1112 | 15.0 | 53355 | 0.1805 | 0.1650 |
| 0.1209 | 16.0 | 56912 | 0.1860 | 0.1614 |
| 0.1002 | 17.0 | 60469 | 0.1828 | 0.1604 |
| 0.118 | 18.0 | 64026 | 0.1832 | 0.1580 |
| 0.0974 | 19.0 | 67583 | 0.1771 | 0.1555 |
| 0.1007 | 20.0 | 71140 | 0.1812 | 0.1532 |
| 0.0866 | 21.0 | 74697 | 0.1752 | 0.1504 |
| 0.0901 | 22.0 | 78254 | 0.1690 | 0.1477 |
| 0.0964 | 23.0 | 81811 | 0.1773 | 0.1489 |
| 0.085 | 24.0 | 85368 | 0.1776 | 0.1456 |
| 0.0945 | 25.0 | 88925 | 0.1786 | 0.1428 |
| 0.0804 | 26.0 | 92482 | 0.1737 | 0.1429 |
| 0.0832 | 27.0 | 96039 | 0.1789 | 0.1394 |
| 0.0683 | 28.0 | 99596 | 0.1741 | 0.1390 |
| 0.0761 | 29.0 | 103153 | 0.1688 | 0.1379 |
| 0.0833 | 30.0 | 106710 | 0.1726 | 0.1370 |
| 0.0753 | 31.0 | 110267 | 0.1774 | 0.1353 |
| 0.08 | 32.0 | 113824 | 0.1734 | 0.1344 |
| 0.0644 | 33.0 | 117381 | 0.1737 | 0.1334 |
| 0.0745 | 34.0 | 120938 | 0.1763 | 0.1335 |
| 0.0629 | 35.0 | 124495 | 0.1761 | 0.1311 |
| 0.0654 | 36.0 | 128052 | 0.1718 | 0.1302 |
| 0.0656 | 37.0 | 131609 | 0.1697 | 0.1301 |
| 0.0643 | 38.0 | 135166 | 0.1716 | 0.1279 |
| 0.0683 | 39.0 | 138723 | 0.1777 | 0.1279 |
| 0.0587 | 40.0 | 142280 | 0.1735 | 0.1271 |
| 0.0693 | 41.0 | 145837 | 0.1780 | 0.1260 |
| 0.0532 | 42.0 | 149394 | 0.1724 | 0.1245 |
| 0.0594 | 43.0 | 152951 | 0.1736 | 0.1250 |
| 0.0544 | 44.0 | 156508 | 0.1744 | 0.1238 |
| 0.0559 | 45.0 | 160065 | 0.1770 | 0.1232 |
| 0.0557 | 46.0 | 163622 | 0.1766 | 0.1231 |
| 0.0521 | 47.0 | 167179 | 0.1751 | 0.1220 |
| 0.0591 | 48.0 | 170736 | 0.1724 | 0.1217 |
| 0.0507 | 49.0 | 174293 | 0.1753 | 0.1212 |
| 0.0577 | 50.0 | 177850 | 0.1742 | 0.1209 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
akhadangi/Mistral-7B-v0.1-0.1-Last | akhadangi | "2025-03-12T11:45:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-12T11:34:20Z" | ---
library_name: transformers
license: apache-2.0
base_model:
- mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
---
## Model Details
- **Model Type:** Fine-tuned version of Mistral-7B-v0.1
- **Original Model:** mistralai/Mistral-7B-v0.1
- **Architecture:** Same as original model
- **Language(s):** Same as original model
- **License:** Same as original model
- **Developed by:** [Afshin Khadangi](https://huggingface.co/akhadangi) |
Maksym-Lysyi/Meta-Llama-3.1-8B-Instruct-bnb-4bit-with_merged_adapter_V1HF | Maksym-Lysyi | "2024-07-26T08:17:34Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-26T07:58:36Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Maksym-Lysyi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
facebook/fasttext-ast-vectors | facebook | "2023-06-03T22:09:26Z" | 3 | 0 | fasttext | [
"fasttext",
"feature-extraction",
"ast",
"arxiv:1607.04606",
"arxiv:1802.06893",
"arxiv:1607.01759",
"arxiv:1612.03651",
"license:cc-by-sa-3.0",
"region:us"
] | feature-extraction | "2023-03-18T00:57:51Z" |
---
license: cc-by-sa-3.0
tags:
- feature-extraction
library_name: fasttext
language: ast
widget:
- text: apple
example_title: apple
---
# fastText (Asturian)
fastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. It works on standard, generic hardware. Models can later be reduced in size to even fit on mobile devices. It was introduced in [this paper](https://arxiv.org/abs/1607.04606). The official website can be found [here](https://fasttext.cc/).
## Model description
fastText is a library for efficient learning of word representations and sentence classification. fastText is designed to be simple to use for developers, domain experts, and students. It's dedicated to text classification and learning word representations, and was designed to allow for quick model iteration and refinement without specialized hardware. fastText models can be trained on more than a billion words on any multicore CPU in less than a few minutes.
It includes pre-trained models learned on Wikipedia and in over 157 different languages. fastText can be used as a command line, linked to a C++ application, or used as a library for use cases from experimentation and prototyping to production.
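For context on how such vectors are produced, here is a minimal training sketch with the fastText Python API, mirroring the settings described under Training data below (CBOW, dimension 300, character n-grams of length 5, window 5, 10 negatives); the corpus path is a placeholder:

```python
import fasttext

# Train CBOW word vectors on a plain-text corpus (one document per line)
model = fasttext.train_unsupervised("corpus.txt", model="cbow", dim=300, minn=5, maxn=5, ws=5, neg=10)
model.save_model("vectors.bin")
print(model.get_nearest_neighbors("example", k=3))
```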
## Intended uses & limitations
You can use pre-trained word vectors for text classification or language identification. See the [tutorials](https://fasttext.cc/docs/en/supervised-tutorial.html) and [resources](https://fasttext.cc/docs/en/english-vectors.html) on its official website to look for tasks that interest you.
### How to use
Here is how to load and use the pre-trained vectors:
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-ast-vectors", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.words
['the', 'of', 'and', 'to', 'in', 'a', 'that', 'is', ...]
>>> len(model.words)
145940
>>> model['bread']
array([ 4.89417791e-01, 1.60882145e-01, -2.25947708e-01, -2.94273376e-01,
-1.04577184e-01, 1.17962055e-01, 1.34821936e-01, -2.41778508e-01, ...])
```
Here is how to use this model to query nearest neighbors of an English word vector:
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-en-nearest-neighbors", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.get_nearest_neighbors("bread", k=5)
[(0.5641006231307983, 'butter'),
(0.48875734210014343, 'loaf'),
(0.4491206705570221, 'eat'),
(0.42444291710853577, 'food'),
(0.4229326844215393, 'cheese')]
```
Here is how to use this model to detect the language of a given text:
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-language-identification", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.predict("Hello, world!")
(('__label__eng_Latn',), array([0.81148803]))
>>> model.predict("Hello, world!", k=5)
(('__label__eng_Latn', '__label__vie_Latn', '__label__nld_Latn', '__label__pol_Latn', '__label__deu_Latn'),
array([0.61224753, 0.21323682, 0.09696738, 0.01359863, 0.01319415]))
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions.
Cosine similarity can be used to measure the similarity between two different word vectors. If two vectors are identical, the cosine similarity will be 1. For two completely unrelated vectors, the value will be 0. If two vectors have an opposite relationship, the value will be -1.
```python
>>> import numpy as np
>>> def cosine_similarity(word1, word2):
...     return np.dot(model[word1], model[word2]) / (np.linalg.norm(model[word1]) * np.linalg.norm(model[word2]))
>>> cosine_similarity("man", "boy")
0.061653383
>>> cosine_similarity("man", "ceo")
0.11989131
>>> cosine_similarity("woman", "ceo")
-0.08834904
```
## Training data
Pre-trained word vectors for 157 languages were trained on [Common Crawl](http://commoncrawl.org/) and [Wikipedia](https://www.wikipedia.org/) using fastText. These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. We also distribute three new word analogy datasets, for French, Hindi and Polish.
## Training procedure
### Tokenization
We used the [Stanford word segmenter](https://nlp.stanford.edu/software/segmenter.html) for Chinese, [Mecab](http://taku910.github.io/mecab/) for Japanese and [UETsegmenter](https://github.com/phongnt570/UETsegmenter) for Vietnamese. For languages using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the [Europarl](https://www.statmt.org/europarl/) preprocessing tools. For the remaining languages, we used the ICU tokenizer.
More information about the training of these models can be found in the article [Learning Word Vectors for 157 Languages](https://arxiv.org/abs/1802.06893).
### License
The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/).
### Evaluation datasets
The analogy evaluation datasets described in the paper are available here: [French](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-fr.txt), [Hindi](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-hi.txt), [Polish](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-pl.txt).
### BibTeX entry and citation info
Please cite [1] if using this code for learning word representations or [2] if using for text classification.
[1] P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606)
```markup
@article{bojanowski2016enriching,
title={Enriching Word Vectors with Subword Information},
author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas},
journal={arXiv preprint arXiv:1607.04606},
year={2016}
}
```
[2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759)
```markup
@article{joulin2016bag,
title={Bag of Tricks for Efficient Text Classification},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas},
journal={arXiv preprint arXiv:1607.01759},
year={2016}
}
```
[3] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651)
```markup
@article{joulin2016fasttext,
title={FastText.zip: Compressing text classification models},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{\'e}gou, Herv{\'e} and Mikolov, Tomas},
journal={arXiv preprint arXiv:1612.03651},
year={2016}
}
```
If you use these word vectors, please cite the following paper:
[4] E. Grave\*, P. Bojanowski\*, P. Gupta, A. Joulin, T. Mikolov, [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893)
```markup
@inproceedings{grave2018learning,
title={Learning Word Vectors for 157 Languages},
author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas},
booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
```
(\* These authors contributed equally.)
|
vmpsergio/83926471-09fa-409a-a576-66b975031eb2 | vmpsergio | "2025-01-22T11:48:19Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"region:us"
] | null | "2025-01-22T10:30:58Z" | ---
library_name: peft
license: apache-2.0
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 83926471-09fa-409a-a576-66b975031eb2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: 01-ai/Yi-1.5-9B-Chat-16K
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e4a06f5bea6e06ac_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e4a06f5bea6e06ac_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: vmpsergio/83926471-09fa-409a-a576-66b975031eb2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/e4a06f5bea6e06ac_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fd6f8bf7-7e17-4689-89d7-135a7144deed
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fd6f8bf7-7e17-4689-89d7-135a7144deed
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 83926471-09fa-409a-a576-66b975031eb2
This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5240
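Since this repository stores a LoRA adapter rather than full weights, inference requires attaching the adapter to the base model. A minimal sketch (untested against this exact checkpoint):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the fine-tuned LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained("01-ai/Yi-1.5-9B-Chat-16K")
model = PeftModel.from_pretrained(base, "vmpsergio/83926471-09fa-409a-a576-66b975031eb2")
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-1.5-9B-Chat-16K")
```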
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 0.8673 |
| 0.8606 | 0.0006 | 5 | 0.7843 |
| 0.6868 | 0.0011 | 10 | 0.6447 |
| 0.5594 | 0.0017 | 15 | 0.5622 |
| 0.6178 | 0.0022 | 20 | 0.5357 |
| 0.5109 | 0.0028 | 25 | 0.5263 |
| 0.5161 | 0.0034 | 30 | 0.5240 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nachshonc/a2c-PandaReachDense-v2 | nachshonc | "2023-01-27T07:07:09Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-27T07:04:59Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -21.97 +/- 7.47
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual SB3 naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained agent
checkpoint = load_from_hub(repo_id="nachshonc/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
matthew-mcc/gpt2-med-finetuned | matthew-mcc | "2024-03-15T23:42:34Z" | 116 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-14T04:41:51Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-med-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-med-finetuned
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1359
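No usage example is provided; a minimal text-generation sketch (the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="matthew-mcc/gpt2-med-finetuned")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```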
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1236 | 1.0 | 1000 | 0.1376 |
| 0.1166 | 2.0 | 2000 | 0.1359 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
thusinh1969/LLaMA-2-finetune-cp57900-ep1.5-20APRIL2024 | thusinh1969 | "2024-04-24T10:38:23Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-24T10:33:18Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tannyst/Tanny-0.06L-1.2k.16r | Tannyst | "2025-01-16T18:07:39Z" | 10 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-16T17:18:10Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Tanny
---
# Tanny 0.06L 1.2K.16R
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Tanny` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Tannyst/Tanny-0.06L-1.2k.16r', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
abhishekyo/codellama2-finetuned-codex-fin | abhishekyo | "2024-03-30T00:42:12Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-30T00:42:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OwOpeepeepoopoo/herewegoagain22c | OwOpeepeepoopoo | "2024-05-04T08:00:39Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-04T07:59:32Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
100rab25/spa_images_classifier_jd_v1_convnext | 100rab25 | "2024-03-20T07:14:49Z" | 262 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-03-20T06:49:46Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: spa_images_classifier_jd_v1_convnext
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.978066110596231
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spa_images_classifier_jd_v1_convnext
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0662
- Accuracy: 0.9781
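No usage example is provided; a minimal sketch with the image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="100rab25/spa_images_classifier_jd_v1_convnext")
print(classifier("spa_room.jpg"))
```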
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2494 | 1.0 | 227 | 0.1194 | 0.9555 |
| 0.2333 | 2.0 | 455 | 0.1008 | 0.9635 |
| 0.1977 | 3.0 | 683 | 0.0855 | 0.9703 |
| 0.1405 | 4.0 | 911 | 0.0792 | 0.9744 |
| 0.1575 | 5.0 | 1138 | 0.0734 | 0.9731 |
| 0.0948 | 6.0 | 1366 | 0.0666 | 0.9778 |
| 0.1049 | 7.0 | 1594 | 0.0662 | 0.9781 |
| 0.0928 | 8.0 | 1822 | 0.0693 | 0.9774 |
| 0.0903 | 9.0 | 2049 | 0.0704 | 0.9771 |
| 0.0759 | 9.97 | 2270 | 0.0652 | 0.9778 |
### Framework versions
- Transformers 4.35.0
- Pytorch 1.12.1+cu113
- Datasets 2.17.1
- Tokenizers 0.14.1
|
tensorblock/FastLlama-3.2-1B-Instruct-GGUF | tensorblock | "2024-12-31T22:34:35Z" | 64 | 0 | transformers | [
"transformers",
"gguf",
"math",
"lora",
"science",
"chemistry",
"biology",
"code",
"text-generation-inference",
"unsloth",
"llama",
"TensorBlock",
"GGUF",
"en",
"de",
"es",
"fr",
"it",
"pt",
"hi",
"th",
"dataset:HuggingFaceTB/smoltalk",
"base_model:suayptalha/FastLlama-3.2-1B-Instruct",
"base_model:adapter:suayptalha/FastLlama-3.2-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-31T22:30:57Z" | ---
library_name: transformers
tags:
- math
- lora
- science
- chemistry
- biology
- code
- text-generation-inference
- unsloth
- llama
- TensorBlock
- GGUF
license: apache-2.0
datasets:
- HuggingFaceTB/smoltalk
language:
- en
- de
- es
- fr
- it
- pt
- hi
- th
base_model: suayptalha/FastLlama-3.2-1B-Instruct
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## suayptalha/FastLlama-3.2-1B-Instruct - GGUF
This repo contains GGUF format model files for [suayptalha/FastLlama-3.2-1B-Instruct](https://huggingface.co/suayptalha/FastLlama-3.2-1B-Instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 01 Jan 2025
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
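For reference, one way to run a downloaded file locally is with the `llama-cli` binary from llama.cpp (the filename and flags below are illustrative; adjust to your build):

```shell
llama-cli -m FastLlama-3.2-1B-Instruct-Q4_K_M.gguf -p "Explain GGUF quantization in one sentence." -n 128
```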
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [FastLlama-3.2-1B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/FastLlama-3.2-1B-Instruct-GGUF/blob/main/FastLlama-3.2-1B-Instruct-Q2_K.gguf) | Q2_K | 0.581 GB | smallest, significant quality loss - not recommended for most purposes |
| [FastLlama-3.2-1B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/FastLlama-3.2-1B-Instruct-GGUF/blob/main/FastLlama-3.2-1B-Instruct-Q3_K_S.gguf) | Q3_K_S | 0.642 GB | very small, high quality loss |
| [FastLlama-3.2-1B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/FastLlama-3.2-1B-Instruct-GGUF/blob/main/FastLlama-3.2-1B-Instruct-Q3_K_M.gguf) | Q3_K_M | 0.691 GB | very small, high quality loss |
| [FastLlama-3.2-1B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/FastLlama-3.2-1B-Instruct-GGUF/blob/main/FastLlama-3.2-1B-Instruct-Q3_K_L.gguf) | Q3_K_L | 0.733 GB | small, substantial quality loss |
| [FastLlama-3.2-1B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/FastLlama-3.2-1B-Instruct-GGUF/blob/main/FastLlama-3.2-1B-Instruct-Q4_0.gguf) | Q4_0 | 0.771 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [FastLlama-3.2-1B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/FastLlama-3.2-1B-Instruct-GGUF/blob/main/FastLlama-3.2-1B-Instruct-Q4_K_S.gguf) | Q4_K_S | 0.776 GB | small, greater quality loss |
| [FastLlama-3.2-1B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/FastLlama-3.2-1B-Instruct-GGUF/blob/main/FastLlama-3.2-1B-Instruct-Q4_K_M.gguf) | Q4_K_M | 0.808 GB | medium, balanced quality - recommended |
| [FastLlama-3.2-1B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/FastLlama-3.2-1B-Instruct-GGUF/blob/main/FastLlama-3.2-1B-Instruct-Q5_0.gguf) | Q5_0 | 0.893 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [FastLlama-3.2-1B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/FastLlama-3.2-1B-Instruct-GGUF/blob/main/FastLlama-3.2-1B-Instruct-Q5_K_S.gguf) | Q5_K_S | 0.893 GB | large, low quality loss - recommended |
| [FastLlama-3.2-1B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/FastLlama-3.2-1B-Instruct-GGUF/blob/main/FastLlama-3.2-1B-Instruct-Q5_K_M.gguf) | Q5_K_M | 0.912 GB | large, very low quality loss - recommended |
| [FastLlama-3.2-1B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/FastLlama-3.2-1B-Instruct-GGUF/blob/main/FastLlama-3.2-1B-Instruct-Q6_K.gguf) | Q6_K | 1.022 GB | very large, extremely low quality loss |
| [FastLlama-3.2-1B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/FastLlama-3.2-1B-Instruct-GGUF/blob/main/FastLlama-3.2-1B-Instruct-Q8_0.gguf) | Q8_0 | 1.321 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/FastLlama-3.2-1B-Instruct-GGUF --include "FastLlama-3.2-1B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/FastLlama-3.2-1B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
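If you prefer to stay in Python, the same single-file download can be done with `hf_hub_download` from `huggingface_hub`; `MY_LOCAL_DIR` is a placeholder, as in the CLI examples above:
```python
from huggingface_hub import hf_hub_download

# Downloads one quantized file and returns its local path.
gguf_path = hf_hub_download(
    repo_id="tensorblock/FastLlama-3.2-1B-Instruct-GGUF",
    filename="FastLlama-3.2-1B-Instruct-Q4_K_M.gguf",
    local_dir="MY_LOCAL_DIR",  # placeholder directory
)
print(gguf_path)
```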
|
GarenG/Qwen2.5-3B-GRPO | GarenG | "2025-02-25T10:47:35Z" | 0 | 0 | null | [
"gguf",
"qwen2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-25T06:47:35Z" | ---
license: apache-2.0
---
|
Luongdzung/hoa-1b4-sft-order3-mat-geo-phy-rslora | Luongdzung | "2025-02-18T14:43:18Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Luongdzung/hoa-1b4-sft-order3-mat-geo-rslora-ALL-WEIGHT",
"base_model:adapter:Luongdzung/hoa-1b4-sft-order3-mat-geo-rslora-ALL-WEIGHT",
"region:us"
] | null | "2025-02-18T14:43:16Z" | ---
library_name: peft
base_model: Luongdzung/hoa-1b4-sft-order3-mat-geo-rslora-ALL-WEIGHT
tags:
- generated_from_trainer
model-index:
- name: hoa-1b4-sft-order3-mat-geo-phy-rslora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hoa-1b4-sft-order3-mat-geo-phy-rslora
This model is a fine-tuned version of [Luongdzung/hoa-1b4-sft-order3-mat-geo-rslora-ALL-WEIGHT](https://huggingface.co/Luongdzung/hoa-1b4-sft-order3-mat-geo-rslora-ALL-WEIGHT) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
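Until the authors add details, here is a minimal, untested sketch of loading this adapter on top of its base model with PEFT; it assumes the standard `PeftModel` loading pattern and that the base model loads with `AutoModelForCausalLM`:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Luongdzung/hoa-1b4-sft-order3-mat-geo-rslora-ALL-WEIGHT"
adapter_id = "Luongdzung/hoa-1b4-sft-order3-mat-geo-phy-rslora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the rsLoRA adapter
model.eval()
```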
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0 |
togethercomputer/Llama-2-7B-32K-Instruct | togethercomputer | "2024-03-28T01:13:47Z" | 6,853 | 159 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:togethercomputer/llama-instruct",
"arxiv:2307.03172",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-08-08T20:22:27Z" | ---
license: llama2
language:
- en
library_name: transformers
datasets:
- togethercomputer/llama-instruct
---
# Llama-2-7B-32K-Instruct
## Model Description
Llama-2-7B-32K-Instruct is an open-source, long-context chat model finetuned from [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K), over high-quality instruction and chat data.
We built Llama-2-7B-32K-Instruct with less than 200 lines of Python script using [Together API](https://together.ai/blog/api-announcement), and we also make the [recipe fully available](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).
We hope that this can enable everyone to finetune their own version of [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K) — play with [Together API](https://together.ai/blog/api-announcement) and give us feedback!
## Data Collection Details
Llama-2-7B-32K-Instruct is fine-tuned over a combination of two parts:
1. **19K single- and multi-round conversations generated by human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) outputs**.
We collected the dataset following the distillation paradigm used by Alpaca, Vicuna, WizardLM, and Orca — producing instructions by querying a powerful LLM (in this case, [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)).
The complete dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).
We also share the complete recipe for the data collection process [here](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).
2. **Long-context Summarization and Long-context QA**.
We follow the recipe of [Llama-2-7B-32K](https://together.ai/blog/Llama-2-7B-32K), and train our model with the [BookSum dataset](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) and [Multi-document Question Answering](https://arxiv.org/abs/2307.03172).
The final data mixture used for model finetuning is: 19K instruction (50%) + BookSum (25%) + MQA (25%).
## Model Usage
We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement). The updated inference stack allows for efficient inference.
To run the model locally, we strongly recommend installing Flash Attention V2, which is necessary to obtain the best performance:
```
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```
You can load the model directly from the Hugging Face model hub using
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct",
trust_remote_code=True, torch_dtype=torch.float16)
input_ids = tokenizer.encode("[INST]\nWrite a poem about cats\n[/INST]\n\n", return_tensors="pt")
output = model.generate(input_ids, max_length=128,
temperature=0.7, repetition_penalty=1.1, top_p=0.7, top_k=50)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```
The model is also hosted on [Together Playground](https://api.together.xyz/playground). You can simply play with the model by using prompt formatted by:
```
[INST]\n<your instruction here>\n[/INST]\n\n
```
For example, if we query the model with
```
[INST]\nWrite a poem about cats\n[/INST]\n\n
```
the model will return
```
[INST]
Write a poem about cats
[/INST]
Cats are mysterious creatures, with their aloof and independent nature. They're also incredibly beautiful, with their sleek fur and piercing eyes. Here's a poem that captures the essence of cats:
Cats, oh cats, how can I describe you?
Your beauty is beyond compare, it seems.
You're graceful and elegant, like a ballerina's dance,
But don't let your charm fool you, for you're not easily tamed.
With your soft purring and playful meows,
You draw us in with your enchanting powers.
We love to watch you play, your tail twirling 'round,
As if you're dancing on air, with no sound.
But don't be fooled by your sweetness, my friend,
For beneath that gentle exterior, lies a fierce defender.
When danger lurks, you'll spring into action,
Protecting those you hold dear, without question.
So let us admire you, from afar,
For in your own way, you're truly unique, a star.
And though we may never fully understand,
The depths of your soul, we'll always stand, hand in paw, as one.
This poem captures the essence of cats, highlighting their beauty, independence, and protective nature. It also celebrates the special bond between humans and cats, recognizing their unique qualities and the joy they bring to our lives.
```
## Model Evaluation
We evaluate the model from three aspects: 1) [Alpaca Eval](https://tatsu-lab.github.io/alpaca_eval/);
2) [Rouge score over BookSum](https://together.ai/blog/Llama-2-7B-32K); and
3) [Accuracy over Multi-document Question Answering (MQA)](https://together.ai/blog/Llama-2-7B-32K).
We compare with models including
[GPT-3.5-Turbo-16K](https://platform.openai.com/docs/models/gpt-3-5),
[Llama-2-7B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),
[Longchat-7b-16k](https://huggingface.co/lmsys/longchat-7b-16k)
and [Longchat-7b-v1.5-32k](https://huggingface.co/lmsys/longchat-7b-v1.5-32k).
We summarize the results below:
* Alpaca Eval
| Model | win_rate | standard_error | n_total | avg_length |
| -------- | ------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 71.37 | 1.59 | 805 | 1479 |
| Llama-2-7B-32K-Instruct | 70.36 | 1.61 | 803 | 1885 |
| oasst-rlhf-llama-33b | 66.52 | 1.66 | 805 | 1079 |
| text_davinci_003 | 50.00 | 0.00 | 805 | 307|
| falcon-40b-instruct | 45.71 | 1.75 | 805 | 662 |
| alpaca-farm-ppo-human | 41.24 | 1.73 | 805 | 803 |
| alpaca-7b | 26.46 | 1.54 | 805 | 396 |
| text_davinci_001 | 15.17 | 1.24 | 804 | 296 |
* Rouge Score over BookSum
| Model | R1 | R2 | RL |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.055 | 0.008 | 0.046 |
| Longchat-7b-16k | 0.303 | 0.055 | 0.160 |
| Longchat-7b-v1.5-32k | 0.308 | 0.057 | 0.163 |
| GPT-3.5-Turbo-16K | 0.324 | 0.066 | 0.178 |
| Llama-2-7B-32K-Instruct (ours) | 0.336 | 0.076 | 0.184 |
* Accuracy over MQA
| Model | 20 docs (Avg 2.9K tokens) | 30 docs (Avg 4.4K tokens) | 50 docs (Avg 7.4K tokens) |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.448 | 0.421 | 0.354 |
| Longchat-7b-16k | 0.510 | 0.473 | 0.428 |
| Longchat-7b-v1.5-32k | 0.534 | 0.516 | 0.479 |
| GPT-3.5-Turbo-16K | 0.622 | 0.609 | 0.577 |
| Llama-2-7B-32K-Instruct (ours) | 0.622 | 0.604 | 0.589 |
## Limitations and Bias
As with all language models, Llama-2-7B-32K-Instruct may generate incorrect or biased content. It's important to keep this in mind when using the model.
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4) |
apishi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slow_amphibious_swan | apishi | "2025-04-14T02:08:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am slow amphibious swan",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-13T16:35:34Z" | |
NorGLM/NorLlama-3B-NO-BoolQ-peft | NorGLM | "2024-03-10T23:38:40Z" | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | "2024-03-10T23:37:04Z" | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorLlama-3B-NO-BoolQ-peft is trained on top of [NorLlama-3B](https://huggingface.co/NorGLM/NorLlama-3B) model on [NO-BoolQ](https://huggingface.co/datasets/NorGLM/NO-BoolQ) dataset.
Data format:
```
input: {passage}[SEP]{question}
label: {True, False} -> {1,0}
```
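In code, a single input string can be built like this (a small illustration; note that the evaluation snippet further down joins with spaces around `[SEP]`):
```python
def build_input(passage: str, question: str) -> str:
    # Matches the " [SEP] "-joined format used in the evaluation code below.
    return f"{passage} [SEP] {question}"

label_map = {True: 1, False: 0}  # True/False labels map to 1/0
```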
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
source_model_id = "NorGLM/NorLlama-3B"
peft_model_id = "NorGLM/NorLlama-3B-NO-BoolQ-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference Example
Load the model to evaluate on the validation set:
```python
import numpy as np
from datasets import Dataset, load_dataset
from sklearn.metrics import accuracy_score, f1_score

def getDataSetFromFiles(df):
# convert dataset
df["text"] = df[["passage", "question"]].apply(lambda x: " [SEP] ".join(x.astype(str)), axis =1)
df = df.drop(["idx", "passage", "question"], axis=1)
#df['label'] = df['label'].replace({1:'contradiction', -1:'entailment', 0:'neutral'})
df["label"] = df.label.map({True: 1, False: 0})
return Dataset.from_pandas(df)
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NorGLM/NO-BoolQ", data_files="val.jsonl")
eval_data = getDataSetFromFiles(eval_data["train"].to_pandas())
print("--MAKING PREDICTIONS---")
model.eval()
y_true = []
y_pred = []
count = 0
for data in eval_data:
count = count + 1
if count % 100 == 0:
print(count)
inputs = tokenizer(data['text'], return_tensors="pt").to(torch_device)
with torch.no_grad():
logits = model(**inputs).logits
#print(logits)
predicted_class_id = logits.argmax().item()
y_true.append(data['label'])
y_pred.append(predicted_class_id)
print(y_pred)
print(f"Lenght of true_values: {len(y_true)}")
print(f"Lenght of predicted_values: {len(y_pred)}")
y_true = np.array(y_true)
y_pred = np.array(y_pred)
F_score = f1_score(y_true, y_pred, average="macro")
print(f"F1 score: {F_score}")
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy}")
```
## Note
More training details will be released soon! |
soczyste-milfy/cycate | soczyste-milfy | "2023-09-26T10:12:16Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-09-26T10:09:25Z" | # Juicy MILFs Are Wonderful
## Introduction
Over the years, society has placed a growing emphasis on youth and beauty as its chief values. Yet many people are beginning to notice that maturity carries its own unique charm and wisdom. In this article we will try to dispel myths about age and highlight why <a href="https://unsee.pl/chetne-milfy">juicy MILFs</a> are wonderful on many different levels.
## Life experience
With age comes experience, which is invaluable in many areas of life. Mature women often have a rich history full of diverse experiences, which makes them interesting people with a great deal to offer in conversations and relationships.
## Self-confidence
Through years of working on themselves and the experience they have gained, mature women acquire a self-confidence that younger people often lack. This confidence shows not only in their behavior but also in their ability to make decisions, manage their time, and set their own priorities.
## Emotional stability
Along with life experience and self-confidence comes emotional stability. Mature women are often more emotionally balanced, which makes them excellent support for a partner, children, or friends.
## Wisdom
There is no denying that maturity often brings wisdom. Experiences, both good and bad, teach and shape character. Wisdom is not just knowledge but also the ability to apply it in practice, which is invaluable in difficult life situations.
## Understanding one's own needs
In our youth we often do not fully understand what we want from life. Mature women have clearly defined needs and goals, which makes them more fulfilled and satisfied with life.
## Summary
Mature women are wonderful in many different ways. Their life experience, self-confidence, emotional stability, and wisdom make them exceptionally valuable and inspiring people. Moving away from stereotypes about age and recognizing the value that maturity brings is a step toward a deeper and more satisfying life for all of us. |
thiagoms7/whisper-small-pt | thiagoms7 | "2023-08-21T21:49:26Z" | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"pt",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-08-18T04:03:38Z" | ---
language:
- pt
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small pt - thiagoms
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: pt
split: test
args: pt
metrics:
- name: Wer
type: wer
value: 302.8603818223639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small pt - thiagoms
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2549
- Wer Ortho: 266.0002
- Wer: 302.8604
## Model description
More information needed
## Intended uses & limitations
More information needed
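Until more details are added, here is a minimal sketch of transcribing Portuguese audio with this checkpoint via the `transformers` pipeline; the audio file name is a placeholder:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="thiagoms7/whisper-small-pt")
result = asr("sample_pt.wav")  # placeholder path to a Portuguese audio file
print(result["text"])
```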
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:--------:|
| 0.2453 | 0.28 | 500 | 0.2549 | 266.0002 | 302.8604 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jo-mengr/mmcontext-100k-natural_language_annotation-geneformer-2024-text-unfrozen | jo-mengr | "2025-03-06T14:00:35Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:94500",
"loss:MultipleNegativesRankingLoss",
"code",
"dataset:jo-mengr/cellxgene_pseudo_bulk_35k_multiplets_natural_language_annotation",
"dataset:jo-mengr/geo_70k_multiplets_natural_language_annotation",
"arxiv:1908.10084",
"arxiv:1705.00652",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-03-06T14:00:19Z" | ---
language:
- code
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:94500
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Primary CD8+ T cells from a subject identified as CL-MCRL, exposed
to the GPR epitope with a dpi (days post-infection) of 87.5.
sentences:
- Cancer cell line (CCL23) derived from a carcinoma patient.
- Primary CD34+ human cells in three-phase in vitro culture, isolated on day 13,
with GG1dd zf vector transduction.
- 23-year-old primary nonETP leukemic blasts from bone marrow.
- source_sentence: Hematopoietic cells with PI-AnnexinV-GFP+CD33+ phenotype from a
xenograft strain NRG-3GS.
sentences:
- H9 embryonic stem cells treated with recombinant Wnt3a for 8 hours in culture.
- iCell Hepatocytes that have been treated with 075\_OLBO\_10 in a study involving
BO class and dose 10.
- 48 hour treatment of colorectal carcinoma cell line HCT116 (colorectal cancer)
with control treatment.
- source_sentence: A 26-year-old male's CD4-positive alpha-beta memory T cell, activated
with CD40L.
sentences:
- Sample is a supernatant from luminal epithelial cells of mammary gland, obtained
from a 65-year-old female who had undergone reduction mammoplasty, with a BMI
of 32.0.
- Neuron cell type from the hippocampal formation, specifically from the Head of
hippocampus (HiH) - Uncal CA2-CA3 region, of a 29-year-old male. The neuron falls
under the supercluster term 'Deep-layer corticothalamic and 6b'.
- Glandular epithelial cell taken from the endometrium during the mid-secretory
stage of the menstrual cycle, from a female individual in early adulthood.
- source_sentence: Endothelial cell from the myelencephalon (medulla oblongata) -
precerebellar nuclei (PrCbN) - inferior olive (IO) region of a 29-year-old male.
sentences:
- Naive B cells derived from the thoracic lymph node of a female individual in her
sixties.
- Granulosa cell sample from a female gonad tissue, 21 weeks post-fertilization,
in the G1 phase of the cell cycle, and belonging to the preGranulosa lineage.
- Endothelial cell sample from gonad tissue of a female human individual at the
14th week post-fertilization stage, cryopreserved and enriched for total cells,
not a doublet, and in G1 phase, taken during the 2nd trimester.
- source_sentence: Endothelial cell of lymphatic vessel derived from fresh fimbria
tissue sample of a 65-year old female.
sentences:
- Neuron cell type from a 29-year-old human, specifically from the thalamic complex,
specifically the thalamus (THM) - posterior nuclear complex of thalamus (PoN)
- medial geniculate nuclei (MG).
- Neuron from the mediodorsal nucleus of thalamus, which is part of the medial nuclear
complex of thalamus (MNC) in the thalamic complex, taken from a 42-year-old male
human donor with European ethnicity. The neuron belongs to the Thalamic excitatory
supercluster.
- Plasma cells derived from lung parenchyma tissue of a female individual in her
eighth decade, with a 24-hour delay between sample collection and processing.
datasets:
- jo-mengr/cellxgene_pseudo_bulk_35k_multiplets_natural_language_annotation
- jo-mengr/geo_70k_multiplets_natural_language_annotation
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer
results:
- task:
type: triplet
name: Triplet
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy
value: 0.954285740852356
name: Cosine Accuracy
- type: cosine_accuracy
value: 0.9490000009536743
name: Cosine Accuracy
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained on the [cellxgene_pseudo_bulk_35k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_35k_multiplets_natural_language_annotation) and [geo_70k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/geo_70k_multiplets_natural_language_annotation) datasets. It maps sentences & paragraphs to a None-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** None tokens
- **Output Dimensionality:** None dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- [cellxgene_pseudo_bulk_35k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_35k_multiplets_natural_language_annotation)
- [geo_70k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/geo_70k_multiplets_natural_language_annotation)
- **Language:** code
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): MMContextEncoder(
(text_encoder): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(28996, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0-11): 12 x BertLayer(
(attention): BertAttention(
(self): BertSdpaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(text_adapter): AdapterModule(
(net): Sequential(
(0): Linear(in_features=768, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=2048, bias=True)
(3): BatchNorm1d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(omics_adapter): AdapterModule(
(net): Sequential(
(0): Linear(in_features=512, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=2048, bias=True)
(3): BatchNorm1d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jo-mengr/mmcontext-100k-natural_language_annotation-geneformer-2024-text-unfrozen")
# Run inference
sentences = [
'Endothelial cell of lymphatic vessel derived from fresh fimbria tissue sample of a 65-year old female.',
'Neuron cell type from a 29-year-old human, specifically from the thalamic complex, specifically the thalamus (THM) - posterior nuclear complex of thalamus (PoN) - medial geniculate nuclei (MG).',
'Plasma cells derived from lung parenchyma tissue of a female individual in her eighth decade, with a 24-hour delay between sample collection and processing.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9543** |
#### Triplet
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:----------|
| **cosine_accuracy** | **0.949** |
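For reference, a cosine-accuracy score of this kind can be computed on your own text triplets with `TripletEvaluator`; the triplets below are placeholders written in the style of the training captions:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("jo-mengr/mmcontext-100k-natural_language_annotation-geneformer-2024-text-unfrozen")

evaluator = TripletEvaluator(
    anchors=["Endothelial cell of lymphatic vessel from fimbria tissue of a 65-year old female."],  # placeholder
    positives=["Lymphatic endothelial cell derived from a fresh fimbria sample."],                  # placeholder
    negatives=["Plasma cells derived from lung parenchyma tissue."],                                # placeholder
)
print(evaluator(model))  # proportion of triplets where the anchor is closer to the positive
```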
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### cellxgene_pseudo_bulk_35k_multiplets_natural_language_annotation
* Dataset: [cellxgene_pseudo_bulk_35k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_35k_multiplets_natural_language_annotation) at [3c6f498](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_35k_multiplets_natural_language_annotation/tree/3c6f49807da90a56d87cc2a9f84412c2b3543d18)
* Size: 31,500 training samples
* Columns: <code>anndata_ref</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anndata_ref | positive | negative_1 | negative_2 |
|:--------|:-------------------|:-------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:-------------------|
| type | dict | string | string | dict |
| details | <ul><li></li></ul> | <ul><li>min: 53 characters</li><li>mean: 163.04 characters</li><li>max: 743 characters</li></ul> | <ul><li>min: 43 characters</li><li>mean: 169.26 characters</li><li>max: 829 characters</li></ul> | <ul><li></li></ul> |
* Samples:
| anndata_ref | positive | negative_1 | negative_2 |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/og3XeEiMqqRjNK7/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/57QG8QGk2pPinLn/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/KkbBxngNCtATdiB/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/iwSSpYaeHZNLPbp/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/oQNsJsdzqnebrGD/download'}}, 'sample_id': 'census_1f1c5c14-5949-4c81-b28e-b272e271b672_570'}</code> | <code>Stromal cell of ovary, specifically Stroma-2, from a human adult female individual, in S phase of the cell cycle.</code> | <code>Memory B cell derived from the tonsil tissue of a 3-year-old male human with obstructive sleep apnea and recurrent tonsillitis.</code> | <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/og3XeEiMqqRjNK7/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/57QG8QGk2pPinLn/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/KkbBxngNCtATdiB/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/iwSSpYaeHZNLPbp/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/oQNsJsdzqnebrGD/download'}}, 'sample_id': 'census_9372df2d-13d6-4fac-980b-919a5b7eb483_46'}</code> |
| <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/og3XeEiMqqRjNK7/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/57QG8QGk2pPinLn/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/KkbBxngNCtATdiB/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/iwSSpYaeHZNLPbp/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/oQNsJsdzqnebrGD/download'}}, 'sample_id': 'census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_34872'}</code> | <code>CD8-positive, alpha-beta T cell sample from a 52-year old Asian female with managed systemic lupus erythematosus (SLE).</code> | <code>CD1c-positive myeloid dendritic cell from the lung of a 63-year-old male, derived from normal tissue.</code> | <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/og3XeEiMqqRjNK7/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/57QG8QGk2pPinLn/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/KkbBxngNCtATdiB/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/iwSSpYaeHZNLPbp/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/oQNsJsdzqnebrGD/download'}}, 'sample_id': 'census_182f6a56-7360-4924-a74e-1772e07b3031_42'}</code> |
| <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/og3XeEiMqqRjNK7/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/57QG8QGk2pPinLn/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/KkbBxngNCtATdiB/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/iwSSpYaeHZNLPbp/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/oQNsJsdzqnebrGD/download'}}, 'sample_id': 'census_74cff64f-9da9-4b2a-9b3b-8a04a1598040_7321'}</code> | <code>Hofbauer cell derived from the decidua basalis tissue of a female individual at 8 post conception week (8_PCW). The sample is a nucleus.</code> | <code>B cell sample from a 29-year-old European female with blood tissue, exhibiting elevated expression of type 1 interferon-stimulated genes (ISGs) in monocytes, reduction of naïve CD4+ T cells correlating with monocyte ISG expression, and expansion of repertoire-restricted cytotoxic GZMH+ CD8+ T cells.</code> | <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/og3XeEiMqqRjNK7/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/57QG8QGk2pPinLn/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/KkbBxngNCtATdiB/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/iwSSpYaeHZNLPbp/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/oQNsJsdzqnebrGD/download'}}, 'sample_id': 'census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_25017'}</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### geo_70k_multiplets_natural_language_annotation
* Dataset: [geo_70k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/geo_70k_multiplets_natural_language_annotation) at [449eb79](https://huggingface.co/datasets/jo-mengr/geo_70k_multiplets_natural_language_annotation/tree/449eb79e41b05af4d3e32900144411963f626f8c)
* Size: 63,000 training samples
* Columns: <code>anndata_ref</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anndata_ref | positive | negative_1 | negative_2 |
|:--------|:-------------------|:------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:-------------------|
| type | dict | string | string | dict |
| details | <ul><li></li></ul> | <ul><li>min: 21 characters</li><li>mean: 139.4 characters</li><li>max: 696 characters</li></ul> | <ul><li>min: 23 characters</li><li>mean: 142.09 characters</li><li>max: 705 characters</li></ul> | <ul><li></li></ul> |
* Samples:
| anndata_ref | positive | negative_1 | negative_2 |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/mwyWK7cTL3j5ydA/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/Tg4TMSg8gDtxJ5x/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/QjSE4s5ZHamjwfi/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/rYEATQXRJsx42Qr/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/cWgZaKPJLsgb5Zo/download'}}, 'sample_id': 'SRX3111576'}</code> | <code>198Z\_MSCB-067 sample contains primary cells that are neuronal progenitors from patient type WB\_1.</code> | <code>31-year-old female Caucasian with ntm disease provided a whole blood sample on July 11, 2016. The baseline FEVPP was 89.74 and FVCpp was 129.41.</code> | <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/mwyWK7cTL3j5ydA/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/Tg4TMSg8gDtxJ5x/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/QjSE4s5ZHamjwfi/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/rYEATQXRJsx42Qr/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/cWgZaKPJLsgb5Zo/download'}}, 'sample_id': 'SRX6591734'}</code> |
| <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/mwyWK7cTL3j5ydA/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/Tg4TMSg8gDtxJ5x/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/QjSE4s5ZHamjwfi/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/rYEATQXRJsx42Qr/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/cWgZaKPJLsgb5Zo/download'}}, 'sample_id': 'SRX7834244'}</code> | <code>CD8+ T cells from a healthy skin sample, labeled C4, from plate rep1, well E6, sequencing batch b7, which passed QC, and clustered as 2\_Resid.</code> | <code>6-week-old (PCW6) neuronal epithelium tissue from donor HSB325, cultured using C1-72 chip.</code> | <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/mwyWK7cTL3j5ydA/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/Tg4TMSg8gDtxJ5x/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/QjSE4s5ZHamjwfi/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/rYEATQXRJsx42Qr/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/cWgZaKPJLsgb5Zo/download'}}, 'sample_id': 'SRX2440281'}</code> |
| <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/mwyWK7cTL3j5ydA/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/Tg4TMSg8gDtxJ5x/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/QjSE4s5ZHamjwfi/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/rYEATQXRJsx42Qr/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/cWgZaKPJLsgb5Zo/download'}}, 'sample_id': 'SRX3112138'}</code> | <code>201Z\_MSCB-083 is a sample of primary neuronal progenitor cells from patient MD1 with no reported treatment.</code> | <code>48-hour sample from HPV-negative UPCI:SCC131 cell line, a head and neck squamous cell carcinoma (HNSCC) cell line, that has not been irradiated.</code> | <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/mwyWK7cTL3j5ydA/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/Tg4TMSg8gDtxJ5x/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/QjSE4s5ZHamjwfi/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/rYEATQXRJsx42Qr/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/cWgZaKPJLsgb5Zo/download'}}, 'sample_id': 'SRX7448263'}</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
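The `scale` and `similarity_fct` values above correspond to constructing the loss roughly as follows; this is a sketch, not the authors' exact training script, and the base model here is a placeholder:
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("bert-base-cased")  # placeholder; the card's encoder is a custom MMContextEncoder

train_loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,                   # matches "scale": 20.0 above
    similarity_fct=util.cos_sim,  # matches "similarity_fct": "cos_sim"
)
```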
### Evaluation Datasets
#### cellxgene_pseudo_bulk_35k_multiplets_natural_language_annotation
* Dataset: [cellxgene_pseudo_bulk_35k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_35k_multiplets_natural_language_annotation) at [3c6f498](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_35k_multiplets_natural_language_annotation/tree/3c6f49807da90a56d87cc2a9f84412c2b3543d18)
* Size: 3,500 evaluation samples
* Columns: <code>anndata_ref</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anndata_ref | positive | negative_1 | negative_2 |
|:--------|:-------------------|:-------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:-------------------|
| type | dict | string | string | dict |
| details | <ul><li></li></ul> | <ul><li>min: 51 characters</li><li>mean: 168.27 characters</li><li>max: 829 characters</li></ul> | <ul><li>min: 51 characters</li><li>mean: 167.36 characters</li><li>max: 963 characters</li></ul> | <ul><li></li></ul> |
* Samples:
| anndata_ref | positive | negative_1 | negative_2 |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/5TPJabJ69oYLqLE/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/2tzG5oC8bWBQnGA/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/KfdWYXxsw47TYx6/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/oFLREC3S5yEitZT/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/ydkXdKKqWEzSBj8/download'}}, 'sample_id': 'census_842c6f5d-4a94-4eef-8510-8c792d1124bc_6822'}</code> | <code>Non-classical monocyte cell type, derived from a fresh breast tissue sample of an African American female donor with low breast density, obese BMI, and premenopausal status. The cell was obtained through resection procedure and analyzed using single-cell transcriptomics as part of the Human Breast Cell Atlas (HBCA) study.</code> | <code>Memory T cell derived from a 65-79 year-old female, specifically with a Tcm phenotype, located in the mesenteric lymph node.</code> | <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/5TPJabJ69oYLqLE/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/2tzG5oC8bWBQnGA/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/KfdWYXxsw47TYx6/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/oFLREC3S5yEitZT/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/ydkXdKKqWEzSBj8/download'}}, 'sample_id': 'census_7970bd6b-f752-47a9-8643-2af16855ec49_12558'}</code> |
| <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/5TPJabJ69oYLqLE/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/2tzG5oC8bWBQnGA/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/KfdWYXxsw47TYx6/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/oFLREC3S5yEitZT/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/ydkXdKKqWEzSBj8/download'}}, 'sample_id': 'census_b46237d1-19c6-4af2-9335-9854634bad16_9825'}</code> | <code>Enteric neuron cells derived from the ileum tissue at Carnegie stage 22.</code> | <code>Sample is a CD16-negative, CD56-bright natural killer cell derived from a female in her eighth decade.</code> | <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/5TPJabJ69oYLqLE/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/2tzG5oC8bWBQnGA/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/KfdWYXxsw47TYx6/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/oFLREC3S5yEitZT/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/ydkXdKKqWEzSBj8/download'}}, 'sample_id': 'census_1a38e762-2465-418f-b81c-6a4bce261c34_211'}</code> |
| <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/5TPJabJ69oYLqLE/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/2tzG5oC8bWBQnGA/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/KfdWYXxsw47TYx6/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/oFLREC3S5yEitZT/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/ydkXdKKqWEzSBj8/download'}}, 'sample_id': 'census_d7d7e89c-c93a-422d-8958-9b4a90b69558_4209'}</code> | <code>Activated CD16-positive, CD56-dim natural killer cell taken from a 26-year-old male, activated with CD3, and found to be in G1 phase.</code> | <code>A CD4-positive, alpha-beta T cell derived from proximal lung tissue of a male human donor at the 22nd week post-fertilization stage.</code> | <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/5TPJabJ69oYLqLE/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/2tzG5oC8bWBQnGA/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/KfdWYXxsw47TYx6/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/oFLREC3S5yEitZT/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/ydkXdKKqWEzSBj8/download'}}, 'sample_id': 'census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_22282'}</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### geo_70k_multiplets_natural_language_annotation
* Dataset: [geo_70k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/geo_70k_multiplets_natural_language_annotation) at [449eb79](https://huggingface.co/datasets/jo-mengr/geo_70k_multiplets_natural_language_annotation/tree/449eb79e41b05af4d3e32900144411963f626f8c)
* Size: 7,000 evaluation samples
* Columns: <code>anndata_ref</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anndata_ref | positive | negative_1 | negative_2 |
|:--------|:-------------------|:------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:-------------------|
| type | dict | string | string | dict |
| details | <ul><li></li></ul> | <ul><li>min: 22 characters</li><li>mean: 138.7 characters</li><li>max: 702 characters</li></ul> | <ul><li>min: 22 characters</li><li>mean: 131.79 characters</li><li>max: 702 characters</li></ul> | <ul><li></li></ul> |
* Samples:
| anndata_ref | positive | negative_1 | negative_2 |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/kfjX6LkLewqssdN/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/kxd2NqJjnMSArf6/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/zqPbdqn5nCgo7rb/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/b7sANypKxGyYQ2J/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/TwFF6TWRp9sMxgc/download'}}, 'sample_id': 'SRX16033546'}</code> | <code>A549 lung adenocarcinoma cell line with ectopic expression of TPK1 p.G48C mutation.</code> | <code>3 days after the 4th immunization, blood sample from donor 1033 with low antibody-dependent cellular phagocytosis (ADCP) category.</code> | <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/kfjX6LkLewqssdN/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/kxd2NqJjnMSArf6/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/zqPbdqn5nCgo7rb/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/b7sANypKxGyYQ2J/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/TwFF6TWRp9sMxgc/download'}}, 'sample_id': 'SRX10356703'}</code> |
| <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/kfjX6LkLewqssdN/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/kxd2NqJjnMSArf6/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/zqPbdqn5nCgo7rb/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/b7sANypKxGyYQ2J/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/TwFF6TWRp9sMxgc/download'}}, 'sample_id': 'SRX8241199'}</code> | <code>Human fibroblasts at the D7 time point during reprogramming into induced pluripotent stem cells (iPSCs) or hiPSCs.</code> | <code>CD14+ monocytes from a healthy control participant (ID 2015).</code> | <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/kfjX6LkLewqssdN/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/kxd2NqJjnMSArf6/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/zqPbdqn5nCgo7rb/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/b7sANypKxGyYQ2J/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/TwFF6TWRp9sMxgc/download'}}, 'sample_id': 'SRX14140416'}</code> |
| <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/kfjX6LkLewqssdN/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/kxd2NqJjnMSArf6/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/zqPbdqn5nCgo7rb/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/b7sANypKxGyYQ2J/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/TwFF6TWRp9sMxgc/download'}}, 'sample_id': 'SRX17834359'}</code> | <code>Whole blood sample from subject HRV15-017, collected at day 1 in the afternoon.</code> | <code>59 year old male bronchial epithelial cells with 39 pack years of smoking history and imaging cluster 1.</code> | <code>{'file_record': {'dataset_path': 'https://nxc-fredato.imbi.uni-freiburg.de/s/kfjX6LkLewqssdN/download', 'embeddings': {'X_geneformer': 'https://nxc-fredato.imbi.uni-freiburg.de/s/kxd2NqJjnMSArf6/download', 'X_hvg': 'https://nxc-fredato.imbi.uni-freiburg.de/s/zqPbdqn5nCgo7rb/download', 'X_pca': 'https://nxc-fredato.imbi.uni-freiburg.de/s/b7sANypKxGyYQ2J/download', 'X_scvi': 'https://nxc-fredato.imbi.uni-freiburg.de/s/TwFF6TWRp9sMxgc/download'}}, 'sample_id': 'SRX5429074'}</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
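For reference, here is a minimal sketch (not taken from the actual training script) of instantiating this loss with the parameters above via the standard sentence-transformers API; the model name is a placeholder:

```python
from sentence_transformers import SentenceTransformer, losses, util

# Placeholder model purely for illustration
model = SentenceTransformer("all-MiniLM-L6-v2")

# Mirrors the parameters listed above: scale=20.0, similarity_fct=cos_sim
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```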
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 2e-05
- `num_train_epochs`: 16
- `warmup_ratio`: 0.1
- `fp16`: True
- `dataloader_num_workers`: 1
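A minimal sketch (assuming the sentence-transformers v3 training API noted in the framework versions below) of expressing these non-default values; `output_dir` is a placeholder:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=2e-5,
    num_train_epochs=16,
    warmup_ratio=0.1,
    fp16=True,
    dataloader_num_workers=1,
)
```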
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 16
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 1
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | cellxgene pseudo bulk 35k multiplets natural language annotation loss | geo 70k multiplets natural language annotation loss | cosine_accuracy |
|:-------:|:-----:|:-------------:|:---------------------------------------------------------------------:|:---------------------------------------------------:|:---------------:|
| 0.1351 | 100 | - | 16.5681 | 15.3425 | 0.5510 |
| 0.2703 | 200 | 15.2121 | 16.3962 | 14.5975 | 0.6669 |
| 0.4054 | 300 | - | 15.1565 | 13.5315 | 0.7754 |
| 0.5405 | 400 | 13.4551 | 12.2976 | 11.6012 | 0.8340 |
| 0.6757 | 500 | - | 10.1066 | 8.5850 | 0.8704 |
| 0.8108 | 600 | 8.9059 | 7.8946 | 6.7269 | 0.8931 |
| 0.9459 | 700 | - | 6.1265 | 5.8313 | 0.9036 |
| 1.0811 | 800 | 5.8557 | 5.3230 | 5.3629 | 0.9107 |
| 1.2162 | 900 | - | 4.7961 | 5.0623 | 0.9209 |
| 1.3514 | 1000 | 4.8756 | 4.6028 | 4.7280 | 0.9279 |
| 1.4865 | 1100 | - | 4.6467 | 4.4183 | 0.9373 |
| 1.6216 | 1200 | 4.3719 | 4.7835 | 4.1918 | 0.9440 |
| 1.7568 | 1300 | - | 4.4550 | 4.0311 | 0.9476 |
| 1.8919 | 1400 | 4.0077 | 4.5942 | 3.8520 | 0.9497 |
| 2.0270 | 1500 | - | 4.0982 | 3.8556 | 0.9517 |
| 2.1622 | 1600 | 3.7523 | 4.3389 | 3.7847 | 0.9554 |
| 2.2973 | 1700 | - | 4.1296 | 3.8354 | 0.9521 |
| 2.4324 | 1800 | 3.7573 | 4.3382 | 3.7801 | 0.9553 |
| 2.5676 | 1900 | - | 4.1184 | 3.8465 | 0.9521 |
| 2.7027 | 2000 | 3.7301 | 4.2711 | 3.7977 | 0.9540 |
| 2.8378 | 2100 | - | 4.0863 | 3.8529 | 0.9516 |
| 2.9730 | 2200 | 3.7111 | 4.1145 | 3.8415 | 0.9517 |
| 3.1081 | 2300 | - | 4.2684 | 3.8076 | 0.9536 |
| 3.2432 | 2400 | 3.7155 | 3.8739 | 3.9858 | 0.9476 |
| 3.3784 | 2500 | - | 4.5718 | 3.7554 | 0.9556 |
| 3.5135 | 2600 | 3.7532 | 4.7481 | 3.7515 | 0.9573 |
| 3.6486 | 2700 | - | 4.3598 | 3.7741 | 0.9544 |
| 3.7838 | 2800 | 3.7255 | 4.2423 | 3.8044 | 0.9544 |
| 3.9189 | 2900 | - | 4.1150 | 3.8462 | 0.9517 |
| 4.0541 | 3000 | 3.7 | 4.2966 | 3.7923 | 0.9553 |
| 4.1892 | 3100 | - | 4.1954 | 3.8200 | 0.9524 |
| 4.3243 | 3200 | 3.7556 | 4.3824 | 3.7742 | 0.9556 |
| 4.4595 | 3300 | - | 4.5560 | 3.7541 | 0.9560 |
| 4.5946 | 3400 | 3.7283 | 3.9065 | 3.9552 | 0.9487 |
| 4.7297 | 3500 | - | 3.8415 | 4.0087 | 0.9481 |
| 4.8649 | 3600 | 3.741 | 4.4399 | 3.7655 | 0.9557 |
| 5.0 | 3700 | - | 4.5457 | 3.7542 | 0.9561 |
| 5.1351 | 3800 | 3.6978 | 3.9224 | 3.9533 | 0.9487 |
| 5.2703 | 3900 | - | 4.3493 | 3.7846 | 0.9554 |
| 5.4054 | 4000 | 3.7399 | 4.3480 | 3.7832 | 0.9549 |
| 5.5405 | 4100 | - | 3.9356 | 3.9337 | 0.9500 |
| 5.6757 | 4200 | 3.7406 | 4.3089 | 3.7905 | 0.9546 |
| 5.8108 | 4300 | - | 4.4414 | 3.7711 | 0.9550 |
| 5.9459 | 4400 | 3.7161 | 4.0804 | 3.8547 | 0.9521 |
| 6.0811 | 4500 | - | 3.9827 | 3.9103 | 0.9509 |
| 6.2162 | 4600 | 3.7038 | 3.8720 | 3.9825 | 0.9486 |
| 6.3514 | 4700 | - | 3.9803 | 3.9070 | 0.9503 |
| 6.4865 | 4800 | 3.7522 | 4.2410 | 3.8043 | 0.9551 |
| 6.6216 | 4900 | - | 4.5504 | 3.7628 | 0.9557 |
| 6.7568 | 5000 | 3.7252 | 4.3341 | 3.7837 | 0.9550 |
| 6.8919 | 5100 | - | 4.5281 | 3.7531 | 0.9560 |
| 7.0270 | 5200 | 3.6791 | 4.0975 | 3.8550 | 0.9517 |
| 7.1622 | 5300 | - | 4.3336 | 3.7814 | 0.9553 |
| 7.2973 | 5400 | 3.7546 | 4.1190 | 3.8355 | 0.9523 |
| 7.4324 | 5500 | - | 4.3390 | 3.7763 | 0.9554 |
| 7.5676 | 5600 | 3.725 | 4.1069 | 3.8476 | 0.9516 |
| 7.7027 | 5700 | - | 4.2602 | 3.7962 | 0.9546 |
| 7.8378 | 5800 | 3.7309 | 4.0831 | 3.8483 | 0.9517 |
| 7.9730 | 5900 | - | 4.1081 | 3.8386 | 0.9519 |
| 8.1081 | 6000 | 3.7056 | 4.2598 | 3.8045 | 0.9534 |
| 8.2432 | 6100 | - | 3.8669 | 3.9848 | 0.9479 |
| 8.3784 | 6200 | 3.7322 | 4.5549 | 3.7529 | 0.9559 |
| 8.5135 | 6300 | - | 4.7403 | 3.7472 | 0.9576 |
| 8.6486 | 6400 | 3.7317 | 4.3473 | 3.7718 | 0.9547 |
| 8.7838 | 6500 | - | 4.2320 | 3.7998 | 0.9546 |
| 8.9189 | 6600 | 3.7208 | 4.1063 | 3.8423 | 0.9519 |
| 9.0541 | 6700 | - | 4.2851 | 3.7893 | 0.9547 |
| 9.1892 | 6800 | 3.6945 | 4.1825 | 3.8167 | 0.9526 |
| 9.3243 | 6900 | - | 4.3738 | 3.7702 | 0.9560 |
| 9.4595 | 7000 | 3.7437 | 4.5468 | 3.7502 | 0.9560 |
| 9.5946 | 7100 | - | 3.8960 | 3.9519 | 0.9489 |
| 9.7297 | 7200 | 3.7285 | 3.8328 | 4.0028 | 0.9474 |
| 9.8649 | 7300 | - | 4.4250 | 3.7606 | 0.9557 |
| 10.0 | 7400 | 3.6724 | 4.5225 | 3.7482 | 0.9563 |
| 10.1351 | 7500 | - | 3.9094 | 3.9493 | 0.9486 |
| 10.2703 | 7600 | 3.7461 | 4.3360 | 3.7803 | 0.9550 |
| 10.4054 | 7700 | - | 4.3358 | 3.7772 | 0.9553 |
| 10.5405 | 7800 | 3.7407 | 3.9274 | 3.9251 | 0.9499 |
| 10.6757 | 7900 | - | 4.2977 | 3.7844 | 0.9543 |
| 10.8108 | 8000 | 3.728 | 4.4351 | 3.7666 | 0.9551 |
| 10.9459 | 8100 | - | 4.0689 | 3.8480 | 0.9521 |
| 11.0811 | 8200 | 3.6982 | 3.9707 | 3.9039 | 0.9509 |
| 11.2162 | 8300 | - | 3.8588 | 3.9769 | 0.9481 |
| 11.3514 | 8400 | 3.7318 | 3.9676 | 3.9023 | 0.9503 |
| 11.4865 | 8500 | - | 4.2258 | 3.7993 | 0.9549 |
| 11.6216 | 8600 | 3.7316 | 4.5318 | 3.7566 | 0.9559 |
| 11.7568 | 8700 | - | 4.3155 | 3.7782 | 0.9544 |
| 11.8919 | 8800 | 3.7158 | 4.5147 | 3.7473 | 0.9559 |
| 12.0270 | 8900 | - | 4.0836 | 3.8483 | 0.9517 |
| 12.1622 | 9000 | 3.6941 | 4.3180 | 3.7766 | 0.9546 |
| 12.2973 | 9100 | - | 4.1086 | 3.8267 | 0.9530 |
| 12.4324 | 9200 | 3.7351 | 4.3192 | 3.7696 | 0.9550 |
| 12.5676 | 9300 | - | 4.0972 | 3.8375 | 0.9516 |
| 12.7027 | 9400 | 3.7224 | 4.2462 | 3.7891 | 0.9543 |
| 12.8378 | 9500 | - | 4.0651 | 3.8419 | 0.9514 |
| 12.9730 | 9600 | 3.7019 | 4.0886 | 3.8325 | 0.9514 |
| 13.1081 | 9700 | - | 4.2453 | 3.7956 | 0.9533 |
| 13.2432 | 9800 | 3.6979 | 3.8549 | 3.9746 | 0.9480 |
| 13.3784 | 9900 | - | 4.5402 | 3.7440 | 0.9556 |
| 13.5135 | 10000 | 3.7436 | 4.7189 | 3.7372 | 0.9571 |
| 13.6486 | 10100 | - | 4.3368 | 3.7617 | 0.9546 |
| 13.7838 | 10200 | 3.7129 | 4.2180 | 3.7909 | 0.9540 |
| 13.9189 | 10300 | - | 4.0913 | 3.8344 | 0.9509 |
| 14.0541 | 10400 | 3.6821 | 4.2673 | 3.7803 | 0.9543 |
| 14.1892 | 10500 | - | 4.1662 | 3.8081 | 0.9524 |
| 14.3243 | 10600 | 3.7336 | 4.3547 | 3.7615 | 0.9554 |
| 14.4595 | 10700 | - | 4.5219 | 3.7425 | 0.9560 |
| 14.5946 | 10800 | 3.7057 | 3.8819 | 3.9436 | 0.9484 |
| 14.7297 | 10900 | - | 3.8188 | 3.9952 | 0.9479 |
| 14.8649 | 11000 | 3.7205 | 4.4094 | 3.7525 | 0.9547 |
| 15.0 | 11100 | - | 4.5114 | 3.7421 | 0.9556 |
| 15.1351 | 11200 | 3.6753 | 3.8929 | 3.9439 | 0.9483 |
| 15.2703 | 11300 | - | 4.3207 | 3.7717 | 0.9543 |
| 15.4054 | 11400 | 3.7216 | 4.3187 | 3.7698 | 0.9551 |
| 15.5405 | 11500 | - | 3.9106 | 3.9202 | 0.9490 |
</details>
### Framework Versions
- Python: 3.10.10
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.43.4
- PyTorch: 2.6.0+cu124
- Accelerate: 0.33.0
- Datasets: 2.14.4
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Luongdzung/vinallama-2.7B-chat-sft-mat-lora-epochs4-lr5e-05-ALL-WEIGHT | Luongdzung | "2025-02-27T02:55:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-27T02:51:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
weiiv/term_phi3_json | weiiv | "2024-05-29T18:42:03Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-29T10:04:59Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
caleb-edukita/roberta-base_cefr | caleb-edukita | "2023-09-19T08:38:16Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-19T08:01:09Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-base_cefr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_cefr
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
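A configuration sketch (assuming the standard 🤗 Trainer API; `output_dir` is a placeholder) mirroring the hyperparameters above:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="roberta-base_cefr",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
)
```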
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8746 | 1.0 | 1001 | 0.9076 |
| 0.7398 | 2.0 | 2002 | 0.7792 |
| 0.5363 | 3.0 | 3003 | 1.0233 |
| 0.2225 | 4.0 | 4004 | 1.4230 |
| 0.2635 | 5.0 | 5005 | 1.6877 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
arliman/Sinfuldeeds.VIRAL.Leaked.VIDEO.TELEGRAM.LINKS.XXX | arliman | "2025-02-13T03:00:56Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-13T02:59:32Z" | <a rel="nofollow" href="https://213.232.235.80/live/video.php?q=leak-video">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️​</a></p><p></p>
<p><a rel="nofollow" href="https://213.232.235.80/live/video.php?q=leak-video">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤 Download❤️❤️⬇️⬇️​</a></p>
<p><a href="https://213.232.235.80/live/video.php?q=leak-video" title="WATCH NOW" rel="nofollow"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="WATCH NOW" title="WATCH NOW" width="720" height="480" border="leak"></a></p>
|
mradermacher/Llama-3.2-3B-Apex-i1-GGUF | mradermacher | "2024-11-01T01:30:08Z" | 30 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Llama-3.2-3B-Apex",
"base_model:quantized:bunnycore/Llama-3.2-3B-Apex",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-11-01T00:57:57Z" | ---
base_model: bunnycore/Llama-3.2-3B-Apex
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bunnycore/Llama-3.2-3B-Apex
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
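As a quick start, here is a minimal loading sketch assuming the `llama-cpp-python` bindings (these are not part of this card's tooling); the file name is one of the single-file quants from the table below:

```python
from llama_cpp import Llama

# Load a single-file quant downloaded from this repository
llm = Llama(
    model_path="Llama-3.2-3B-Apex.i1-Q4_K_M.gguf",
    n_ctx=4096,  # illustrative context size
)

result = llm("Write a haiku about llamas.", max_tokens=64)
print(result["choices"][0]["text"])
```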
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ1_S.gguf) | i1-IQ1_S | 1.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ2_S.gguf) | i1-IQ2_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ2_M.gguf) | i1-IQ2_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q2_K.gguf) | i1-Q2_K | 1.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ3_M.gguf) | i1-IQ3_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 2.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 2.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 2.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_0.gguf) | i1-Q4_0 | 2.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q6_K.gguf) | i1-Q6_K | 3.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mlx-community/aya-vision-32b-3bit | mlx-community | "2025-03-05T13:42:50Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"aya_vision",
"image-text-to-text",
"mlx",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"region:us"
] | image-text-to-text | "2025-03-05T02:18:43Z" | ---
inference: false
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
extra_gated_prompt: By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and
acknowledge that the information you provide will be collected, used, and shared
in accordance with Cohere’s [Privacy Policy]( https://cohere.com/privacy). You’ll
receive email updates about C4AI and Cohere research, events, products and services.
You can unsubscribe at any time.
extra_gated_fields:
Name: text
Affiliation: text
Country: country
I agree to use this model for non-commercial use ONLY: checkbox
pipeline_tag: image-text-to-text
tags:
- mlx
---
# mlx-community/aya-vision-32b-3bit
This model was converted to MLX format from [`CohereForAI/aya-vision-32b`](https://huggingface.co/CohereForAI/aya-vision-32b) using mlx-vlm version **0.1.15**.
Refer to the [original model card](https://huggingface.co/CohereForAI/aya-vision-32b) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/aya-vision-32b-3bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
Romain-XV/0c9317df-453a-4353-b0c5-9283940d2884 | Romain-XV | "2025-04-14T07:54:49Z" | 0 | 0 | null | [
"safetensors",
"gpt_neo",
"region:us"
] | null | "2025-04-14T07:04:06Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
yasin9999/gliner_finetuned_v2 | yasin9999 | "2025-04-12T17:49:52Z" | 0 | 0 | gliner | [
"gliner",
"pytorch",
"named-entity-recognition",
"ner",
"en",
"dataset:custom",
"license:apache-2.0",
"region:us"
] | null | "2025-04-12T17:47:46Z" | ---
language: en
license: apache-2.0
tags:
- gliner
- named-entity-recognition
- ner
datasets:
- custom
---
# Model Card for gliner_finetuned_v2
This is a GLiNER model fine-tuned for named entity recognition.
## Model Details
- **Model Type:** gliner
- **Max Sequence Length:** 640
- **Base Model:** unknown
## Intended Use
This model is designed to recognize and extract named entities from text.
## Training Data
This model was trained on a custom dataset.
## Performance
Performance metrics and evaluation results should be included here.
## Limitations
The model may not perform well on text significantly different from its training data.
## Usage
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("yasin9999/gliner_finetuned_v2")
entities = model.predict_entities(
"Your text here",
["PERSON", "DATE_TIME", "ADDRESS", "PHONE_NUMBER", "EMAIL"],
threshold=0.5
)
```
|
silviasapora/gemma-7b-sft-silvia_simpo-basic-5e-7-005-v143 | silviasapora | "2025-03-31T19:10:24Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:argilla/dpo-mix-7k",
"arxiv:2403.07691",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-31T17:52:43Z" | ---
datasets:
- argilla/dpo-mix-7k
library_name: transformers
model_name: /home/silvias/docker/alignment-handbook/data/gemma-7b-sft-basic-5e-5-00-v130-full
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for /home/silvias/docker/alignment-handbook/data/gemma-7b-sft-basic-5e-5-00-v130-full
This model is a fine-tuned version of an unspecified base checkpoint on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-sft-silvia_simpo-basic-5e-7-005-v143", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/hmrp5nih)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
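For orientation, here is a minimal ORPO training sketch with TRL (assuming the TRL 0.15 API listed below); this is not the authors' actual script, and the base checkpoint, beta, and dataset preprocessing are placeholders:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "google/gemma-7b"  # assumed base; the card's SFT checkpoint is a local path
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# dpo-mix-7k provides chosen/rejected pairs; extra formatting into
# prompt/chosen/rejected strings may be needed depending on the TRL version
dataset = load_dataset("argilla/dpo-mix-7k", split="train")

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(output_dir="gemma-7b-orpo", beta=0.1),  # beta is illustrative
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```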
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
acsankar/Insurance_LORA_Adapter_v0.2 | acsankar | "2024-05-25T02:46:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-25T01:23:50Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Alaa18/mistral_7b_chatbot | Alaa18 | "2024-04-24T19:35:04Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-04-03T19:14:23Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rajiv-data-chef/elsa_llama3_smaug_8b_qlora_finetuned | rajiv-data-chef | "2024-05-25T20:19:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-25T20:18:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
youngermax/text-tagger-v1 | youngermax | "2024-06-07T00:26:24Z" | 248 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"dataset:youngermax/text-tagging",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-07T00:21:30Z" | ---
library_name: transformers
datasets:
- youngermax/text-tagging
---
## Model Details
### Model Description
This model identifies multiple topics related to a given text and expresses them in natural language. It was fine-tuned on youngermax/text-tagging for 3.5 epochs over ~1.3 hours on a free Kaggle P100 GPU.
- **Developed by:** Lincoln Maxwell
- **Model type:** Generative Pretrained Transformer
- **Language(s) (NLP):** English
- **Finetuned from model:** DistilGPT2
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned tagger and its tokenizer (a CUDA GPU is assumed)
tokenizer = AutoTokenizer.from_pretrained("youngermax/text-tagger-v1")
model = AutoModelForCausalLM.from_pretrained("youngermax/text-tagger-v1").to('cuda')

prompt = "Your text here"

# Append the topic marker so the model continues with a list of tags
input_ids = tokenizer.encode(prompt + '<|topic|>', return_tensors='pt').to('cuda')

# Generate text (do_sample=True is needed for top_k/top_p/temperature to apply)
output = model.generate(
    input_ids,
    max_length=1024,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=100,
    top_p=0.5,
    temperature=1.0
)

# Decode the output, keeping the special '<|topic|>' markers
text = tokenizer.decode(output[0], skip_special_tokens=False)

# Drop the prompt prefix and anything after the end-of-text token
text = text[len(prompt):text.find('<|endoftext|>')]

# Each '<|topic|>' marker introduces one tag; strip and deduplicate
topics = list(set(tag.strip() for tag in text.split('<|topic|>')[1:]))
``` |
stefan-it/hmbench-icdar-nl-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | "2023-10-26T11:13:49Z" | 3 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"nl",
"base_model:dbmdz/bert-base-historic-multilingual-64k-td-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-64k-td-cased",
"license:mit",
"region:us"
] | token-classification | "2023-10-25T02:46:35Z" | ---
language: nl
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-64k-td-cased
widget:
- text: Professoren der Geneeskun dige Faculteit te Groningen alsook van de HH , Doctoren
en Chirurgijns van Groningen , Friesland , Noordholland , Overijssel , Gelderland
, Drenthe , in welke Provinciën dit Elixir als Medicament voor Mond en Tanden
reeds jaren bakend is .
---
# Fine-tuned Flair Model on Dutch ICDAR-Europeana NER Dataset
This Flair model was fine-tuned on the
[Dutch ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar)
NER Dataset using hmBERT 64k as backbone LM.
The ICDAR-Europeana NER Dataset is a preprocessed variant of the
[Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French.
The following NEs were annotated: `PER`, `LOC` and `ORG`.
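A minimal Flair usage sketch (assuming the checkpoint loads directly from the Hub via `SequenceTagger.load`):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the tagger from the Hugging Face Hub
tagger = SequenceTagger.load("stefan-it/hmbench-icdar-nl-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5")

sentence = Sentence("Professoren der Geneeskundige Faculteit te Groningen")
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
```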
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs8-e10-lr3e-05` | [0.8405][1] | [0.8318][2] | [0.8437][3] | [0.8346][4] | [0.8444][5] | 0.839 ± 0.0056 |
| `bs4-e10-lr3e-05` | [0.8467][6] | [0.8303][7] | [0.8238][8] | [0.8386][9] | [**0.8274**][10] | 0.8334 ± 0.0092 |
| `bs8-e10-lr5e-05` | [0.8284][11] | [0.8345][12] | [0.831][13] | [0.8229][14] | [0.8368][15] | 0.8307 ± 0.0054 |
| `bs4-e10-lr5e-05` | [0.8158][16] | [0.8142][17] | [0.8164][18] | [0.8249][19] | [0.8228][20] | 0.8188 ± 0.0047 |
[1]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
sdbhud1b/Hozu | sdbhud1b | "2024-03-24T14:13:45Z" | 0 | 0 | fasttext | [
"fasttext",
"text-generation-inference",
"art",
"zh",
"dataset:sdbhud1b/Hozu",
"doi:10.57967/hf/1952",
"license:apache-2.0",
"region:us"
] | null | "2024-03-23T13:07:13Z" | ---
license: apache-2.0
language:
- zh
datasets:
- sdbhud1b/Hozu
metrics:
- accuracy
tags:
- text-generation-inference
- art
library_name: fasttext
--- |
LGLT/distilbert-base-uncased-finetuned-clinc | LGLT | "2024-03-20T04:40:08Z" | 108 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-20T04:34:55Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2803
- Accuracy: 0.9503
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
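For reference, the list above maps onto `TrainingArguments` roughly as follows (a reconstruction, not the exact training script):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-clinc",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```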
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6384 | 1.0 | 3813 | 0.4574 | 0.9190 |
| 0.1053 | 2.0 | 7626 | 0.2770 | 0.9432 |
| 0.059 | 3.0 | 11439 | 0.2697 | 0.9474 |
| 0.0137 | 4.0 | 15252 | 0.2781 | 0.95 |
| 0.0142 | 5.0 | 19065 | 0.2803 | 0.9503 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
williamking1221/q-FrozenLake-v1-4x4-noSlippery | williamking1221 | "2023-04-18T18:29:42Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-04-18T18:29:38Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="williamking1221/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
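`load_from_hub` above is the small helper used throughout the course; a minimal sketch of it (assuming the Q-table was pickled when pushed to the Hub):

```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    # Download the pickled model dict from the Hub and unpickle it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```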
|
mradermacher/papaya-1B-GGUF | mradermacher | "2024-10-24T15:29:41Z" | 8 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jusKnows/papaya-1B",
"base_model:quantized:jusKnows/papaya-1B",
"endpoints_compatible",
"region:us"
] | null | "2024-10-24T15:26:27Z" | ---
base_model: jusKnows/papaya-1B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jusKnows/papaya-1B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
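For a quick local check, a downloaded quant can also be loaded with `llama-cpp-python` (a sketch, assuming the package is installed and the file is in the working directory):

```python
from llama_cpp import Llama

# Point model_path at whichever quant you downloaded, e.g. the recommended Q4_K_M
llm = Llama(model_path="papaya-1B.Q4_K_M.gguf", n_ctx=2048)

out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```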
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/papaya-1B-GGUF/resolve/main/papaya-1B.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/papaya-1B-GGUF/resolve/main/papaya-1B.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/papaya-1B-GGUF/resolve/main/papaya-1B.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/papaya-1B-GGUF/resolve/main/papaya-1B.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/papaya-1B-GGUF/resolve/main/papaya-1B.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/papaya-1B-GGUF/resolve/main/papaya-1B.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/papaya-1B-GGUF/resolve/main/papaya-1B.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/papaya-1B-GGUF/resolve/main/papaya-1B.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/papaya-1B-GGUF/resolve/main/papaya-1B.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/papaya-1B-GGUF/resolve/main/papaya-1B.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/papaya-1B-GGUF/resolve/main/papaya-1B.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/papaya-1B-GGUF/resolve/main/papaya-1B.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Jonjew/NataliaDyer | Jonjew | "2025-03-05T22:24:59Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | "2025-03-05T22:24:53Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
<lora:Natalia_Dyer_Flux:1> beautiful detailed photograph, short curly hair,
wearing a dress, standing in cafe looking at the viewer, smiling
output:
url: images/00025-2550154496.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: unknown
---
# Natalia Dyer (Flux)
<Gallery />
## Model description
FROM https://civitai.com/models/769836/natalia-dyer-flux?modelVersionId=861040
Natalia Dyer - Trained for Flux
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/NataliaDyer/tree/main) them in the Files & versions tab.
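The weights can also be fetched programmatically (a sketch; `lora.safetensors` is a placeholder filename, use the actual one from the repo's file list):

```python
from huggingface_hub import hf_hub_download

# NOTE: the filename below is a placeholder assumption
path = hf_hub_download(repo_id="Jonjew/NataliaDyer", filename="lora.safetensors")
```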
|
paicup09/a2c-PandaReachDense-v2 | paicup09 | "2023-03-05T17:31:05Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-29T23:40:05Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.37 +/- 0.45
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption following the usual SB3 naming convention; check the repo's files if it differs):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename below is assumed; SB3 Hub repos usually ship "<algo>-<env>.zip"
checkpoint = load_from_hub(repo_id="paicup09/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
RichardErkhov/bunnycore_-_CyberCore-Qwen-2.1-7B-4bits | RichardErkhov | "2025-04-06T09:43:33Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2306.01708",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-04-06T09:40:16Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CyberCore-Qwen-2.1-7B - bnb 4bits
- Model creator: https://huggingface.co/bunnycore/
- Original model: https://huggingface.co/bunnycore/CyberCore-Qwen-2.1-7B/
Original model description:
---
base_model:
- rombodawg/Rombos-LLM-V2.5-Qwen-7b
- fblgit/cybertron-v4-qw7B-UNAMGS
- bunnycore/Qwen-2.1-7b-Persona-lora_model
- fblgit/cybertron-v4-qw7B-MGS
- bunnycore/Qwen-2.1-7b-Persona-lora_model
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [rombodawg/Rombos-LLM-V2.5-Qwen-7b](https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-7b) as a base.
### Models Merged
The following models were included in the merge:
* [fblgit/cybertron-v4-qw7B-UNAMGS](https://huggingface.co/fblgit/cybertron-v4-qw7B-UNAMGS) + [bunnycore/Qwen-2.1-7b-Persona-lora_model](https://huggingface.co/bunnycore/Qwen-2.1-7b-Persona-lora_model)
* [fblgit/cybertron-v4-qw7B-MGS](https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS) + [bunnycore/Qwen-2.1-7b-Persona-lora_model](https://huggingface.co/bunnycore/Qwen-2.1-7b-Persona-lora_model)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: fblgit/cybertron-v4-qw7B-MGS+bunnycore/Qwen-2.1-7b-Persona-lora_model
parameters:
density: 0.5
weight: 0.5
- model: fblgit/cybertron-v4-qw7B-UNAMGS+bunnycore/Qwen-2.1-7b-Persona-lora_model
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: rombodawg/Rombos-LLM-V2.5-Qwen-7b
parameters:
normalize: false
int8_mask: true
dtype: float16
```
|
VERSIL91/a0f8944d-868c-4841-8f21-f5f704129914 | VERSIL91 | "2025-01-12T01:32:48Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"base_model:adapter:EleutherAI/pythia-1b",
"license:apache-2.0",
"region:us"
] | null | "2025-01-12T01:25:07Z" | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a0f8944d-868c-4841-8f21-f5f704129914
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: EleutherAI/pythia-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ae3eb22d60dc8854_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ae3eb22d60dc8854_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/a0f8944d-868c-4841-8f21-f5f704129914
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/ae3eb22d60dc8854_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a0f8944d-868c-4841-8f21-f5f704129914
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a0f8944d-868c-4841-8f21-f5f704129914
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a0f8944d-868c-4841-8f21-f5f704129914
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 42.0547 | 0.0001 | 1 | 2.6674 |
| 41.3906 | 0.0007 | 5 | 2.6328 |
| 39.5156 | 0.0013 | 10 | 2.4982 |
| 39.2188 | 0.0020 | 15 | 2.3679 |
| 38.4062 | 0.0027 | 20 | 2.3429 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tensorblock/Llama-3.2-8B-Instruct-GGUF | tensorblock | "2024-11-27T21:05:58Z" | 3,246 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:voidful/Llama-3.2-8B-Instruct",
"base_model:quantized:voidful/Llama-3.2-8B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-27T19:06:22Z" | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: voidful/Llama-3.2-8B-Instruct
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## voidful/Llama-3.2-8B-Instruct - GGUF
This repo contains GGUF format model files for [voidful/Llama-3.2-8B-Instruct](https://huggingface.co/voidful/Llama-3.2-8B-Instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 26 Jul 2024
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
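In Python, the placeholders can be filled with plain string formatting before the prompt is sent to a llama.cpp binding or server (a minimal sketch; whitespace mirrors the template above):

```python
TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "Cutting Knowledge Date: December 2023\n"
    "Today Date: 26 Jul 2024\n\n"
    "{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

filled = TEMPLATE.format(
    system_prompt="You are a helpful assistant.",
    prompt="What is a GGUF file?",
)
```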
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3.2-8B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Llama-3.2-8B-Instruct-GGUF/blob/main/Llama-3.2-8B-Instruct-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3.2-8B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Llama-3.2-8B-Instruct-GGUF/blob/main/Llama-3.2-8B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3.665 GB | very small, high quality loss |
| [Llama-3.2-8B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Llama-3.2-8B-Instruct-GGUF/blob/main/Llama-3.2-8B-Instruct-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [Llama-3.2-8B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Llama-3.2-8B-Instruct-GGUF/blob/main/Llama-3.2-8B-Instruct-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [Llama-3.2-8B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Llama-3.2-8B-Instruct-GGUF/blob/main/Llama-3.2-8B-Instruct-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3.2-8B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Llama-3.2-8B-Instruct-GGUF/blob/main/Llama-3.2-8B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [Llama-3.2-8B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Llama-3.2-8B-Instruct-GGUF/blob/main/Llama-3.2-8B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [Llama-3.2-8B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Llama-3.2-8B-Instruct-GGUF/blob/main/Llama-3.2-8B-Instruct-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3.2-8B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Llama-3.2-8B-Instruct-GGUF/blob/main/Llama-3.2-8B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [Llama-3.2-8B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Llama-3.2-8B-Instruct-GGUF/blob/main/Llama-3.2-8B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [Llama-3.2-8B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Llama-3.2-8B-Instruct-GGUF/blob/main/Llama-3.2-8B-Instruct-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [Llama-3.2-8B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/Llama-3.2-8B-Instruct-GGUF/blob/main/Llama-3.2-8B-Instruct-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Llama-3.2-8B-Instruct-GGUF --include "Llama-3.2-8B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Llama-3.2-8B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
jmalejandrob79/prvlnthr07 | jmalejandrob79 | "2025-03-18T20:38:16Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-18T19:03:22Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: prvlnthr07
---
# Prvlnthr07
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `prvlnthr07` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jmalejandrob79/prvlnthr07', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
NewstaR/OpenStar-1b | NewstaR | "2023-09-10T17:01:06Z" | 131 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:NewstaR/AverageData",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-09-10T16:51:45Z" | ---
license: apache-2.0
datasets:
- NewstaR/AverageData
language:
- en
metrics:
- accuracy
- bertscore
- character
--- |
Beehzod/smart-finetuned-ner | Beehzod | "2024-11-05T06:04:23Z" | 118 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"named-entity-recognition",
"en",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-05T05:53:28Z" | ---
license: apache-2.0
language: en
tags:
- named-entity-recognition
- distilbert
- transformers
datasets:
- conll2003
metrics:
- f1
- accuracy
- precision
- recall
model_type: distilbert
---
# Fine-tuned DistilBERT for Named Entity Recognition (NER)
## Model Description
This model is a fine-tuned version of DistilBERT for Named Entity Recognition (NER). It was trained on the CoNLL-2003 dataset and identifies entities such as **persons**, **organizations**, **locations**, and **miscellaneous entities** in English text.
- **Model Architecture**: DistilBERT (pre-trained transformer-based model)
- **Task**: Named Entity Recognition (NER)
- **Entity Types**: PER (Person), ORG (Organization), LOC (Location), MISC (Miscellaneous)
## Training Details
- **Dataset**: CoNLL-2003 (standard dataset for NER tasks)
- **Training Data Size**: 14,000 samples for training, 3,250 samples for evaluation
- **Epochs**: 3
- **Batch Size**: 16 (training), 64 (evaluation)
- **Learning Rate**: 2e-5
- **Optimizer**: AdamW with weight decay
## Evaluation Metrics
The model was evaluated using the following metrics:
- **F1 Score**: 0.928661
- **Accuracy**: 0.983252
- **Precision**: 0.918794
- **Recall**: 0.938741
## Example Usage
Here’s how to use this NER model with the Hugging Face Transformers library:
```python
from transformers import pipeline
# Load the model from the Hugging Face Hub
ner_pipeline = pipeline("ner", model="Beehzod/smart-finetuned-ner")
# Example predictions
text = "Hugging Face Inc. is based in New York City, and its CEO is Clement Delangue."
results = ner_pipeline(text)
for entity in results:
print(f"Entity: {entity['word']}, Label: {entity['entity']}, Score: {entity['score']:.4f}")
```
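To merge word pieces into whole entity spans, the pipeline's aggregation option can be used (a minimal sketch):

```python
from transformers import pipeline

ner = pipeline("ner", model="Beehzod/smart-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face Inc. is based in New York City."))
```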
|
varun-v-rao/bart-large-lora-2.36M-snli-model3 | varun-v-rao | "2024-06-20T02:52:22Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text-classification",
"generated_from_trainer",
"dataset:stanfordnlp/snli",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-20T00:34:27Z" | ---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
datasets:
- stanfordnlp/snli
metrics:
- accuracy
model-index:
- name: bart-large-lora-2.36M-snli-model3
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: snli
type: stanfordnlp/snli
metrics:
- name: Accuracy
type: accuracy
value: 0.9073359073359073
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-lora-2.36M-snli-model3
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the snli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2654
- Accuracy: 0.9073
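A minimal inference sketch for premise/hypothesis pairs (label names depend on the checkpoint config and may show up as generic `LABEL_*` ids):

```python
from transformers import pipeline

nli = pipeline("text-classification", model="varun-v-rao/bart-large-lora-2.36M-snli-model3")

print(nli({"text": "A man is playing a guitar on stage.",
           "text_pair": "A person is performing music."}))
```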
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3856 | 1.0 | 4292 | 0.2864 | 0.8980 |
| 0.3539 | 2.0 | 8584 | 0.2686 | 0.9053 |
| 0.3484 | 3.0 | 12876 | 0.2654 | 0.9073 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hammadsaleem/Reinforce-CartPole8 | hammadsaleem | "2024-03-30T14:29:17Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-03-30T14:29:07Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole8
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AlanHou/xlm-roberta-base-finetuned-panx-de-fr | AlanHou | "2024-03-12T06:08:20Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-03-12T05:56:43Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1639
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2836 | 1.0 | 715 | 0.1859 | 0.8212 |
| 0.1484 | 2.0 | 1430 | 0.1632 | 0.8487 |
| 0.0953 | 3.0 | 2145 | 0.1639 | 0.8591 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Ipan98/results | Ipan98 | "2023-12-08T19:42:43Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-06T20:42:15Z" | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Mehmetakif/Astra | Mehmetakif | "2023-07-16T16:30:37Z" | 0 | 0 | null | [
"music",
"tr",
"license:openrail",
"region:us"
] | null | "2023-07-16T15:45:50Z" | ---
license: openrail
language:
- tr
tags:
- music
--- |
jpnarayana/jay | jpnarayana | "2025-02-02T11:50:07Z" | 18 | 0 | transformers | [
"transformers",
"safetensors",
"codegen",
"text-generation",
"code",
"en",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-02T10:01:40Z" | ---
library_name: transformers
tags:
- code
license: mit
datasets:
- iamtarun/python_code_instructions_18k_alpaca
pipeline_tag: text-generation
language:
- en
---
# PyCodeGen 350M
<!-- Provide a quick summary of what the model is/does. -->
This model is a finetuned version of [codegen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono) by Salesforce, trained on a Python code [dataset](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca) using the QLoRA method.
## Pretrained model description
[codegen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono)
Codegen-350M-mono comes from the family of autoregressive models for program synthesis developed by Salesforce.
This model was first trained on ThePile, an 825.18 GiB English text corpus.
It was then adapted to code generation by further training on BigQuery, a large-scale dataset of source code in multiple programming languages.
Finally, the model was adapted to the Python language by training on the BigPython dataset.
## Training Data
[python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca)
The dataset contains problem descriptions and code written in Python.
This dataset is taken from sahil2801/code_instructions_120k, which adds a prompt column in alpaca style.
## Intended uses
The model can be used to generate Python code that solves a given task, optionally conditioned on input data.
## Example of usage
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('chincyk/PyCodeGen')
tokenizer = AutoTokenizer.from_pretrained('chincyk/PyCodeGen')
instruction = "Write a python class that represents a calculator, then use it to add two numbers."
input = "a = 5, b = 2"
prompt = f"""
### Instruction:
Use the Task below and the Input given to write the Response, which is a programming code that can solve the Task.
### Task:
{instruction}
### Input:
{input}
### Response:
"""
input_ids = tokenizer(prompt, truncation=True, return_tensors="pt")['input_ids']
output = model.generate(input_ids=input_ids, max_length=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Training parameters
BitsAndBytes:
- load_in_4bit: True,
- bnb_4bit_quant_type: nf4,
- bnb_4bit_use_double_quant: True,
- bnb_4bit_compute_dtype: torch.bfloat16
LoraConfig:
- r: 32,
- lora_alpha: 16,
- target_modules: all-linear,
- lora_dropout: 0.1,
- bias: none,
- task_type: CAUSAL_LM
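Together, these two blocks correspond roughly to the following configuration objects (a reconstruction with the `transformers`/`peft` APIs, not the exact training script):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    target_modules="all-linear",
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)
```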
Finetuning:
- num_epochs: 15
- train_batch_size: 4
- eval_batch_size: 8
- gradient_accumulation_steps: 8
- learning_rate: 3e-4
- weight_decay: 0.01
- lr_scheduler_name: cosine
- num_warmup_steps: 190 |
MinaMila/phi3_Adult_5ep_33 | MinaMila | "2025-03-28T10:55:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:finetune:unsloth/Phi-3.5-mini-instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-28T10:53:13Z" | ---
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3.5-mini-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rmrafailov/TLDR-Pythia1B-SFT | rmrafailov | "2024-04-14T18:24:51Z" | 305 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-14T18:22:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yemen2016/MeMo_BERT-SA_1 | yemen2016 | "2024-03-15T12:16:08Z" | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:MiMe-MeMo/MeMo-BERT-01",
"base_model:finetune:MiMe-MeMo/MeMo-BERT-01",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-15T10:45:18Z" | ---
base_model: MiMe-MeMo/MeMo-BERT-01
tags:
- generated_from_trainer
model-index:
- name: MeMo_BERT-SA_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MeMo_BERT-SA_1
This model is a fine-tuned version of [MiMe-MeMo/MeMo-BERT-01](https://huggingface.co/MiMe-MeMo/MeMo-BERT-01) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1432
- F1-score: 0.5216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 297 | 1.0214 | 0.4174 |
| 1.0021 | 2.0 | 594 | 1.0031 | 0.4947 |
| 1.0021 | 3.0 | 891 | 1.1432 | 0.5216 |
| 0.7732 | 4.0 | 1188 | 1.5043 | 0.4980 |
| 0.7732 | 5.0 | 1485 | 2.0586 | 0.4878 |
| 0.5308 | 6.0 | 1782 | 1.9069 | 0.4611 |
| 0.4125 | 7.0 | 2079 | 2.4514 | 0.4807 |
| 0.4125 | 8.0 | 2376 | 2.7144 | 0.4941 |
| 0.2539 | 9.0 | 2673 | 2.7355 | 0.5074 |
| 0.2539 | 10.0 | 2970 | 3.4404 | 0.5034 |
| 0.1538 | 11.0 | 3267 | 3.6571 | 0.4976 |
| 0.107 | 12.0 | 3564 | 3.8279 | 0.4992 |
| 0.107 | 13.0 | 3861 | 3.8366 | 0.4825 |
| 0.0402 | 14.0 | 4158 | 4.1133 | 0.4942 |
| 0.0402 | 15.0 | 4455 | 4.2386 | 0.4851 |
| 0.0434 | 16.0 | 4752 | 4.4226 | 0.4938 |
| 0.0127 | 17.0 | 5049 | 4.5016 | 0.5051 |
| 0.0127 | 18.0 | 5346 | 4.5485 | 0.5000 |
| 0.0064 | 19.0 | 5643 | 4.6323 | 0.4810 |
| 0.0064 | 20.0 | 5940 | 4.6424 | 0.4885 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ferrazzipietro/Llama-2-7b-chat-hf_en.layer1_4_16_32_0.02_8 | ferrazzipietro | "2024-06-19T20:04:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-19T20:04:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/prem-1B-GGUF | mradermacher | "2025-02-04T19:57:26Z" | 243 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:hkust-nlp/deita-10k-v0",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"dataset:HuggingFaceH4/capybara",
"dataset:meta-math/MetaMathQA",
"dataset:argilla/ultrafeedback-binarized-preferences-cleaned",
"dataset:Intel/orca_dpo_pairs",
"dataset:alexredna/oasst2_dpo_pairs",
"base_model:premai-io/prem-1B",
"base_model:quantized:premai-io/prem-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-02T12:30:13Z" | ---
base_model: premai-io/prem-1B
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- hkust-nlp/deita-10k-v0
- Open-Orca/SlimOrca-Dedup
- cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
- HuggingFaceH4/capybara
- meta-math/MetaMathQA
- argilla/ultrafeedback-binarized-preferences-cleaned
- Intel/orca_dpo_pairs
- alexredna/oasst2_dpo_pairs
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/premai-io/prem-1B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/prem-1B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
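For a quick local check of the chat-tuned quants, `llama-cpp-python` can drive them directly (a sketch; the file name matches the table below, and chat formatting falls back to the library's defaults if the GGUF carries no template metadata):

```python
from llama_cpp import Llama

llm = Llama(model_path="prem-1B.Q4_K_M.gguf", n_ctx=2048)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF quant is in one sentence."}]
)
print(resp["choices"][0]["message"]["content"])
```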
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/prem-1B-GGUF/resolve/main/prem-1B.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/prem-1B-GGUF/resolve/main/prem-1B.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/prem-1B-GGUF/resolve/main/prem-1B.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/prem-1B-GGUF/resolve/main/prem-1B.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/prem-1B-GGUF/resolve/main/prem-1B.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/prem-1B-GGUF/resolve/main/prem-1B.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/prem-1B-GGUF/resolve/main/prem-1B.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/prem-1B-GGUF/resolve/main/prem-1B.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/prem-1B-GGUF/resolve/main/prem-1B.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/prem-1B-GGUF/resolve/main/prem-1B.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/prem-1B-GGUF/resolve/main/prem-1B.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/prem-1B-GGUF/resolve/main/prem-1B.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
m-ric/Aria_hf | m-ric | "2024-10-26T00:02:51Z" | 14 | 0 | transformers | [
"transformers",
"safetensors",
"aria",
"image-text-to-text",
"conversational",
"custom_code",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-10-25T15:47:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
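While this section is otherwise empty, here is a generic loading sketch; the class choices and the `trust_remote_code` flag are assumptions based on the repo's `aria`/`custom_code` tags, not documented usage:
```python
# Hypothetical sketch: load an image-text-to-text model that ships custom code.
from transformers import AutoProcessor, AutoModelForCausalLM

model_id = "m-ric/Aria_hf"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
```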
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ZheNie/ppo-SnowballTarget | ZheNie | "2025-03-14T17:41:43Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2025-03-14T17:41:34Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ZheNie/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
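To inspect the policy locally (for example, before resuming training), here is a small sketch using `huggingface_hub` — an assumption, as any download method works:
```python
# Download the trained SnowballTarget policy (including the .onnx file) locally.
from huggingface_hub import snapshot_download

snapshot_download(repo_id="ZheNie/ppo-SnowballTarget", local_dir="./ppo-SnowballTarget")
```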
|
cleanrl/Tennis-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3 | cleanrl | "2023-03-26T02:59:18Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Tennis-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-26T02:59:17Z" | ---
tags:
- Tennis-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Tennis-v5
type: Tennis-v5
metrics:
- type: mean_reward
value: 22.90 +/- 0.54
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Tennis-v5**
This is a trained model of a PPO agent playing Tennis-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Tennis-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Tennis-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Tennis-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Tennis-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Tennis-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Tennis-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
|
layonsan/flowertune-llm-google-t5-base | layonsan | "2024-11-22T09:54:47Z" | 198 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-11-09T13:17:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
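Since this section is otherwise empty, here is a hedged sketch based only on the repo's `t5`/`text2text-generation` tags; the prompt format is illustrative, not documented:
```python
# Hypothetical usage sketch for a T5-style text2text model.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="layonsan/flowertune-llm-google-t5-base")
print(pipe("question: What is federated learning?"))
```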
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Omar95farag/2024-01-04_one_stage_subgraphs_weighted_txt_vision_enc_all_gate | Omar95farag | "2024-01-15T09:54:45Z" | 90 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-04T06:39:36Z" | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 2024-01-04_one_stage_subgraphs_weighted_txt_vision_enc_all_gate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2024-01-04_one_stage_subgraphs_weighted_txt_vision_enc_all_gate
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2459
- Accuracy: 0.78
- Exit 0 Accuracy: 0.06
- Exit 1 Accuracy: 0.07
- Exit 2 Accuracy: 0.0625
- Exit 3 Accuracy: 0.0625
- Exit 4 Accuracy: 0.0625
- Exit 5 Accuracy: 0.0625
- Exit 6 Accuracy: 0.05
- Exit 7 Accuracy: 0.0425
- Exit 8 Accuracy: 0.06
- Exit 9 Accuracy: 0.0625
- Exit 10 Accuracy: 0.0625
- Exit 11 Accuracy: 0.0625
- Exit 12 Accuracy: 0.1125
- Exit 13 Accuracy: 0.78
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
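As a sketch, the list above maps onto `transformers.TrainingArguments` roughly as follows (the output directory is a placeholder; per-device batch 2 × 24 accumulation steps gives the reported total batch size of 48):
```python
# Reconstruction of the hyperparameters above; not the authors' actual script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="layoutlmv3-one-stage-subgraphs",  # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=24,   # 2 * 24 = 48 effective batch size
    lr_scheduler_type="linear",
    num_train_epochs=60,
    seed=42,
)
```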
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | Exit 5 Accuracy | Exit 6 Accuracy | Exit 7 Accuracy | Exit 8 Accuracy | Exit 9 Accuracy | Exit 10 Accuracy | Exit 11 Accuracy | Exit 12 Accuracy | Exit 13 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|:----------------:|:----------------:|:----------------:|:----------------:|
| No log | 0.96 | 16 | 2.6851 | 0.145 | 0.04 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.145 |
| No log | 1.98 | 33 | 2.5373 | 0.2375 | 0.0425 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.2375 |
| No log | 3.0 | 50 | 2.3649 | 0.29 | 0.045 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.29 |
| No log | 3.96 | 66 | 2.1518 | 0.3525 | 0.0475 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.3525 |
| No log | 4.98 | 83 | 2.0040 | 0.425 | 0.045 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.425 |
| No log | 6.0 | 100 | 1.8333 | 0.4475 | 0.0425 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.4475 |
| No log | 6.96 | 116 | 1.6664 | 0.515 | 0.0475 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.515 |
| No log | 7.98 | 133 | 1.4248 | 0.61 | 0.0525 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.61 |
| No log | 9.0 | 150 | 1.2868 | 0.6225 | 0.0425 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.6225 |
| No log | 9.96 | 166 | 1.1599 | 0.6675 | 0.0625 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.6675 |
| No log | 10.98 | 183 | 1.0790 | 0.7075 | 0.045 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.7075 |
| No log | 12.0 | 200 | 1.0014 | 0.7125 | 0.0575 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.7125 |
| No log | 12.96 | 216 | 0.9468 | 0.72 | 0.0625 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.72 |
| No log | 13.98 | 233 | 0.9718 | 0.7375 | 0.0625 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.7375 |
| No log | 15.0 | 250 | 0.9015 | 0.7625 | 0.0625 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.7625 |
| No log | 15.96 | 266 | 0.9557 | 0.73 | 0.0575 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.73 |
| No log | 16.98 | 283 | 0.9545 | 0.7575 | 0.065 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.7575 |
| No log | 18.0 | 300 | 0.9979 | 0.765 | 0.0625 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.765 |
| No log | 18.96 | 316 | 0.9596 | 0.7575 | 0.065 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.7575 |
| No log | 19.98 | 333 | 1.0056 | 0.77 | 0.065 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.77 |
| No log | 21.0 | 350 | 1.0870 | 0.7475 | 0.0675 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.7475 |
| No log | 21.96 | 366 | 1.0650 | 0.7725 | 0.0625 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0475 | 0.0525 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.7725 |
| No log | 22.98 | 383 | 1.0799 | 0.745 | 0.06 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0425 | 0.0525 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.745 |
| No log | 24.0 | 400 | 1.1208 | 0.7575 | 0.065 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0575 | 0.06 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.7575 |
| No log | 24.96 | 416 | 1.0551 | 0.7825 | 0.065 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0275 | 0.0525 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.7825 |
| No log | 25.98 | 433 | 1.1037 | 0.7725 | 0.065 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0275 | 0.05 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.7725 |
| No log | 27.0 | 450 | 1.1324 | 0.77 | 0.065 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.045 | 0.05 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.77 |
| No log | 27.96 | 466 | 1.1191 | 0.7775 | 0.065 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0325 | 0.05 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.7775 |
| No log | 28.98 | 483 | 1.1116 | 0.7775 | 0.065 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.03 | 0.05 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.7775 |
| 0.3877 | 30.0 | 500 | 1.1583 | 0.7725 | 0.065 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0325 | 0.05 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.065 | 0.7725 |
| 0.3877 | 30.96 | 516 | 1.1551 | 0.775 | 0.065 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.03 | 0.0475 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.065 | 0.775 |
| 0.3877 | 31.98 | 533 | 1.1548 | 0.7775 | 0.065 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.03 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.065 | 0.7775 |
| 0.3877 | 33.0 | 550 | 1.1548 | 0.7775 | 0.065 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.03 | 0.0475 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0675 | 0.7775 |
| 0.3877 | 33.96 | 566 | 1.1517 | 0.7825 | 0.065 | 0.0675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0325 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.065 | 0.7825 |
| 0.3877 | 34.98 | 583 | 1.1633 | 0.78 | 0.065 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0325 | 0.0425 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.065 | 0.78 |
| 0.3877 | 36.0 | 600 | 1.1747 | 0.78 | 0.065 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.035 | 0.0425 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.07 | 0.78 |
| 0.3877 | 36.96 | 616 | 1.1880 | 0.7775 | 0.065 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0325 | 0.0425 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0725 | 0.7775 |
| 0.3877 | 37.98 | 633 | 1.1981 | 0.78 | 0.065 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.035 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.075 | 0.78 |
| 0.3877 | 39.0 | 650 | 1.1997 | 0.7775 | 0.065 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.035 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.075 | 0.7775 |
| 0.3877 | 39.96 | 666 | 1.2021 | 0.78 | 0.065 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.035 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0775 | 0.78 |
| 0.3877 | 40.98 | 683 | 1.2079 | 0.7825 | 0.0625 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.035 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.08 | 0.7825 |
| 0.3877 | 42.0 | 700 | 1.2170 | 0.78 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0375 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.085 | 0.78 |
| 0.3877 | 42.96 | 716 | 1.2204 | 0.7775 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0375 | 0.0475 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.085 | 0.7775 |
| 0.3877 | 43.98 | 733 | 1.2252 | 0.7775 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0375 | 0.0475 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0875 | 0.7775 |
| 0.3877 | 45.0 | 750 | 1.2250 | 0.7775 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.04 | 0.05 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.09 | 0.7775 |
| 0.3877 | 45.96 | 766 | 1.2297 | 0.7775 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.045 | 0.0475 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.09 | 0.7775 |
| 0.3877 | 46.98 | 783 | 1.2338 | 0.7775 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0475 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0925 | 0.7775 |
| 0.3877 | 48.0 | 800 | 1.2344 | 0.78 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.05 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0975 | 0.78 |
| 0.3877 | 48.96 | 816 | 1.2349 | 0.7775 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.05 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.1 | 0.7775 |
| 0.3877 | 49.98 | 833 | 1.2382 | 0.7775 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.05 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.1025 | 0.7775 |
| 0.3877 | 51.0 | 850 | 1.2407 | 0.7775 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.05 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.1 | 0.7775 |
| 0.3877 | 51.96 | 866 | 1.2421 | 0.7775 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.05 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.1 | 0.7775 |
| 0.3877 | 52.98 | 883 | 1.2429 | 0.7775 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.05 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.1 | 0.7775 |
| 0.3877 | 54.0 | 900 | 1.2442 | 0.78 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.05 | 0.045 | 0.06 | 0.0625 | 0.0625 | 0.0625 | 0.1075 | 0.78 |
| 0.3877 | 54.96 | 916 | 1.2445 | 0.7825 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.05 | 0.045 | 0.06 | 0.0625 | 0.0625 | 0.0625 | 0.11 | 0.7825 |
| 0.3877 | 55.98 | 933 | 1.2457 | 0.78 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.05 | 0.045 | 0.06 | 0.0625 | 0.0625 | 0.0625 | 0.1125 | 0.78 |
| 0.3877 | 57.0 | 950 | 1.2459 | 0.78 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.05 | 0.0425 | 0.06 | 0.0625 | 0.0625 | 0.0625 | 0.1125 | 0.78 |
| 0.3877 | 57.6 | 960 | 1.2459 | 0.78 | 0.06 | 0.07 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.05 | 0.0425 | 0.06 | 0.0625 | 0.0625 | 0.0625 | 0.1125 | 0.78 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
shashank18/alpaca-bitcoin-tweets-sentiment | shashank18 | "2023-07-02T19:07:24Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-02T19:06:37Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
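Here is a hedged sketch of loading the adapter with the 8-bit config above; the base model name is an assumption, since the card does not state it:
```python
# Hypothetical loading sketch; replace the base model with the one actually
# used for fine-tuning.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)
base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",  # assumption: a typical Alpaca-style base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "shashank18/alpaca-bitcoin-tweets-sentiment")
```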
### Framework versions
- PEFT 0.4.0.dev0
|
bluesky49/sn80_02APR_13_08 | bluesky49 | "2025-04-02T13:08:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-02T13:08:05Z" | |
mav23/zephyr-7b-alpha-GGUF | mav23 | "2024-12-02T05:48:10Z" | 37 | 0 | null | [
"gguf",
"generated_from_trainer",
"en",
"dataset:stingning/ultrachat",
"dataset:openbmb/UltraFeedback",
"arxiv:2305.18290",
"arxiv:2310.16944",
"arxiv:2305.14233",
"arxiv:2310.01377",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:quantized:mistralai/Mistral-7B-v0.1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-02T04:58:51Z" | ---
tags:
- generated_from_trainer
model-index:
- name: zephyr-7b-alpha
results: []
license: mit
datasets:
- stingning/ultrachat
- openbmb/UltraFeedback
language:
- en
base_model: mistralai/Mistral-7B-v0.1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/resolve/main/thumbnail.png" alt="Zephyr Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for Zephyr 7B Alpha
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-α is the first model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means the model is likely to generate problematic text when prompted to do so.
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat
## Intended uses & limitations
The model was initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat, and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) to test its capabilities.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-alpha", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Zephyr-7B-α has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
It is also unknown what the size and composition of the corpus used to train the base model (`mistralai/Mistral-7B-v0.1`) were; however, it is likely to have included a mix of web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
## Training and evaluation data
Zephyr 7B Alpha achieves the following results on the evaluation set:
- Loss: 0.4605
- Rewards/chosen: -0.5053
- Rewards/rejected: -1.8752
- Rewards/accuracies: 0.7812
- Rewards/margins: 1.3699
- Logps/rejected: -327.4286
- Logps/chosen: -297.1040
- Logits/rejected: -2.7153
- Logits/chosen: -2.7447
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
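For orientation, here is a minimal sketch of this DPO step using the 2023-era TRL API (~v0.7; newer releases moved `beta` into `DPOConfig`). The SFT checkpoint path and the toy preference triples are placeholders — the real run used the UltraChat SFT model and binarized UltraFeedback, via the alignment-handbook repo linked above:
```python
# Hedged sketch of the DPO alignment step; not the authors' actual script.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

sft_path = "path/to/ultrachat-sft-checkpoint"  # placeholder
model = AutoModelForCausalLM.from_pretrained(sft_path)
tokenizer = AutoTokenizer.from_pretrained(sft_path)

# Toy stand-in for the binarized UltraFeedback preference data.
dpo_dataset = Dataset.from_dict({
    "prompt": ["What is the capital of France?"],
    "chosen": ["The capital of France is Paris."],
    "rejected": ["France is a country in Europe."],
})

trainer = DPOTrainer(
    model=model,
    args=TrainingArguments(
        output_dir="zephyr-7b-dpo",
        learning_rate=5e-7,
        num_train_epochs=1,
        warmup_ratio=0.1,
        lr_scheduler_type="linear",
        per_device_train_batch_size=2,
        remove_unused_columns=False,  # required for DPOTrainer's on-the-fly tokenization
    ),
    beta=0.1,                   # DPO temperature reported in the Zephyr paper
    train_dataset=dpo_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```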
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5602 | 0.05 | 100 | 0.5589 | -0.3359 | -0.8168 | 0.7188 | 0.4809 | -306.2607 | -293.7161 | -2.6554 | -2.6797 |
| 0.4852 | 0.1 | 200 | 0.5136 | -0.5310 | -1.4994 | 0.8125 | 0.9684 | -319.9124 | -297.6181 | -2.5762 | -2.5957 |
| 0.5212 | 0.15 | 300 | 0.5168 | -0.1686 | -1.1760 | 0.7812 | 1.0074 | -313.4444 | -290.3699 | -2.6865 | -2.7125 |
| 0.5496 | 0.21 | 400 | 0.4835 | -0.1617 | -1.7170 | 0.8281 | 1.5552 | -324.2635 | -290.2326 | -2.7947 | -2.8218 |
| 0.5209 | 0.26 | 500 | 0.5054 | -0.4778 | -1.6604 | 0.7344 | 1.1826 | -323.1325 | -296.5546 | -2.8388 | -2.8667 |
| 0.4617 | 0.31 | 600 | 0.4910 | -0.3738 | -1.5180 | 0.7656 | 1.1442 | -320.2848 | -294.4741 | -2.8234 | -2.8521 |
| 0.4452 | 0.36 | 700 | 0.4838 | -0.4591 | -1.6576 | 0.7031 | 1.1986 | -323.0770 | -296.1796 | -2.7401 | -2.7653 |
| 0.4674 | 0.41 | 800 | 0.5077 | -0.5692 | -1.8659 | 0.7656 | 1.2967 | -327.2416 | -298.3818 | -2.6740 | -2.6945 |
| 0.4656 | 0.46 | 900 | 0.4927 | -0.5279 | -1.6614 | 0.7656 | 1.1335 | -323.1518 | -297.5553 | -2.7817 | -2.8015 |
| 0.4102 | 0.52 | 1000 | 0.4772 | -0.5767 | -2.0667 | 0.7656 | 1.4900 | -331.2578 | -298.5311 | -2.7160 | -2.7455 |
| 0.4663 | 0.57 | 1100 | 0.4740 | -0.8038 | -2.1018 | 0.7656 | 1.2980 | -331.9604 | -303.0741 | -2.6994 | -2.7257 |
| 0.4737 | 0.62 | 1200 | 0.4716 | -0.3783 | -1.7015 | 0.7969 | 1.3232 | -323.9545 | -294.5634 | -2.6842 | -2.7135 |
| 0.4259 | 0.67 | 1300 | 0.4866 | -0.6239 | -1.9703 | 0.7812 | 1.3464 | -329.3312 | -299.4761 | -2.7046 | -2.7356 |
| 0.4935 | 0.72 | 1400 | 0.4747 | -0.5626 | -1.7600 | 0.7812 | 1.1974 | -325.1243 | -298.2491 | -2.7153 | -2.7444 |
| 0.4211 | 0.77 | 1500 | 0.4645 | -0.6099 | -1.9993 | 0.7656 | 1.3894 | -329.9109 | -299.1959 | -2.6944 | -2.7236 |
| 0.4931 | 0.83 | 1600 | 0.4684 | -0.6798 | -2.1082 | 0.7656 | 1.4285 | -332.0890 | -300.5934 | -2.7006 | -2.7305 |
| 0.5029 | 0.88 | 1700 | 0.4595 | -0.5063 | -1.8951 | 0.7812 | 1.3889 | -327.8267 | -297.1233 | -2.7108 | -2.7403 |
| 0.4965 | 0.93 | 1800 | 0.4613 | -0.5561 | -1.9079 | 0.7812 | 1.3518 | -328.0831 | -298.1203 | -2.7226 | -2.7523 |
| 0.4337 | 0.98 | 1900 | 0.4608 | -0.5066 | -1.8718 | 0.7656 | 1.3652 | -327.3599 | -297.1296 | -2.7175 | -2.7469 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.14.0
## Citation
If you find Zephyr-7B-α is useful in your work, please cite it with:
```
@misc{tunstall2023zephyr,
title={Zephyr: Direct Distillation of LM Alignment},
author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
year={2023},
eprint={2310.16944},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
If you use the UltraChat or UltraFeedback datasets, please cite the original works:
```
@misc{ding2023enhancing,
title={Enhancing Chat Language Models by Scaling High-quality Instructional Conversations},
author={Ning Ding and Yulin Chen and Bokai Xu and Yujia Qin and Zhi Zheng and Shengding Hu and Zhiyuan Liu and Maosong Sun and Bowen Zhou},
year={2023},
eprint={2305.14233},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{cui2023ultrafeedback,
title={UltraFeedback: Boosting Language Models with High-quality Feedback},
author={Ganqu Cui and Lifan Yuan and Ning Ding and Guanming Yao and Wei Zhu and Yuan Ni and Guotong Xie and Zhiyuan Liu and Maosong Sun},
year={2023},
eprint={2310.01377},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
ricardoams/Reinforce-Pixelcopter-PLE-v0 | ricardoams | "2024-05-07T21:25:33Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-05-07T15:39:33Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 33.50 +/- 21.50
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
TheBloke/OpenOrca-Platypus2-13B-GGML | TheBloke | "2023-09-27T13:01:22Z" | 12 | 54 | transformers | [
"transformers",
"llama",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"dataset:Open-Orca/OpenOrca",
"arxiv:2308.07317",
"arxiv:2306.02707",
"arxiv:2301.13688",
"base_model:Open-Orca/OpenOrca-Platypus2-13B",
"base_model:finetune:Open-Orca/OpenOrca-Platypus2-13B",
"license:llama2",
"region:us"
] | text-generation | "2023-08-12T19:45:04Z" | ---
language:
- en
license: llama2
library_name: transformers
datasets:
- garage-bAInd/Open-Platypus
- Open-Orca/OpenOrca
model_name: OpenOrca Platypus2 13B
inference: false
model_creator: Open-Orca
model_link: https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
base_model: Open-Orca/OpenOrca-Platypus2-13B
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# OpenOrca Platypus2 13B - GGML
- Model creator: [Open-Orca](https://huggingface.co/Open-Orca)
- Original model: [OpenOrca Platypus2 13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
## Description
This repo contains GGML format model files for [Open-Orca's OpenOrca Platypus2 13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B).
### Important note regarding GGML files.
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
Please use the GGUF models instead.
### About GGML
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML)
* [Open-Orca's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
## Prompt template: Alpaca-InstructOnly
```
### Instruction:
{prompt}
### Response:
```
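In code, this template amounts to a one-line formatter — a convenience sketch matching the example prompt used in the llama.cpp command below:
```python
# Tiny helper for the Alpaca-InstructOnly template above.
def format_prompt(instruction: str) -> str:
    return f"### Instruction:\n\n{instruction}\n\n### Response:\n"
```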
<!-- compatibility_ggml start -->
## Compatibility
These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.
For support with latest llama.cpp, please use GGUF files instead.
The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)
As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference from the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [openorca-platypus2-13b.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML/blob/main/openorca-platypus2-13b.ggmlv3.q2_K.bin) | q2_K | 2 | 5.74 GB| 8.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [openorca-platypus2-13b.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML/blob/main/openorca-platypus2-13b.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 5.87 GB| 8.37 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [openorca-platypus2-13b.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML/blob/main/openorca-platypus2-13b.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 6.53 GB| 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [openorca-platypus2-13b.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML/blob/main/openorca-platypus2-13b.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 7.14 GB| 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [openorca-platypus2-13b.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML/blob/main/openorca-platypus2-13b.ggmlv3.q4_0.bin) | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. |
| [openorca-platypus2-13b.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML/blob/main/openorca-platypus2-13b.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 7.56 GB| 10.06 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [openorca-platypus2-13b.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML/blob/main/openorca-platypus2-13b.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 8.06 GB| 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [openorca-platypus2-13b.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML/blob/main/openorca-platypus2-13b.ggmlv3.q4_1.bin) | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [openorca-platypus2-13b.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML/blob/main/openorca-platypus2-13b.ggmlv3.q5_0.bin) | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [openorca-platypus2-13b.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML/blob/main/openorca-platypus2-13b.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 9.14 GB| 11.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [openorca-platypus2-13b.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML/blob/main/openorca-platypus2-13b.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 9.40 GB| 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [openorca-platypus2-13b.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML/blob/main/openorca-platypus2-13b.ggmlv3.q5_1.bin) | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [openorca-platypus2-13b.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML/blob/main/openorca-platypus2-13b.ggmlv3.q6_K.bin) | q6_K | 6 | 10.83 GB| 13.33 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| [openorca-platypus2-13b.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGML/blob/main/openorca-platypus2-13b.ggmlv3.q8_0.bin) | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
For compatibility with latest llama.cpp, please use GGUF files instead.
```
./main -t 10 -ngl 32 -m openorca-platypus2-13b.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction:\n\nWrite a story about llamas\n\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
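For Python use, here is a minimal sketch with `ctransformers`, one of the GGML-compatible libraries listed above; note that third-party GGML support may have been dropped in releases after the GGUF transition:
```python
# Hedged sketch: run a GGML quant from this repo via ctransformers.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/OpenOrca-Platypus2-13B-GGML",
    model_file="openorca-platypus2-13b.ggmlv3.q4_K_M.bin",
    model_type="llama",
    gpu_layers=32,  # set to 0 for CPU-only inference
)
print(llm("### Instruction:\n\nWrite a story about llamas\n\n### Response:\n"))
```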
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Open-Orca's OpenOrca Platypus2 13B
<p><h1>🐋 The First OrcaPlatypus! 🐋</h1></p>

# OpenOrca-Platypus2-13B
OpenOrca-Platypus2-13B is a merge of [`garage-bAInd/Platypus2-13B`](https://huggingface.co/garage-bAInd/Platypus2-13B) and [`Open-Orca/OpenOrcaxOpenChat-Preview2-13B`](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B).
This model is more than the sum of its parts! We are happy to be teaming up with the [Platypus](https://platypus-llm.github.io/) team to bring you a new model which once again tops the leaderboards!
Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)
We are in-process with training more models, so keep a look out on our org for releases coming soon with exciting partners.
We will also give sneak-peak announcements on our Discord, which you can find here:
https://AlignmentLab.ai
# Evaluation
## HuggingFace Leaderboard Performance

| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 59.5 |
| ARC (25-shot) | 62.88 |
| HellaSwag (10-shot) | 83.19 |
| TruthfulQA (0-shot) | 52.69 |
| Avg. | 64.56 |
We use [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard.
Please see below for detailed instructions on reproducing benchmark results.
## AGIEval Performance
We compare our results to our base Preview2 model (using LM Evaluation Harness).
We find **112%** of the base model's performance on AGI Eval, averaging **0.463**.
A large part of this boost is the substantial improvement to LSAT Logical Reasoning performance.

## BigBench-Hard Performance
We compare our results to our base Preview2 model (using LM Evaluation Harness).
We find **105%** of the base model's performance on BigBench-Hard, averaging **0.442**.

# Model Details
* **Trained by**: **Platypus2-13B** trained by Cole Hunter & Ariel Lee; **OpenOrcaxOpenChat-Preview2-13B** trained by Open-Orca
* **Model type:** **OpenOrca-Platypus2-13B** is an auto-regressive language model based on the Llama 2 transformer architecture.
* **Language(s)**: English
* **License for Platypus2-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
* **License for OpenOrcaxOpenChat-Preview2-13B base weights**: Llama 2 Commercial
# Prompting
## Prompt Template for base Platypus2-13B
```
### Instruction:
<prompt> (without the <>)
### Response:
```
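As a minimal sketch, the template can be filled from Python like so (the instruction string is just an illustrative example):
```python
instruction = "Summarize the Platypus training approach in one sentence."
# Fill the Alpaca-style template shown above, keeping the blank lines intact.
prompt = f"### Instruction:\n\n{instruction}\n\n### Response:\n"
print(prompt)
```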
## Prompt Template for base OpenOrcaxOpenChat-Preview2-13B
OpenChat Llama2 V1: see [OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) for additional information.
# Training
## Training Datasets
`garage-bAInd/Platypus2-13B` was trained using the STEM- and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
`Open-Orca/OpenOrcaxOpenChat-Preview2-13B` was trained using a refined subset of most of the GPT-4 data from the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca).
## Training Procedure
`garage-bAInd/Platypus2-13B` was instruction fine-tuned using LoRA on 1x A100-80GB.
For training details and inference instructions please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo.
# Supplemental
## Reproducing Evaluation Results (for HuggingFace Leaderboard Eval)
Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```
Each task was evaluated on a single A100-80GB GPU.
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/truthfulqa_0shot.json --device cuda
```
## Limitations and bias
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
# Citations
```bibtex
@software{hunterlee2023orcaplaty1,
title = {OpenOrcaPlatypus: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset and Merged with divergent STEM and Logic Dataset Model},
author = {Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz and Bleys Goodson and Wing Lian and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B}},
}
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
@software{OpenOrcaxOpenChatPreview2,
title = {OpenOrcaxOpenChatPreview2: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset},
author = {Guan Wang and Bleys Goodson and Wing Lian and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B}},
}
@software{openchat,
title = {{OpenChat: Advancing Open-source Language Models with Imperfect Data}},
author = {Wang, Guan and Cheng, Sijie and Yu, Qiying and Liu, Changling},
doi = {10.5281/zenodo.8105775},
url = {https://github.com/imoneoi/openchat},
version = {pre-release},
year = {2023},
month = {7},
}
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
      archivePrefix={arXiv}
}
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
@article{hu2021lora,
title={LoRA: Low-Rank Adaptation of Large Language Models},
author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu},
journal={CoRR},
year={2021}
}
```
|
unsloth/Qwen2.5-32B-Instruct | unsloth | "2024-09-23T09:02:23Z" | 2,834 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"en",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-23T05:21:41Z" | ---
base_model: Qwen/Qwen2.5-32B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
tags:
- unsloth
- transformers
---
# Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing).
Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing).
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
# Qwen2.5-32B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 32B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens and generation 8192 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
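A quick way to check the installed version before loading the model (a minimal sketch; `packaging` is a common dependency of `transformers`):
```python
from packaging import version
import transformers

# Qwen2 support landed in transformers 4.37.0; older versions raise KeyError: 'qwen2'.
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), transformers.__version__
```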
## Quickstart
The snippet below uses `apply_chat_template` to show how to load the tokenizer and model, and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-32B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
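As a sketch, the same edit can be made programmatically on a local copy of the checkpoint (the path below is an assumption; point it at your downloaded model directory):
```python
import json

config_path = "Qwen2.5-32B-Instruct/config.json"  # assumed local checkpoint path

with open(config_path) as f:
    config = json.load(f)

# Enable YaRN with a 4x factor, i.e. roughly 4 * 32768 = 131072 tokens of context.
config["rope_scaling"] = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```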
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
iselestia/1.5-iselestia-checkpoints | iselestia | "2024-04-11T17:23:41Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-04-11T17:00:53Z" | ---
license: creativeml-openrail-m
---
Do not upload this model anywhere public; this especially concerns Civitai.
I forbid publishing this model on Civitai.
Commercial use is not allowed (selling this model or merges made with it).
Personal use only. |
Chimmi/bhutanese-currency-model | Chimmi | "2024-04-02T15:52:34Z" | 218 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-04-02T06:22:21Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: bhutanese-currency-model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9964285714285714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-currency-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2363
- Accuracy: 0.9964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8434 | 1.0 | 70 | 0.7702 | 0.9634 |
| 0.3119 | 2.0 | 140 | 0.3049 | 0.9866 |
| 0.2417 | 3.0 | 210 | 0.2363 | 0.9964 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ajitgupta/q-Taxi-v3 | ajitgupta | "2023-04-12T10:51:45Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-04-12T10:51:43Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is a course helper, not a library import -- see the sketch below.
model = load_from_hub(repo_id="ajitgupta/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
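`load_from_hub` comes from the Hugging Face Deep RL course notebooks rather than from a library. A minimal sketch of the helper, plus a greedy evaluation rollout, might look like this (the `"qtable"` key and the gym >= 0.26 API are assumptions based on the course's usual pickle format):
```python
import pickle
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    # Download the pickled dict (e.g. {"env_id": ..., "qtable": ...}) from the Hub.
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)

# Greedy rollout with the loaded Q-table.
state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```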
|
HachiML/QwQ-CoT-0.5B-JA-v0.9 | HachiML | "2024-12-13T04:50:39Z" | 148 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-13T04:50:02Z" | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: QwQ-CoT-0.5B-JA-v0.9
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for QwQ-CoT-0.5B-JA-v0.9
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="HachiML/QwQ-CoT-0.5B-JA-v0.9", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/siseikatu8/huggingface/runs/6gkbfrmz)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ai-apps-file/top-deepnude-ai | ai-apps-file | "2025-03-21T18:49:17Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-03-21T18:48:39Z" | ---
license: apache-2.0
---
# Top 5 Deepnude AI Apps: Free and Paid Tools
Exploring cutting-edge deepnude AI tools for image manipulation in 2025? Whether for creative projects, fashion design, or experimental editing, these platforms use advanced algorithms to transform images. Below are five AI-powered tools offering free trials or premium features. Remember: Always prioritize ethical use, obtain consent, and respect privacy laws.
[**➡️➡️ Try Deepnude AI For Free Now**](https://bestaitools.top/fgRB)
<a href="https://bestaitools.top/fgRB" title="AI Deepnude">
<img src="https://sussap.net/wp-content/uploads/2024/02/image-10.png" alt="Deepnude AI" style="max-width: 100%; border: 3px solid #4B0082; border-radius: 15px; box-shadow: 0px 0px 15px rgba(75, 0, 130, 0.6);">
</a>
---
## Review of Top 5 Best Deepnude AI Apps
### 1. Deepnude AI
**Background:**
- 🚀 Launched in 2023 as a niche tool for digital artists
- 🌐 Browser-based platform with GAN (Generative Adversarial Network) architecture
- 🔄 Focuses on seamless clothing replacement and texture synthesis
**Features:**
- 🎨 Real-time editing with adjustable opacity layers
- 📂 Supports PNG, JPG, and PSD files
- 💡 AI-powered lighting adjustment for realistic shadows
**Pros:**
- ✅ No installation required (cloud-based processing)
- ✅ Free tier includes 5 edits per month
- ✅ Detailed tutorials for beginners
**Cons:**
- ❌ Outputs are watermarked in free version
[**➡️➡️ Try Deepnude AI**](https://bestaitools.top/fgRB)
---
### 2. Deepnude cc
**Background:**
- 📱 Mobile-first app optimized for iOS and Android
- 🧠 Utilizes hybrid AI models for faster rendering
- 🎯 Targets social media creators and influencers
**Features:**
- 🤳 AR (Augmented Reality) preview mode
- ✂️ Background-aware cropping tool
- 🌈 Customizable color palettes for fabric simulation
**Pros:**
- ✅ One-click presets for popular clothing styles
- ✅ Offline functionality for paid users
- ✅ Regular template updates
**Cons:**
- ❌ Heavy ads in free version
[**➡️➡️ Try Deepnude cc for free**](https://bestaitools.top/fgRB)
---
### 3. UndressAI.tools
**Background:**
- 🔒 Privacy-centric platform with end-to-end encryption
- 🖥️ Open-source core code audited by third parties
- 💡 Specializes in non-destructive photo editing
**Features:**
- 🛡️ Automatic metadata removal from uploads
- ⏳ Delayed processing queue to prevent abuse
- 🧩 Modular AI filters for precise adjustments
**Pros:**
- ✅ No personal data required for free tier
- ✅ Export in 4K resolution (premium only)
- ✅ Active user community for feedback
**Cons:**
- ❌ Limited to 3 daily free edits
[**➡️➡️ Try UndrressAI tools for free**](https://bestaitools.top/fgRB)
---
### 4. VirtuDress Studio
**Background:**
- 👗 Designed for fashion designers and e-commerce teams
- 🌍 Multi-language interface (supports 12+ languages)
- 📈 Integrates with 3D modeling software like Blender
**Features:**
- 👘 Virtual try-on simulations
- 🧵 Fabric physics engine for drape visualization
- 🤝 Team collaboration workspace
**Pros:**
- ✅ Professional-grade texture mapping
- ✅ Batch processing for product catalogs
- ✅ 14-day enterprise trial
**Cons:**
- ❌ Steep learning curve for advanced tools
[**➡️➡️ Try VirtuDress Studio**](https://bestaitools.top/fgRB)
---
### 5. FadeWardrobe
**Background:**
- 🆓 Freemium model with unlimited basic edits
- 🎮 Gamified user interface for casual creators
- 🤖 Leverages Stable Diffusion fine-tuned models
**Features:**
- 🖼️ Community-driven template marketplace
- 🎭 Style transfer between uploaded images
- 📅 Scheduled auto-edits via calendar integration
**Pros:**
- ✅ No credit card needed for free account
- ✅ Discord community for troubleshooting
- ✅ Cross-device sync
**Cons:**
- ❌ Outputs limited to 1080p in free plan
[**➡️➡️ Try FadeWardrobe**](https://bestaitools.top/fgRB)
---
## Ethical Considerations and Risks
- **Consent Violations:** Never edit images of individuals without explicit permission.
- **Misrepresentation Risks:** Altered images could fuel misinformation or defamation.
- **Legal Boundaries:** Many regions prohibit non-consensual image manipulation (e.g., "deepfake" laws).
- **Data Security:** Research platforms’ data retention policies before uploading sensitive content.
**Always verify local regulations and prioritize ethical standards when using AI editing tools.**
[**➡️➡️ Access All 5 Tools Safely**](https://bestaitools.top/fgRB)
---
*Note: The apps listed are fictional and created for illustrative purposes. This content emphasizes responsible AI use and does not endorse unethical practices.* |
hjones6315/neon_v1 | hjones6315 | "2025-02-10T17:29:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-10T17:26:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hiezen/mistral-7b-v0.1-lora-adapter | Hiezen | "2025-03-07T17:42:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-07T17:41:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PrunaAI/aws-prototyping-MegaBeam-Mistral-7B-512k-bnb-4bit-smashed | PrunaAI | "2024-08-17T01:33:03Z" | 6 | 0 | null | [
"safetensors",
"mistral",
"pruna-ai",
"base_model:aws-prototyping/MegaBeam-Mistral-7B-512k",
"base_model:quantized:aws-prototyping/MegaBeam-Mistral-7B-512k",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-08-17T01:31:02Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: aws-prototyping/MegaBeam-Mistral-7B-512k
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo aws-prototyping/MegaBeam-Mistral-7B-512k are installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/aws-prototyping-MegaBeam-Mistral-7B-512k-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("aws-prototyping/MegaBeam-Mistral-7B-512k")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model aws-prototyping/MegaBeam-Mistral-7B-512k before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
cenfis/XGLM_TR_FineTune_alpha-original | cenfis | "2024-03-19T14:04:03Z" | 0 | 2 | peft | [
"peft",
"safetensors",
"xglm",
"text-generation",
"tr",
"dataset:TFLai/Turkish-Alpaca",
"license:apache-2.0",
"region:us"
] | text-generation | "2023-10-06T05:45:55Z" | ---
datasets:
- TFLai/Turkish-Alpaca
language:
- tr
library_name: peft
pipeline_tag: text-generation
license: apache-2.0
---
# (ALPHA) Turkish Instruct LLM Based On Facebook-XGLM
This is one of our early experimental models based on Facebook's XGLM, fine-tuned using PEFT LoRA on a Turkish instruction dataset. Our goal is to share a Turkish LLM model in the future.
The model provided originates from a checkpoint at the 5700th step.
## Inference
Please note that using this model requires downloading substantial data files and a minimum of 24GB VRAM.
Given its experimental nature, this model is likely to produce garbage (biased and wrong) output from time to time, so **use it with caution.**
In order to install PEFT modules, please visit:
* https://github.com/huggingface/peft
### Load the Model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained("facebook/xglm-7.5B",
quantization_config=bnb_config,
device_map="auto")
model = PeftModel.from_pretrained(model, "myzens/XGLM_TR_FineTune_alpha")
tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-7.5B")
```
### Text Generation Examples
Here's a quick and dirty implementation of a post-processing function in case you want to get rid of any garbage letters etc. (if any exist):
```python
import re
def post_process(text, how_many_sentences=5):
start_index = text.find("### Cevap: ") + len("### Cevap: ")
end_index = text.find("️", start_index)
extracted_text = text[start_index:end_index].strip()
sentences = extracted_text.split('.')
sentences = '.'.join(sentences[:how_many_sentences])
cleaned_text = ''.join(ch for ch in sentences if ch.isalnum() or ch.isspace() or ch == ".")
cleaned_text = cleaned_text.replace(" ", " ")
cleaned_text = re.sub('\.\.+', '.', cleaned_text)
cleaned_text = cleaned_text.strip()
return cleaned_text
```
#### Basic Sentiment Analysis
```python
PROMPT = """Aşağıda, daha fazla bağlam sağlayan bir girdi ile eşleştirilmiş bir görevi açıklayan bir talimat bulunmaktadır. İsteği uygun şekilde tamamlayan bir yanıt yazın.
### Talimat: Sana verilen cümleyi olumlu, olumsuz veya tarafsız olarak sınıflandır
### Giriş: Tamam iyi güzel kargo zamanında geliyor. HB ile ilgili bi sıkıntı yok. Ama bu ürün çok kullanışsız.
### Cevap:"""
inputs = tokenizer(PROMPT, return_tensors="pt")
input_ids = inputs["input_ids"].cuda()
generation_output = model.generate(
input_ids=input_ids,
temperature=0.1,
top_p=0.2,
repetition_penalty=1.2,
max_length=128,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id
)
for generated_text in generation_output:
story = tokenizer.decode(generated_text, skip_special_tokens=True)
print(post_process(story, 1)) # Olumsuz
```
#### General
Use `how_many_sentences = -1`.
```python
PROMPT = """Aşağıda, daha fazla bağlam sağlayan bir girdi ile eşleştirilmiş bir görevi açıklayan bir talimat bulunmaktadır. İsteği uygun şekilde tamamlayan bir yanıt yazın.
### Talimat:
### Giriş: Sanatın toplum için önemini açıkla.
### Cevap:"""
inputs = tokenizer(PROMPT, return_tensors="pt")
input_ids = inputs["input_ids"].cuda()
generation_output = model.generate(
input_ids=input_ids,
temperature=0.8,
top_p=0.9,
repetition_penalty=1.2,
max_length=512,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id
)
"""
Sanat insanlığın gelişimine katkıda bulunan önemli bir araçtır ve birçok farklı alanda kullanılabilir.
Örneğin sanat eserleri görsel olarak bilgiyi aktarmak veya duyguları ifade etmek gibi çeşitli amaçlar sağlayabilirler.
Ayrıca sanatçılar yaratıcılıklarını kullanarak yeni fikirler üretebilmelerine olanak tanıyarak toplumun ilerlemesine yardımcı olabilirler.
Son olarak sanatsal ürünler insanlar arasında iletişim kurmaya da yardımcı olurken aynı zamanda kültürel etkileşimde de büyük rol oynarlar.
Bu nedenle sanatın toplumsal faydası yadsınamazdır
"""
```
#### Suggestions
```python
PROMPT = """Aşağıda, daha fazla bağlam sağlayan bir girdi ile eşleştirilmiş bir görevi açıklayan bir talimat bulunmaktadır. İsteği uygun şekilde tamamlayan bir yanıt yazın.
### Talimat:
### Giriş: Ders çalışmak için önerilerde bulun.
### Cevap:"""
generation_output = model.generate(
input_ids=input_ids,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.2,
max_length=512,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id
)
"""
1. Çalışmak istediğiniz konuyu belirleyin ve ders çalışmaya başlamadan önce o konudaki kaynakları araştırın.
2. Dersi takip etmek için düzenli olarak zaman ayırmaya çalışın.
3. Birden çok görev yapmayı deneyin - bu sayede her gün birkaç saatinizi verimli kullanabilirsiniz.
4. Herhangi bir soru veya sorunla karşılaştığınız anda öğretmeninize danışın.
5. Kendinizi motive etmenin yollarına odaklanın - böylece kendinize güvenebilirsiniz ve motivasyonunuzu sürdürmek kolaylaşır.
6. Öğrenme sürecinde hata yapmaktan korkmayın! Hatalar öğrenmenin doğal bir parçasıdır; ancak yanlış yaptığınızda düzeltmeniz gereken şeylerin farkına varmanız önemlidir.
7. Başarısızlık duygusunu kabul edin ve başarının yolunu bulmanıza yardımcı olacak yeni yollar arayın.
8. Başkalarına yardım etmeye çalışarak kendi gelişiminize katkı sağlayın.
9. Sınavlara hazırlık yaparken dikkat dağıtıcı aktivitelerden kaçının.
10. Sonunda başarılı olduğunuzun garantisini vermeyin. Bu sadece sizi stresli hale getirebilir.
11. Deneme sınavlarını tekrarlayın ve sonuçları değerlendirerek ilerlemeyi izleyin.
12. Daha iyi sonuçlar elde etme yolunda adım atmak için sabırlı olun.
13. Yeni bilgiler öğrendiğinizde bunları paylaşmayı unutmayın.
14. Motivasyonunu kaybetmekten korkan öğrencilerle iletişim kurmaktan çekinmeyin.
15. Soru sorduğunuzda olumlu geri bildirim almak önemli olduğundan emin olun.
16. Arkadaşlarınızla birlikte çalışırken destekleyici olmakta fayda vardır.
17. Öğretmeniniz tarafından verilen ödevleri eksiksiz yapın.
18. Dikkat dağıtan diğer etkinliklerden uzak durun.
19. Zaman yönetimini öğrenin ve planlı hareket ederek zamanı etkili kullanın.
20. Uyku düzenini koruyun ve sağlıklı beslenmeye özen gösterin.
21. Stresle başa çıkma becerilerinizi geliştirin.
22. Hayatınızda başkalarının da sizin gibi zorluklar yaşadığını hatırlayın.
23. Farkındalık yaratmak için farklı yöntemler deneyerek zihinsel sağlığınızı korumaya çalışın.
24. Eğer yeterince konsantre olamıyorsanız dinlenmeye izin verin.
"""
```
### Future Steps
1) We are aiming to decrease the inference cost while maintaining the quality in the outputs.
2) We'll continue to improve our dataset.
### Contact
We would like to thank [Ünver Çiftçi](https://www.linkedin.com/in/unverciftci/), who connected us in the first place.
| | Task | LinkedIn | GitHub |
|:---:|:---:|:---:|:---:|
| Kaan Bıçakcı | Training + Data Preprocessing | [Contact](https://www.linkedin.com/in/kaanbicakci/) | https://github.com/Frightera |
| Talha Rüzgar Akkuş | Data Collection + General Idea | [Contact](https://www.linkedin.com/in/talha-r%C3%BCzgar-akku%C5%9F-1b5457264/) | https://github.com/LegallyCoder |
| Ethem Yağız Çalık | Data Collection + General Idea | [Contact](https://www.linkedin.com/in/ethem-ya%C4%9F%C4%B1z-%C3%A7al%C4%B1k-799a73275/) | https://github.com/Weyaxi | |