Dataset columns:

- modelId: string (length 5 to 139)
- author: string (length 2 to 42)
- last_modified: timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-05-31 18:27:08)
- downloads: int64 (0 to 223M)
- likes: int64 (0 to 11.7k)
- library_name: string (461 classes)
- tags: sequence of strings (length 1 to 4.05k)
- pipeline_tag: string (54 classes)
- createdAt: timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-05-31 18:26:36)
- card: string (length 11 to 1.01M)

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
Omar95farag/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-12_text_visual_concat_only | Omar95farag | 2024-01-15T09:53:58Z | 94 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-12T08:10:10Z | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-12_text_visual_concat_only
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-12_text_visual_concat_only
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1314
- Accuracy: 0.7925
- Exit 0 Accuracy: 0.075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
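For reference, a minimal sketch of how these hyperparameters would typically be expressed with `transformers.TrainingArguments` (the output directory and evaluation strategy below are assumptions; the original training script is not included in this card):

```python
# Hypothetical sketch only -- maps the reported hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./EElayoutlmv3_rvl_cdip_text_visual_concat_only",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=24,   # effective train batch size: 2 * 24 = 48
    lr_scheduler_type="linear",
    num_train_epochs=60,
    evaluation_strategy="epoch",      # assumed; the card reports per-epoch metrics
)
# The Adam settings listed above (betas=(0.9, 0.999), epsilon=1e-08) are the
# Trainer's default optimizer configuration, so no extra arguments are needed.
```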
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|
| No log | 0.96 | 16 | 2.6835 | 0.1175 | 0.0775 |
| No log | 1.98 | 33 | 2.4924 | 0.2375 | 0.0675 |
| No log | 3.0 | 50 | 2.2916 | 0.34 | 0.07 |
| No log | 3.96 | 66 | 2.0277 | 0.475 | 0.07 |
| No log | 4.98 | 83 | 1.7026 | 0.5775 | 0.07 |
| No log | 6.0 | 100 | 1.4354 | 0.6625 | 0.075 |
| No log | 6.96 | 116 | 1.2357 | 0.6975 | 0.07 |
| No log | 7.98 | 133 | 1.0754 | 0.745 | 0.0725 |
| No log | 9.0 | 150 | 0.9760 | 0.77 | 0.07 |
| No log | 9.96 | 166 | 0.8967 | 0.765 | 0.07 |
| No log | 10.98 | 183 | 0.8561 | 0.7875 | 0.07 |
| No log | 12.0 | 200 | 0.8837 | 0.7575 | 0.07 |
| No log | 12.96 | 216 | 0.8885 | 0.7725 | 0.07 |
| No log | 13.98 | 233 | 0.7819 | 0.7725 | 0.0675 |
| No log | 15.0 | 250 | 0.9067 | 0.7725 | 0.07 |
| No log | 15.96 | 266 | 0.9086 | 0.775 | 0.075 |
| No log | 16.98 | 283 | 0.8444 | 0.795 | 0.07 |
| No log | 18.0 | 300 | 0.9359 | 0.7775 | 0.0725 |
| No log | 18.96 | 316 | 0.9696 | 0.7825 | 0.07 |
| No log | 19.98 | 333 | 0.9254 | 0.7825 | 0.075 |
| No log | 21.0 | 350 | 0.9879 | 0.7775 | 0.0675 |
| No log | 21.96 | 366 | 0.9894 | 0.79 | 0.075 |
| No log | 22.98 | 383 | 1.0087 | 0.785 | 0.07 |
| No log | 24.0 | 400 | 1.0101 | 0.785 | 0.07 |
| No log | 24.96 | 416 | 1.0188 | 0.7875 | 0.075 |
| No log | 25.98 | 433 | 1.0266 | 0.79 | 0.075 |
| No log | 27.0 | 450 | 1.0357 | 0.79 | 0.07 |
| No log | 27.96 | 466 | 1.0505 | 0.7875 | 0.0675 |
| No log | 28.98 | 483 | 1.0524 | 0.7825 | 0.07 |
| 1.6927 | 30.0 | 500 | 1.0656 | 0.785 | 0.075 |
| 1.6927 | 30.96 | 516 | 1.0642 | 0.785 | 0.0725 |
| 1.6927 | 31.98 | 533 | 1.0740 | 0.785 | 0.0725 |
| 1.6927 | 33.0 | 550 | 1.0830 | 0.785 | 0.075 |
| 1.6927 | 33.96 | 566 | 1.0860 | 0.785 | 0.0675 |
| 1.6927 | 34.98 | 583 | 1.0939 | 0.7875 | 0.0675 |
| 1.6927 | 36.0 | 600 | 1.0969 | 0.79 | 0.0675 |
| 1.6927 | 36.96 | 616 | 1.0966 | 0.7875 | 0.07 |
| 1.6927 | 37.98 | 633 | 1.1024 | 0.79 | 0.07 |
| 1.6927 | 39.0 | 650 | 1.1045 | 0.7875 | 0.075 |
| 1.6927 | 39.96 | 666 | 1.1050 | 0.7875 | 0.0725 |
| 1.6927 | 40.98 | 683 | 1.1085 | 0.7875 | 0.075 |
| 1.6927 | 42.0 | 700 | 1.1171 | 0.7875 | 0.075 |
| 1.6927 | 42.96 | 716 | 1.1193 | 0.7875 | 0.075 |
| 1.6927 | 43.98 | 733 | 1.1188 | 0.79 | 0.075 |
| 1.6927 | 45.0 | 750 | 1.1220 | 0.79 | 0.0725 |
| 1.6927 | 45.96 | 766 | 1.1270 | 0.79 | 0.0725 |
| 1.6927 | 46.98 | 783 | 1.1268 | 0.7875 | 0.075 |
| 1.6927 | 48.0 | 800 | 1.1264 | 0.79 | 0.075 |
| 1.6927 | 48.96 | 816 | 1.1267 | 0.795 | 0.075 |
| 1.6927 | 49.98 | 833 | 1.1273 | 0.79 | 0.0725 |
| 1.6927 | 51.0 | 850 | 1.1268 | 0.79 | 0.07 |
| 1.6927 | 51.96 | 866 | 1.1283 | 0.795 | 0.07 |
| 1.6927 | 52.98 | 883 | 1.1293 | 0.795 | 0.075 |
| 1.6927 | 54.0 | 900 | 1.1306 | 0.795 | 0.075 |
| 1.6927 | 54.96 | 916 | 1.1306 | 0.795 | 0.075 |
| 1.6927 | 55.98 | 933 | 1.1310 | 0.795 | 0.075 |
| 1.6927 | 57.0 | 950 | 1.1314 | 0.7925 | 0.075 |
| 1.6927 | 57.6 | 960 | 1.1314 | 0.7925 | 0.075 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Omar95farag/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-12_vision_only | Omar95farag | 2024-01-15T09:53:58Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-12T04:42:47Z | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-12_vision_only
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-12_vision_only
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2733
- Accuracy: 0.7825
- Exit 0 Accuracy: 0.0775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|
| No log | 0.96 | 16 | 2.6800 | 0.13 | 0.0675 |
| No log | 1.98 | 33 | 2.4446 | 0.28 | 0.0725 |
| No log | 3.0 | 50 | 2.1924 | 0.37 | 0.0675 |
| No log | 3.96 | 66 | 1.8733 | 0.5175 | 0.07 |
| No log | 4.98 | 83 | 1.6056 | 0.6075 | 0.0775 |
| No log | 6.0 | 100 | 1.3480 | 0.6725 | 0.08 |
| No log | 6.96 | 116 | 1.1393 | 0.735 | 0.07 |
| No log | 7.98 | 133 | 1.0738 | 0.7375 | 0.07 |
| No log | 9.0 | 150 | 0.9271 | 0.7725 | 0.075 |
| No log | 9.96 | 166 | 0.8885 | 0.7675 | 0.085 |
| No log | 10.98 | 183 | 0.8669 | 0.76 | 0.075 |
| No log | 12.0 | 200 | 0.8547 | 0.7775 | 0.0725 |
| No log | 12.96 | 216 | 0.8633 | 0.76 | 0.07 |
| No log | 13.98 | 233 | 0.8498 | 0.7675 | 0.075 |
| No log | 15.0 | 250 | 0.9608 | 0.7675 | 0.0675 |
| No log | 15.96 | 266 | 0.8952 | 0.7875 | 0.08 |
| No log | 16.98 | 283 | 0.9486 | 0.7575 | 0.0725 |
| No log | 18.0 | 300 | 0.9826 | 0.765 | 0.0825 |
| No log | 18.96 | 316 | 1.0230 | 0.7625 | 0.09 |
| No log | 19.98 | 333 | 1.0961 | 0.76 | 0.0875 |
| No log | 21.0 | 350 | 1.0083 | 0.785 | 0.07 |
| No log | 21.96 | 366 | 1.0394 | 0.7725 | 0.0725 |
| No log | 22.98 | 383 | 1.0825 | 0.78 | 0.085 |
| No log | 24.0 | 400 | 1.0789 | 0.77 | 0.075 |
| No log | 24.96 | 416 | 1.1030 | 0.7725 | 0.0925 |
| No log | 25.98 | 433 | 1.1252 | 0.775 | 0.075 |
| No log | 27.0 | 450 | 1.1333 | 0.7725 | 0.0725 |
| No log | 27.96 | 466 | 1.1416 | 0.765 | 0.0775 |
| No log | 28.98 | 483 | 1.1442 | 0.7775 | 0.0775 |
| 1.6768 | 30.0 | 500 | 1.1620 | 0.7825 | 0.1025 |
| 1.6768 | 30.96 | 516 | 1.1617 | 0.7825 | 0.0775 |
| 1.6768 | 31.98 | 533 | 1.1788 | 0.775 | 0.0875 |
| 1.6768 | 33.0 | 550 | 1.1858 | 0.7725 | 0.0825 |
| 1.6768 | 33.96 | 566 | 1.1842 | 0.7825 | 0.0725 |
| 1.6768 | 34.98 | 583 | 1.1964 | 0.785 | 0.085 |
| 1.6768 | 36.0 | 600 | 1.2034 | 0.78 | 0.075 |
| 1.6768 | 36.96 | 616 | 1.2050 | 0.7825 | 0.07 |
| 1.6768 | 37.98 | 633 | 1.2111 | 0.7825 | 0.075 |
| 1.6768 | 39.0 | 650 | 1.2217 | 0.785 | 0.0925 |
| 1.6768 | 39.96 | 666 | 1.2510 | 0.7775 | 0.105 |
| 1.6768 | 40.98 | 683 | 1.2512 | 0.7825 | 0.0825 |
| 1.6768 | 42.0 | 700 | 1.2529 | 0.7775 | 0.0775 |
| 1.6768 | 42.96 | 716 | 1.2557 | 0.78 | 0.0725 |
| 1.6768 | 43.98 | 733 | 1.2615 | 0.7775 | 0.0775 |
| 1.6768 | 45.0 | 750 | 1.2621 | 0.78 | 0.0825 |
| 1.6768 | 45.96 | 766 | 1.2613 | 0.785 | 0.075 |
| 1.6768 | 46.98 | 783 | 1.2614 | 0.78 | 0.075 |
| 1.6768 | 48.0 | 800 | 1.2598 | 0.7825 | 0.075 |
| 1.6768 | 48.96 | 816 | 1.2650 | 0.7825 | 0.085 |
| 1.6768 | 49.98 | 833 | 1.2665 | 0.7825 | 0.08 |
| 1.6768 | 51.0 | 850 | 1.2673 | 0.785 | 0.0775 |
| 1.6768 | 51.96 | 866 | 1.2626 | 0.7775 | 0.075 |
| 1.6768 | 52.98 | 883 | 1.2643 | 0.7825 | 0.075 |
| 1.6768 | 54.0 | 900 | 1.2702 | 0.78 | 0.0775 |
| 1.6768 | 54.96 | 916 | 1.2723 | 0.78 | 0.0775 |
| 1.6768 | 55.98 | 933 | 1.2730 | 0.7825 | 0.0775 |
| 1.6768 | 57.0 | 950 | 1.2732 | 0.7825 | 0.0775 |
| 1.6768 | 57.6 | 960 | 1.2733 | 0.7825 | 0.0775 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Omar95farag/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-12_text_vision_only | Omar95farag | 2024-01-15T09:53:57Z | 92 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-12T01:20:21Z | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-12_text_vision_only
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-12_text_vision_only
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3116
- Accuracy: 0.7775
- Exit 0 Accuracy: 0.065
- Exit 1 Accuracy: 0.09
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|
| No log | 0.96 | 16 | 2.6827 | 0.1275 | 0.0625 | 0.04 |
| No log | 1.98 | 33 | 2.5158 | 0.25 | 0.06 | 0.095 |
| No log | 3.0 | 50 | 2.3010 | 0.33 | 0.0625 | 0.095 |
| No log | 3.96 | 66 | 1.9997 | 0.435 | 0.055 | 0.0925 |
| No log | 4.98 | 83 | 1.7239 | 0.6 | 0.06 | 0.0925 |
| No log | 6.0 | 100 | 1.4812 | 0.6175 | 0.06 | 0.09 |
| No log | 6.96 | 116 | 1.2872 | 0.6875 | 0.0625 | 0.09 |
| No log | 7.98 | 133 | 1.1118 | 0.74 | 0.055 | 0.09 |
| No log | 9.0 | 150 | 1.0144 | 0.7425 | 0.0625 | 0.09 |
| No log | 9.96 | 166 | 0.9663 | 0.7475 | 0.0575 | 0.09 |
| No log | 10.98 | 183 | 0.9532 | 0.7475 | 0.0625 | 0.09 |
| No log | 12.0 | 200 | 0.9157 | 0.7525 | 0.06 | 0.09 |
| No log | 12.96 | 216 | 0.8894 | 0.77 | 0.06 | 0.09 |
| No log | 13.98 | 233 | 0.9460 | 0.75 | 0.0625 | 0.09 |
| No log | 15.0 | 250 | 1.0019 | 0.745 | 0.0625 | 0.09 |
| No log | 15.96 | 266 | 0.9059 | 0.77 | 0.0625 | 0.0875 |
| No log | 16.98 | 283 | 1.0664 | 0.7325 | 0.06 | 0.0875 |
| No log | 18.0 | 300 | 1.0637 | 0.74 | 0.065 | 0.0875 |
| No log | 18.96 | 316 | 1.0398 | 0.7725 | 0.09 | 0.085 |
| No log | 19.98 | 333 | 1.0745 | 0.775 | 0.06 | 0.0875 |
| No log | 21.0 | 350 | 1.0653 | 0.78 | 0.0625 | 0.0875 |
| No log | 21.96 | 366 | 1.0705 | 0.785 | 0.065 | 0.0875 |
| No log | 22.98 | 383 | 1.1014 | 0.78 | 0.0725 | 0.0875 |
| No log | 24.0 | 400 | 1.1335 | 0.78 | 0.0625 | 0.0875 |
| No log | 24.96 | 416 | 1.1510 | 0.775 | 0.0725 | 0.0875 |
| No log | 25.98 | 433 | 1.1528 | 0.7825 | 0.0675 | 0.0875 |
| No log | 27.0 | 450 | 1.1758 | 0.7825 | 0.0625 | 0.0875 |
| No log | 27.96 | 466 | 1.1836 | 0.785 | 0.07 | 0.0875 |
| No log | 28.98 | 483 | 1.1927 | 0.78 | 0.0675 | 0.0875 |
| 1.6955 | 30.0 | 500 | 1.2061 | 0.7825 | 0.0775 | 0.0875 |
| 1.6955 | 30.96 | 516 | 1.2128 | 0.7775 | 0.065 | 0.0875 |
| 1.6955 | 31.98 | 533 | 1.2172 | 0.7725 | 0.07 | 0.0875 |
| 1.6955 | 33.0 | 550 | 1.2237 | 0.775 | 0.075 | 0.0875 |
| 1.6955 | 33.96 | 566 | 1.2399 | 0.7775 | 0.0625 | 0.0875 |
| 1.6955 | 34.98 | 583 | 1.2590 | 0.78 | 0.065 | 0.0875 |
| 1.6955 | 36.0 | 600 | 1.2586 | 0.7825 | 0.065 | 0.0875 |
| 1.6955 | 36.96 | 616 | 1.2603 | 0.775 | 0.0675 | 0.0875 |
| 1.6955 | 37.98 | 633 | 1.2576 | 0.78 | 0.065 | 0.0875 |
| 1.6955 | 39.0 | 650 | 1.2698 | 0.7775 | 0.075 | 0.0875 |
| 1.6955 | 39.96 | 666 | 1.2775 | 0.7725 | 0.075 | 0.0875 |
| 1.6955 | 40.98 | 683 | 1.2769 | 0.7725 | 0.07 | 0.0875 |
| 1.6955 | 42.0 | 700 | 1.2769 | 0.7725 | 0.0625 | 0.0875 |
| 1.6955 | 42.96 | 716 | 1.2804 | 0.775 | 0.0675 | 0.0875 |
| 1.6955 | 43.98 | 733 | 1.2834 | 0.775 | 0.065 | 0.085 |
| 1.6955 | 45.0 | 750 | 1.2907 | 0.7775 | 0.0675 | 0.0875 |
| 1.6955 | 45.96 | 766 | 1.2968 | 0.7775 | 0.0675 | 0.0875 |
| 1.6955 | 46.98 | 783 | 1.2981 | 0.7775 | 0.065 | 0.0875 |
| 1.6955 | 48.0 | 800 | 1.3017 | 0.7775 | 0.065 | 0.0875 |
| 1.6955 | 48.96 | 816 | 1.3050 | 0.7775 | 0.0675 | 0.09 |
| 1.6955 | 49.98 | 833 | 1.3050 | 0.775 | 0.07 | 0.09 |
| 1.6955 | 51.0 | 850 | 1.3044 | 0.775 | 0.07 | 0.09 |
| 1.6955 | 51.96 | 866 | 1.3057 | 0.775 | 0.0675 | 0.09 |
| 1.6955 | 52.98 | 883 | 1.3072 | 0.7775 | 0.0675 | 0.09 |
| 1.6955 | 54.0 | 900 | 1.3101 | 0.7775 | 0.0675 | 0.09 |
| 1.6955 | 54.96 | 916 | 1.3119 | 0.7775 | 0.065 | 0.09 |
| 1.6955 | 55.98 | 933 | 1.3116 | 0.7775 | 0.065 | 0.09 |
| 1.6955 | 57.0 | 950 | 1.3115 | 0.7775 | 0.065 | 0.09 |
| 1.6955 | 57.6 | 960 | 1.3116 | 0.7775 | 0.065 | 0.09 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Omar95farag/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-10 | Omar95farag | 2024-01-15T09:53:54Z | 93 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-09T22:32:58Z | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-10
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.1413
- Accuracy: 0.7325
- Exit 0 Accuracy: 0.1725
- Exit 1 Accuracy: 0.2175
- Exit 2 Accuracy: 0.6075
- Exit 3 Accuracy: 0.715
- Exit 4 Accuracy: 0.735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.96 | 16 | 16.4817 | 0.17 | 0.0825 | 0.045 | 0.105 | 0.0625 | 0.0625 |
| No log | 1.98 | 33 | 15.9950 | 0.2675 | 0.1 | 0.1325 | 0.195 | 0.1775 | 0.2425 |
| No log | 3.0 | 50 | 14.9811 | 0.475 | 0.1025 | 0.1475 | 0.24 | 0.29 | 0.4425 |
| No log | 3.96 | 66 | 14.0127 | 0.5675 | 0.105 | 0.1425 | 0.27 | 0.3975 | 0.505 |
| No log | 4.98 | 83 | 13.3047 | 0.6075 | 0.125 | 0.1425 | 0.3175 | 0.43 | 0.595 |
| No log | 6.0 | 100 | 12.7573 | 0.6125 | 0.125 | 0.1475 | 0.325 | 0.495 | 0.615 |
| No log | 6.96 | 116 | 12.3656 | 0.645 | 0.1175 | 0.155 | 0.33 | 0.5175 | 0.6375 |
| No log | 7.98 | 133 | 11.9582 | 0.6625 | 0.115 | 0.16 | 0.3525 | 0.5725 | 0.67 |
| No log | 9.0 | 150 | 11.6533 | 0.6825 | 0.1225 | 0.16 | 0.375 | 0.6 | 0.7075 |
| No log | 9.96 | 166 | 11.5143 | 0.685 | 0.1525 | 0.1625 | 0.38 | 0.6 | 0.675 |
| No log | 10.98 | 183 | 11.3152 | 0.6625 | 0.115 | 0.1625 | 0.41 | 0.6225 | 0.6725 |
| No log | 12.0 | 200 | 11.0708 | 0.695 | 0.11 | 0.1625 | 0.425 | 0.6225 | 0.7075 |
| No log | 12.96 | 216 | 11.0412 | 0.6975 | 0.1125 | 0.1575 | 0.4 | 0.645 | 0.685 |
| No log | 13.98 | 233 | 10.8782 | 0.7125 | 0.1425 | 0.165 | 0.4275 | 0.6325 | 0.7075 |
| No log | 15.0 | 250 | 10.7282 | 0.7075 | 0.115 | 0.165 | 0.4225 | 0.65 | 0.7175 |
| No log | 15.96 | 266 | 10.7039 | 0.695 | 0.15 | 0.16 | 0.4375 | 0.6375 | 0.69 |
| No log | 16.98 | 283 | 10.5455 | 0.7125 | 0.13 | 0.165 | 0.4375 | 0.6675 | 0.715 |
| No log | 18.0 | 300 | 10.5214 | 0.7075 | 0.1275 | 0.17 | 0.45 | 0.6825 | 0.7075 |
| No log | 18.96 | 316 | 10.4995 | 0.715 | 0.155 | 0.1725 | 0.4525 | 0.68 | 0.7125 |
| No log | 19.98 | 333 | 10.3224 | 0.725 | 0.1475 | 0.1825 | 0.46 | 0.68 | 0.7225 |
| No log | 21.0 | 350 | 10.4247 | 0.71 | 0.1425 | 0.1825 | 0.4625 | 0.68 | 0.71 |
| No log | 21.96 | 366 | 10.3881 | 0.705 | 0.1375 | 0.1825 | 0.46 | 0.66 | 0.7125 |
| No log | 22.98 | 383 | 10.3065 | 0.715 | 0.1375 | 0.1875 | 0.465 | 0.6925 | 0.7225 |
| No log | 24.0 | 400 | 10.1955 | 0.72 | 0.145 | 0.1875 | 0.4725 | 0.695 | 0.7225 |
| No log | 24.96 | 416 | 10.1607 | 0.72 | 0.165 | 0.19 | 0.4925 | 0.7075 | 0.7175 |
| No log | 25.98 | 433 | 10.2416 | 0.72 | 0.14 | 0.195 | 0.48 | 0.7025 | 0.7275 |
| No log | 27.0 | 450 | 10.1321 | 0.715 | 0.145 | 0.1875 | 0.4925 | 0.7125 | 0.72 |
| No log | 27.96 | 466 | 10.1982 | 0.7275 | 0.145 | 0.1875 | 0.4875 | 0.7075 | 0.73 |
| No log | 28.98 | 483 | 10.2237 | 0.72 | 0.1575 | 0.19 | 0.515 | 0.7 | 0.7225 |
| 10.0174 | 30.0 | 500 | 10.1426 | 0.7175 | 0.1675 | 0.1975 | 0.5275 | 0.7125 | 0.7225 |
| 10.0174 | 30.96 | 516 | 10.1056 | 0.7325 | 0.14 | 0.1975 | 0.515 | 0.715 | 0.7325 |
| 10.0174 | 31.98 | 533 | 10.1616 | 0.7225 | 0.1525 | 0.195 | 0.5275 | 0.7175 | 0.72 |
| 10.0174 | 33.0 | 550 | 10.1053 | 0.7325 | 0.1425 | 0.195 | 0.525 | 0.7125 | 0.7275 |
| 10.0174 | 33.96 | 566 | 10.1581 | 0.7175 | 0.165 | 0.2 | 0.5375 | 0.71 | 0.71 |
| 10.0174 | 34.98 | 583 | 10.0835 | 0.7225 | 0.15 | 0.2025 | 0.5375 | 0.715 | 0.7225 |
| 10.0174 | 36.0 | 600 | 10.1349 | 0.725 | 0.1425 | 0.2 | 0.5375 | 0.7025 | 0.725 |
| 10.0174 | 36.96 | 616 | 10.0424 | 0.7325 | 0.1625 | 0.1975 | 0.545 | 0.7225 | 0.735 |
| 10.0174 | 37.98 | 633 | 10.0692 | 0.73 | 0.155 | 0.195 | 0.5525 | 0.7225 | 0.74 |
| 10.0174 | 39.0 | 650 | 10.0838 | 0.7325 | 0.1625 | 0.1975 | 0.56 | 0.7225 | 0.7375 |
| 10.0174 | 39.96 | 666 | 10.1160 | 0.7275 | 0.1675 | 0.1975 | 0.5575 | 0.7225 | 0.725 |
| 10.0174 | 40.98 | 683 | 10.0971 | 0.735 | 0.1675 | 0.1975 | 0.5625 | 0.7175 | 0.73 |
| 10.0174 | 42.0 | 700 | 10.1207 | 0.73 | 0.165 | 0.2 | 0.5775 | 0.715 | 0.7275 |
| 10.0174 | 42.96 | 716 | 10.1448 | 0.7325 | 0.175 | 0.205 | 0.5775 | 0.7175 | 0.73 |
| 10.0174 | 43.98 | 733 | 10.0945 | 0.735 | 0.1675 | 0.21 | 0.5775 | 0.7175 | 0.735 |
| 10.0174 | 45.0 | 750 | 10.1789 | 0.73 | 0.17 | 0.2175 | 0.5775 | 0.7125 | 0.7275 |
| 10.0174 | 45.96 | 766 | 10.1274 | 0.735 | 0.175 | 0.215 | 0.5875 | 0.7075 | 0.735 |
| 10.0174 | 46.98 | 783 | 10.1656 | 0.735 | 0.155 | 0.2125 | 0.5875 | 0.7125 | 0.7375 |
| 10.0174 | 48.0 | 800 | 10.1557 | 0.7275 | 0.16 | 0.215 | 0.6025 | 0.715 | 0.7325 |
| 10.0174 | 48.96 | 816 | 10.1436 | 0.74 | 0.165 | 0.215 | 0.6025 | 0.7175 | 0.735 |
| 10.0174 | 49.98 | 833 | 10.1474 | 0.7325 | 0.1625 | 0.215 | 0.6 | 0.715 | 0.735 |
| 10.0174 | 51.0 | 850 | 10.1647 | 0.7275 | 0.1725 | 0.2175 | 0.605 | 0.7175 | 0.7325 |
| 10.0174 | 51.96 | 866 | 10.1375 | 0.73 | 0.1775 | 0.215 | 0.6025 | 0.7125 | 0.7375 |
| 10.0174 | 52.98 | 883 | 10.1458 | 0.7325 | 0.1675 | 0.2175 | 0.605 | 0.7125 | 0.7375 |
| 10.0174 | 54.0 | 900 | 10.1527 | 0.7275 | 0.175 | 0.22 | 0.6025 | 0.715 | 0.73 |
| 10.0174 | 54.96 | 916 | 10.1349 | 0.7325 | 0.175 | 0.2175 | 0.6025 | 0.72 | 0.735 |
| 10.0174 | 55.98 | 933 | 10.1376 | 0.7325 | 0.175 | 0.22 | 0.6025 | 0.72 | 0.7325 |
| 10.0174 | 57.0 | 950 | 10.1413 | 0.7325 | 0.1725 | 0.2175 | 0.6075 | 0.715 | 0.7325 |
| 10.0174 | 57.6 | 960 | 10.1413 | 0.7325 | 0.1725 | 0.2175 | 0.6075 | 0.715 | 0.735 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Ricardo-H/swin-tiny-patch4-window7-224-finetuned-eurosat | Ricardo-H | 2024-01-15T09:50:51Z | 201 | 0 | transformers | [
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-01-15T09:32:36Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-CIFAR10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1229
- Accuracy: 0.9618
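The card does not include a usage example; below is a minimal, hypothetical sketch of running inference with the fine-tuned checkpoint via the `transformers` pipeline API (the image path is a placeholder):

```python
# Hypothetical usage sketch -- classify an image with the fine-tuned Swin checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Ricardo-H/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("example.jpg"))  # placeholder path or URL to an input image
```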
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5796 | 1.0 | 176 | 0.2204 | 0.9462 |
| 0.3995 | 2.0 | 352 | 0.1403 | 0.9582 |
| 0.3781 | 3.0 | 528 | 0.1229 | 0.9618 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Shreyas0706/Lama_Fact | Shreyas0706 | 2024-01-15T09:50:36Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:other",
"region:us"
] | null | 2024-01-12T06:56:38Z | ---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: train_2024-01-11-03-12-55
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_2024-01-11-03-12-55
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the Legal_0706 dataset.
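Because this repository ships a LoRA adapter (per the `peft` and `lora` tags) rather than full model weights, it would typically be applied on top of the base model at load time. A minimal, hypothetical sketch (the prompt is a placeholder):

```python
# Hypothetical sketch -- load the LoRA adapter on top of the base Mistral model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "Shreyas0706/Lama_Fact")  # this repo's adapter

inputs = tokenizer("Summarize the key obligations in this clause: ...", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```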
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
Nishant2609/factory | Nishant2609 | 2024-01-15T09:31:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b",
"base_model:adapter:unsloth/mistral-7b",
"region:us"
] | null | 2024-01-12T08:17:39Z | ---
library_name: peft
base_model: unsloth/mistral-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
vda1708/gsm8k-llama2-13b | vda1708 | 2024-01-15T09:29:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"region:us"
] | null | 2024-01-15T09:28:50Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
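For illustration, a minimal sketch of the same quantization settings expressed as a `transformers.BitsAndBytesConfig` when loading the adapter (the base model below is an assumption inferred from the repository name; the card does not state it):

```python
# Hypothetical sketch -- reproduce the reported bitsandbytes settings in code.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",       # assumed base model, not stated in this card
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "vda1708/gsm8k-llama2-13b")
```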
### Framework versions
- PEFT 0.4.0
|
hxxris/haaris-final-transformer | hxxris | 2024-01-15T09:29:26Z | 146 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-01-15T09:14:22Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: haaris-final-transformer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# haaris-final-transformer
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.0354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.73 | 2 | 2.6456 | 0.0442 |
| No log | 1.45 | 4 | nan | 0.0354 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jaemin12/xlm-roberta-base-finetuned-panx-it | jaemin12 | 2024-01-15T09:29:24Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-01-15T09:17:56Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: validation
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8120423108218063
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2361
- F1: 0.8120
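The card lists no usage example; a minimal, hypothetical sketch of running Italian named-entity recognition with the fine-tuned checkpoint (the sample sentence is illustrative):

```python
# Hypothetical usage sketch -- token classification with the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jaemin12/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Leonardo da Vinci nacque a Vinci, in Toscana."))
```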
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8019 | 1.0 | 70 | 0.3139 | 0.7286 |
| 0.2941 | 2.0 | 140 | 0.2641 | 0.7975 |
| 0.1851 | 3.0 | 210 | 0.2361 | 0.8120 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.7.0+cu110
- Datasets 2.13.2
- Tokenizers 0.13.3
|
Swisslex/Mixtral-Orca-v0.1 | Swisslex | 2024-01-15T09:29:01Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"safetensors",
"mixtral",
"text-generation",
"en",
"de",
"fr",
"it",
"es",
"dataset:Open-Orca/SlimOrca",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-01-15T08:37:12Z | ---
license: apache-2.0
datasets:
- Open-Orca/SlimOrca
- argilla/distilabel-intel-orca-dpo-pairs
language:
- en
- de
- fr
- it
- es
library_name: adapter-transformers
pipeline_tag: text-generation
---
# Model Card for Model Swisslex/Mixtral-Orca-v0.1
## Model Details
### Model Description
Finetuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using SFT and DPO.
- **Developed by:** Swisslex
- **Language(s) (NLP):** English, German, French, Italian, Spanish
- **License:** apache-2.0
- **Finetuned from model [optional]:** [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
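A minimal usage sketch, not part of the original card, assuming the safetensors weights in this repository load as a standard Mixtral checkpoint with `transformers` (a model of this size generally needs multiple GPUs or quantization):

```python
# Hypothetical usage sketch -- plain text generation with the fine-tuned model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Swisslex/Mixtral-Orca-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Explain the difference between SFT and DPO in two sentences.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```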
|
danielhanchen/test_lora_new | danielhanchen | 2024-01-15T09:19:16Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b",
"base_model:adapter:unsloth/mistral-7b",
"region:us"
] | null | 2024-01-15T08:48:48Z | ---
library_name: peft
base_model: unsloth/mistral-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
sethuiyer/MedleyMD-GGUF | sethuiyer | 2024-01-15T09:18:34Z | 7 | 1 | null | [
"gguf",
"medical",
"mergekit",
"GGUF",
"en",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-15T09:06:41Z | ---
license: cc-by-nc-nd-4.0
tags:
- medical
- mergekit
- GGUF
language:
- en
---
# MedleyMD

MedleyMD is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [sethuiyer/Dr_Samantha_7b_mistral](https://huggingface.co/sethuiyer/Dr_Samantha_7b_mistral)
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
|
jaemin12/xlm-roberta-base-finetuned-panx-fr | jaemin12 | 2024-01-15T09:16:57Z | 90 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-01-15T08:43:20Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8329966329966331
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2704
- F1: 0.8330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5892 | 1.0 | 191 | 0.3346 | 0.7926 |
| 0.2673 | 2.0 | 382 | 0.2807 | 0.8291 |
| 0.1742 | 3.0 | 573 | 0.2704 | 0.8330 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.7.0+cu110
- Datasets 2.13.2
- Tokenizers 0.13.3
|
Grigorij/mistral_instruct_generation | Grigorij | 2024-01-15T09:12:31Z | 1 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2023-12-26T11:17:34Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_instruct_generation
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.0261 | 0.4 | 10 | 2.8697 |
| 2.2616 | 0.8 | 20 | 1.6009 |
| 1.1871 | 1.2 | 30 | 0.9599 |
| 0.8522 | 1.6 | 40 | 0.7228 |
| 0.7375 | 2.0 | 50 | 0.6601 |
| 0.5916 | 2.4 | 60 | 0.6184 |
| 0.6219 | 2.8 | 70 | 0.5957 |
| 0.5025 | 3.2 | 80 | 0.5980 |
| 0.5148 | 3.6 | 90 | 0.5849 |
| 0.5502 | 4.0 | 100 | 0.5639 |
| 0.4414 | 4.4 | 110 | 0.5875 |
| 0.4423 | 4.8 | 120 | 0.5847 |
| 0.43 | 5.2 | 130 | 0.5902 |
| 0.3843 | 5.6 | 140 | 0.6223 |
| 0.4173 | 6.0 | 150 | 0.5788 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0 |
achimvp/q-FrozenLake-v1-4x4-noSlippery | achimvp | 2024-01-15T09:11:00Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-11T13:15:23Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="achimvp/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
shadowml/SusBagle-34B | shadowml | 2024-01-15T09:03:56Z | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"jondurbin/bagel-dpo-34b-v0.2",
"SUSTech/SUS-Chat-34B",
"license:apache-2.0",
"region:us"
] | null | 2024-01-15T08:58:38Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- jondurbin/bagel-dpo-34b-v0.2
- SUSTech/SUS-Chat-34B
---
# SusBagle-34B
SusBagle-34B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2)
* [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: jondurbin/bagel-dpo-34b-v0.2
layer_range: [0, 60]
- model: SUSTech/SUS-Chat-34B
layer_range: [0, 60]
merge_method: slerp
base_model: jondurbin/bagel-dpo-34b-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "shadowml/SusBagle-34B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mikewatson/gpt2-wikitext2 | mikewatson | 2024-01-15T08:55:12Z | 180 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-15T08:54:59Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 7.3791
- eval_runtime: 10.5986
- eval_samples_per_second: 182.477
- eval_steps_per_second: 22.833
- epoch: 0.11
- step: 241
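No usage example is included in the card; a minimal, hypothetical sketch of sampling from the fine-tuned checkpoint with the `transformers` pipeline API (the prompt is a placeholder):

```python
# Hypothetical usage sketch -- text generation with the fine-tuned GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="mikewatson/gpt2-wikitext2")
print(generator("The history of natural language processing", max_new_tokens=40)[0]["generated_text"])
```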
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
kalomaze/Kunoichi-DPO-v2-7B-GGUF | kalomaze | 2024-01-15T08:53:07Z | 10 | 4 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-15T07:46:34Z | ---
license: apache-2.0
---

|
s3nh/Venus-120b-v1.2-GGUF | s3nh | 2024-01-15T08:52:13Z | 0 | 0 | transformers | [
"transformers",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-15T07:46:35Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.2).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
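As an illustration of the "Easy to use" point above, a minimal, hypothetical sketch of loading one of these GGUF files with `llama-cpp-python` (the filename is a placeholder for whichever quantized file is downloaded from this repository):

```python
# Hypothetical sketch -- load a GGUF file and run a short completion.
from llama_cpp import Llama

llm = Llama(model_path="venus-120b-v1.2.Q4_K_M.gguf", n_ctx=2048)  # placeholder filename
out = llm("Tell me a story about what quantization is and what we need to build it.", max_tokens=128)
print(out["choices"][0]["text"])
```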
### Inference
User: Tell me a story about what quantization is and what we need to build.
Once upon a time in
# Original model card
|
xyz2zyx/Reinforce-CartPole-v1 | xyz2zyx | 2024-01-15T08:43:22Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-15T08:43:12Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Intel/neural-chat-7b-v1-1 | Intel | 2024-01-15T08:40:04Z | 38 | 23 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"LLMs",
"Intel",
"custom_code",
"en",
"dataset:Intel/neural-chat-dataset-v1-1",
"dataset:allenai/real-toxicity-prompts",
"base_model:mosaicml/mpt-7b",
"base_model:finetune:mosaicml/mpt-7b",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-06T05:20:07Z | ---
license: apache-2.0
tags:
- LLMs
- Intel
base_model: mosaicml/mpt-7b
datasets:
- Intel/neural-chat-dataset-v1-1
- allenai/real-toxicity-prompts
language:
- en
model-index:
- name: neural-chat-7b-v1-1
results:
- task:
type: Large Language Model
name: Large Language Model
dataset:
type: Intel/neural-chat-dataset-v1-1
name: Intel/neural-chat-dataset-v1-1
metrics:
- type: Average
value: 51.41
name: Average
verified: true
- type: ARC (25-shot)
value: 50.09
name: ARC (25-shot)
verified: true
- type: HellaSwag (10-shot)
value: 76.69
name: HellaSwag (10-shot)
verified: true
- type: MMLU (5-shot)
value: 38.79
name: MMLU (5-shot)
verified: true
- type: TruthfulQA (0-shot)
value: 40.07
name: TruthfulQA (0-shot)
verified: true
- type: Toxicity Ratio
value: 0.0264
name: Toxicity Ratio
---
## Model Details: Neural-Chat-v1-1
This model is a fine-tuned model for chat based on [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b) with a max sequence length of 2048 on the dataset [Intel/neural-chat-dataset-v1-1](https://huggingface.co/datasets/Intel/neural-chat-dataset-v1-1), which is a compilation of open-source datasets.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6297f0e30bd2f58c647abb1d/fWCqhGKZQKNuLmvj093rB.jpeg" width="500"/>
Prompt of "an image of a brain that has to do with LLMs" from https://clipdrop.co/stable-diffusion-turbo.
</p>
| Model Detail | Description |
| ----------- | ----------- |
| Model Authors | Intel. The NeuralChat team with members from DCAI/AISE/AIPT. Core team members: Kaokao Lv, Liang Lv, Chang Wang, Wenxin Zhang, Xuhui Ren, and Haihao Shen. |
| Date | July, 2023 |
| Version | v1-1 |
| Type | 7B Large Language Model |
| Paper or Other Resources | Base model: [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b); Dataset: [Intel/neural-chat-dataset-v1-1](https://huggingface.co/datasets/Intel/neural-chat-dataset-v1-1) |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/neural-chat-7b-v1-1/discussions) and [Intel DevHub Discord](https://discord.gg/rv2Gp55UJQ)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the fine-tuned model for several language-related tasks. Check out the [LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) to see this model's performance relative to other LLMs. |
| Primary intended users | Anyone doing inference on language-related tasks. |
| Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.|
## Training Procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 3.0
## Use The Model
### Loading the model with Transformers
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'Intel/neural-chat-7b-v1-1',
trust_remote_code=True
)
```
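A hedged follow-up sketch (not from the original card), continuing from the snippet above; the prompt and generation settings are illustrative assumptions:
```python
# Hedged sketch: generate text with the `model` loaded above (prompt and settings are assumptions).
tokenizer = transformers.AutoTokenizer.from_pretrained('Intel/neural-chat-7b-v1-1', trust_remote_code=True)

prompt = "What are the three primary colors?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```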
### Inference with INT8
Follow the instructions at the [GitHub repository](https://github.com/intel/intel-extension-for-transformers/tree/main/examples/huggingface/pytorch/text-generation/quantization) to install the necessary dependencies for quantization to INT8. Use the below command to quantize the model using [Intel Neural Compressor](https://github.com/intel/neural-compressor) to accelerate inference.
```bash
python run_generation.py \
--model Intel/neural-chat-7b-v1-1 \
--quantize \
--sq \
--alpha 0.95 \
--ipex
```
| Factors | Description |
| ----------- | ----------- |
| Groups | More details about the dataset can be found at [Intel/neural-chat-dataset-v1-1](https://huggingface.co/datasets/Intel/neural-chat-dataset-v1-1). |
| Instrumentation | The performance of the model can vary depending on the inputs to the model. In this case, the prompts provided can drastically change the prediction of the language model. |
| Environment | - |
| Card Prompts | Model deployment on varying hardware and software will change model performance. |
| Metrics | Description |
| ----------- | ----------- |
| Model performance measures | The model metrics are: ARC, HellaSwag, MMLU, and TruthfulQA. Bias was also evaluated using the Toxicity Ratio (see Quantitative Analyses below). The model's performance was evaluated against other LLMs according to the standards at the time the model was published. |
| Decision thresholds | No decision thresholds were used. |
| Approaches to uncertainty and variability | - |
## Training Data
The training data are from [Intel/neural-chat-dataset-v1-1](https://huggingface.co/datasets/Intel/neural-chat-dataset-v1-1). The total number of instruction samples is about 1.1M, and the number of tokens is 326M. This dataset is composed of several other datasets:
| Type | Language | Dataset | Number |
|--| ---- |--------|----|
| HC3 | en | [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) | 24K |
| dolly | en | [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) | 15K |
| alpaca-zh | zh | [tigerbot-alpaca-zh-0.5m](https://huggingface.co/datasets/TigerResearch/tigerbot-alpaca-zh-0.5m) | 500K |
| alpaca-en | en | [TigerResearch/tigerbot-alpaca-en-50k](https://huggingface.co/datasets/TigerResearch/tigerbot-alpaca-en-50k) | 50K |
| math | en | [tigerbot-gsm-8k-en](https://huggingface.co/datasets/TigerResearch/tigerbot-gsm-8k-en) | 8K |
| general | en | [tigerbot-stackexchange-qa-en-0.5m](https://huggingface.co/datasets/TigerResearch/tigerbot-stackexchange-qa-en-0.5m) | 500K |
Note: There is no contamination from the GSM8k test set, as this is not a part of this dataset.
## Quantitative Analyses
### LLM metrics
We used the same evaluation metrics as [HuggingFaceH4/open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), which uses [Eleuther AI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/master), a unified framework to test generative language models on a large number of different evaluation tasks.
| Model | Average ⬆️| ARC (25-s) ⬆️ | HellaSwag (10-s) ⬆️ | MMLU (5-s) ⬆️| TruthfulQA (MC) (0-s) ⬆️ |
| --- | --- | --- | --- | --- | --- |
|[mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b)| 47.4 | 47.61 | 77.56 | 31 | 33.43 |
| [mosaicml/mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat) | **49.95** | 46.5 | 75.55 | 37.60 | 40.17 |
| [Intel/neural-chat-7b-v1-1](https://huggingface.co/Intel/neural-chat-7b-v1-1) | **51.41** | 50.09 | 76.69 | 38.79 | 40.07 |
### Bias evaluation
Following the blog [evaluating-llm-bias](https://huggingface.co/blog/evaluating-llm-bias), we randomly selected 10,000 samples from [allenai/real-toxicity-prompts](https://huggingface.co/datasets/allenai/real-toxicity-prompts) to evaluate toxicity bias.
| Model | Toxicity Ratio ↓|
| --- | --- |
|[mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b)| 0.027 |
| [Intel/neural-chat-7b-v1-1](https://huggingface.co/Intel/neural-chat-7b-v1-1) | 0.0264 |
### Examples
- code generation

- summarization

- trip

## Ethical Considerations and Limitations
Neural-chat-7b-v1-1 can produce factually incorrect output, and should not be relied on to produce factually accurate information. neural-chat-7b-v1-1 was trained on various instruction/chat datasets based on [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b). Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are some useful GitHub repository links to learn more about Intel's open-source AI software:
* Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
* Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)
* Intel Extension for PyTorch [link](https://github.com/intel/intel-extension-for-pytorch)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
|
nm-testing/MiniChat-2-3B-pruned50-ds | nm-testing | 2024-01-15T08:37:56Z | 2 | 0 | transformers | [
"transformers",
"onnx",
"llama",
"text-generation",
"deepsparse",
"arxiv:2301.00774",
"base_model:GeneZC/MiniChat-2-3B",
"base_model:quantized:GeneZC/MiniChat-2-3B",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-01-15T08:05:29Z | ---
base_model: GeneZC/MiniChat-2-3B
inference: false
model_type: llama
prompt_template: |
<s> [|User|]\n
{prompt}</s>
[|Assistant|]\n
quantized_by: mwitiderrick
tags:
- deepsparse
---
# MiniChat-2-3B - DeepSparse
This repo contains model files for [MiniChat-2-3B](https://huggingface.co/GeneZC/MiniChat-2-3B) optimized for [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models.
This model was quantized and pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
## Inference
Install [DeepSparse LLM](https://github.com/neuralmagic/deepsparse) for fast inference on CPUs:
```bash
pip install deepsparse-nightly[llm]
```
Run in a [Python pipeline](https://github.com/neuralmagic/deepsparse/blob/main/docs/llms/text-generation-pipeline.md):
```python
from deepsparse import TextGeneration
prompt = "How to get in a good university?"
formatted_prompt = f"<s> [|User|]\n{prompt}</s>[|Assistant|]\n"
model = TextGeneration(model_path="hf:nm-testing/MiniChat-2-3B-pruned50-ds")
print(model(formatted_prompt, max_new_tokens=500).generations[0].text)
"""
Getting into a good university is a complex process that involves several steps. However, here are some key factors to consider:
1. Academic performance: Your grades, test scores, and overall academic achievements are essential in demonstrating your academic abilities. Strive to maintain a high GPA and achieve strong scores in standardized tests like the SAT, ACT, or AP exams.
2. Academic preparation: Develop a strong foundation in various subjects, including English, math, science, and foreign languages. This will help you succeed academically and demonstrate your readiness for college-level courses.
3. Extracurricular activities: Participate in extracurricular activities such as clubs, sports teams, volunteering, or leadership roles. These activities can help you develop valuable skills, demonstrate your leadership abilities, and showcase your interests outside the classroom.
4. Academic preparation for college-level courses: Research and understand the curriculum of the universities you are interested in. Familiarize yourself with the coursework, coursework requirements, and any potential prerequisites.
5. Personal qualities and extracurricular activities: Showcase your unique qualities and extracurricular activities that demonstrate your leadership, teamwork, and problem-solving abilities. Universities value students who are well-rounded and have a diverse set of skills.
6. Application process: Follow the university's application process, which may include submitting an application form, paying the application fee, and submitting any required documents.
7. Interviews and assessments: If you are invited to an interview, prepare for it by researching the university, its campus, and its mission. Be confident, articulate, and demonstrate your enthusiasm for the university.
8. Post-admission process: After being accepted, follow the university's post-admission process, which may include registering for classes, paying tuition fees, and obtaining student housing.
9. Networking and mentorship: Connect with professors, professors, and fellow students to gain insights into the university culture and gain a deeper understanding of the college environment.
10. Financial support: Research financial aid options and scholarships available to help cover tuition fees and living
"""
```
## Prompt template
```
<s> [|User|]\n
{prompt}
</s>[|Assistant|]\n
```
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.
```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py GeneZC/MiniChat-2-3B open_platypus --recipe recipe.yaml --save True
python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment
cp deployment/model.onnx deployment/model-orig.onnx
```
Run this kv-cache injection to speed up the model at inference by caching the Key and Value states:
```python
import os
import onnx
from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector
input_file = "deployment/model-orig.onnx"
output_file = "deployment/model.onnx"
model = onnx.load(input_file, load_external_data=False)
model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)
onnx.save(model, output_file)
print(f"Modified model saved to: {output_file}")
```
Follow the instructions on our [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) page for a step-by-step guide for performing one-shot quantization of large language models.
## Slack
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ) |
smutuvi/whisper-small-sw-common-voice-ndizi-158-NF4 | smutuvi | 2024-01-15T08:37:00Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:smutuvi/whisper-small-sw-common-voice",
"base_model:adapter:smutuvi/whisper-small-sw-common-voice",
"license:apache-2.0",
"region:us"
] | null | 2024-01-15T08:36:58Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: smutuvi/whisper-small-sw-common-voice
model-index:
- name: whisper-small-sw-common-voice-ndizi-158-NF4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-sw-common-voice-ndizi-158-NF4
This model is a fine-tuned version of [smutuvi/whisper-small-sw-common-voice](https://huggingface.co/smutuvi/whisper-small-sw-common-voice) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 1.9783 |
| No log | 2.0 | 18 | 1.9757 |
| 1.8194 | 3.0 | 27 | 1.9710 |
| 1.8194 | 4.0 | 36 | 1.9631 |
| 1.8194 | 5.0 | 45 | 1.9543 |
| 1.7712 | 6.0 | 54 | 1.9437 |
| 1.7712 | 7.0 | 63 | 1.9337 |
| 1.7712 | 8.0 | 72 | 1.9244 |
| 1.7377 | 9.0 | 81 | 1.9141 |
| 1.7377 | 10.0 | 90 | 1.9043 |
| 1.7377 | 11.0 | 99 | 1.8952 |
| 1.7377 | 12.0 | 108 | 1.8863 |
| 1.7377 | 13.0 | 117 | 1.8762 |
| 1.6796 | 14.0 | 126 | 1.8684 |
| 1.6796 | 15.0 | 135 | 1.8588 |
| 1.6796 | 16.0 | 144 | 1.8501 |
| 1.6217 | 17.0 | 153 | 1.8414 |
| 1.6217 | 18.0 | 162 | 1.8325 |
| 1.6217 | 19.0 | 171 | 1.8253 |
| 1.6036 | 20.0 | 180 | 1.8169 |
| 1.6036 | 21.0 | 189 | 1.8097 |
| 1.6036 | 22.0 | 198 | 1.8017 |
| 1.6129 | 23.0 | 207 | 1.7949 |
| 1.6129 | 24.0 | 216 | 1.7879 |
| 1.5607 | 25.0 | 225 | 1.7817 |
| 1.5607 | 26.0 | 234 | 1.7760 |
| 1.5607 | 27.0 | 243 | 1.7699 |
| 1.5305 | 28.0 | 252 | 1.7637 |
| 1.5305 | 29.0 | 261 | 1.7588 |
| 1.5305 | 30.0 | 270 | 1.7537 |
| 1.5261 | 31.0 | 279 | 1.7485 |
| 1.5261 | 32.0 | 288 | 1.7444 |
| 1.5261 | 33.0 | 297 | 1.7400 |
| 1.5331 | 34.0 | 306 | 1.7360 |
| 1.5331 | 35.0 | 315 | 1.7322 |
| 1.5331 | 36.0 | 324 | 1.7287 |
| 1.5131 | 37.0 | 333 | 1.7250 |
| 1.5131 | 38.0 | 342 | 1.7216 |
| 1.4883 | 39.0 | 351 | 1.7183 |
| 1.4883 | 40.0 | 360 | 1.7151 |
| 1.4883 | 41.0 | 369 | 1.7119 |
| 1.4551 | 42.0 | 378 | 1.7092 |
| 1.4551 | 43.0 | 387 | 1.7063 |
| 1.4551 | 44.0 | 396 | 1.7042 |
| 1.47 | 45.0 | 405 | 1.7018 |
| 1.47 | 46.0 | 414 | 1.6988 |
| 1.47 | 47.0 | 423 | 1.6967 |
| 1.4422 | 48.0 | 432 | 1.6942 |
| 1.4422 | 49.0 | 441 | 1.6922 |
| 1.4432 | 50.0 | 450 | 1.6904 |
| 1.4432 | 51.0 | 459 | 1.6886 |
| 1.4432 | 52.0 | 468 | 1.6869 |
| 1.4108 | 53.0 | 477 | 1.6849 |
| 1.4108 | 54.0 | 486 | 1.6828 |
| 1.4108 | 55.0 | 495 | 1.6808 |
| 1.4377 | 56.0 | 504 | 1.6797 |
| 1.4377 | 57.0 | 513 | 1.6777 |
| 1.4377 | 58.0 | 522 | 1.6770 |
| 1.4281 | 59.0 | 531 | 1.6748 |
| 1.4281 | 60.0 | 540 | 1.6732 |
| 1.4281 | 61.0 | 549 | 1.6718 |
| 1.374 | 62.0 | 558 | 1.6704 |
| 1.374 | 63.0 | 567 | 1.6695 |
| 1.3947 | 64.0 | 576 | 1.6678 |
| 1.3947 | 65.0 | 585 | 1.6667 |
| 1.3947 | 66.0 | 594 | 1.6656 |
| 1.3843 | 67.0 | 603 | 1.6650 |
| 1.3843 | 68.0 | 612 | 1.6638 |
| 1.3843 | 69.0 | 621 | 1.6620 |
| 1.385 | 70.0 | 630 | 1.6614 |
| 1.385 | 71.0 | 639 | 1.6608 |
| 1.385 | 72.0 | 648 | 1.6592 |
| 1.368 | 73.0 | 657 | 1.6586 |
| 1.368 | 74.0 | 666 | 1.6576 |
| 1.3777 | 75.0 | 675 | 1.6574 |
| 1.3777 | 76.0 | 684 | 1.6565 |
| 1.3777 | 77.0 | 693 | 1.6556 |
| 1.3837 | 78.0 | 702 | 1.6553 |
| 1.3837 | 79.0 | 711 | 1.6547 |
| 1.3837 | 80.0 | 720 | 1.6544 |
| 1.3577 | 81.0 | 729 | 1.6529 |
| 1.3577 | 82.0 | 738 | 1.6523 |
| 1.3577 | 83.0 | 747 | 1.6515 |
| 1.3961 | 84.0 | 756 | 1.6513 |
| 1.3961 | 85.0 | 765 | 1.6512 |
| 1.3961 | 86.0 | 774 | 1.6505 |
| 1.3292 | 87.0 | 783 | 1.6497 |
| 1.3292 | 88.0 | 792 | 1.6495 |
| 1.3578 | 89.0 | 801 | 1.6496 |
| 1.3578 | 90.0 | 810 | 1.6495 |
| 1.3578 | 91.0 | 819 | 1.6488 |
| 1.3352 | 92.0 | 828 | 1.6486 |
| 1.3352 | 93.0 | 837 | 1.6492 |
| 1.3352 | 94.0 | 846 | 1.6487 |
| 1.3873 | 95.0 | 855 | 1.6485 |
| 1.3873 | 96.0 | 864 | 1.6483 |
| 1.3873 | 97.0 | 873 | 1.6481 |
| 1.3391 | 98.0 | 882 | 1.6483 |
| 1.3391 | 99.0 | 891 | 1.6482 |
| 1.3569 | 100.0 | 900 | 1.6482 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
Romanos575/Yuzu | Romanos575 | 2024-01-15T08:36:35Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-15T08:25:53Z | ---
license: creativeml-openrail-m
---
Yuzu: all rights go to Ikena (https://civitai.com/models/67120) |
alexrods/q-Taxi-V3 | alexrods | 2024-01-15T08:28:09Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-15T08:28:08Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # `load_from_hub` is defined in the course notebooks; see the sketch below

model = load_from_hub(repo_id="alexrods/q-Taxi-V3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
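`load_from_hub` is defined in the Deep RL Course notebooks rather than shipped in a published package; a minimal sketch of that helper, assuming the model is stored as a pickled dictionary (as the `q-learning.pkl` filename suggests):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict (q-table, env_id, hyperparameters) from the Hub and unpickle it.
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```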
|
jaemin12/xlm-roberta-base-finetuned-panx-de-fr | jaemin12 | 2024-01-15T08:22:19Z | 78 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-01-12T12:21:36Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1620
- F1: 0.8611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2893 | 1.0 | 715 | 0.1785 | 0.8313 |
| 0.1466 | 2.0 | 1430 | 0.1559 | 0.8540 |
| 0.0948 | 3.0 | 2145 | 0.1620 | 0.8611 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.7.0+cu110
- Datasets 2.13.2
- Tokenizers 0.13.3
|
hayate3140/hayate3140butterfly | hayate3140 | 2024-01-15T08:18:45Z | 49 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-01-15T08:18:34Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('hayate3140/hayate3140butterfly')
image = pipeline().images[0]
image
```
|
ayousanz/nekomata-14b-gozaru | ayousanz | 2024-01-15T08:18:17Z | 5 | 0 | peft | [
"peft",
"safetensors",
"base_model:rinna/nekomata-14b-instruction",
"base_model:adapter:rinna/nekomata-14b-instruction",
"region:us"
] | null | 2024-01-15T08:09:15Z | ---
library_name: peft
base_model: rinna/nekomata-14b-instruction
---
# Overview
A model obtained by fine-tuning [rinna/nekomata-14b-instruction](https://huggingface.co/rinna/nekomata-14b-instruction) on [bbz662bbz/databricks-dolly-15k-ja-gozaru](https://huggingface.co/datasets/bbz662bbz/databricks-dolly-15k-ja-gozaru).
# Training parameters
```python
# Run training
!python qlora.py \
--model_name rinna/nekomata-14b-instruction \
--output_dir "./output/rinna-nekomata-14b-instruction-gonaru_peft" \
--dataset "alpaca" \
--max_steps 1000 \
--use_auth \
--logging_steps 10 \
--save_strategy steps \
--data_seed 42 \
--save_steps 100 \
--save_total_limit 40 \
--max_new_tokens 100 \
--dataloader_num_workers 1 \
    --group_by_length \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--lora_r 64 \
--lora_alpha 16 \
--lora_modules all \
--double_quant \
--quant_type nf4 \
--bf16 \
--bits 4 \
--warmup_ratio 0.03 \
--lr_scheduler_type constant \
--gradient_checkpointing \
--source_max_len 16 \
--target_max_len 512 \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 16 \
--eval_steps 187 \
--learning_rate 0.0002 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--lora_dropout 0.1 \
--weight_decay 0.0 \
--seed 0 \
--load_in_4bit \
--use_peft \
--batch_size 4 \
--gradient_accumulation_steps 2 \
--trust_remote_code=True
```
# How to run (4-bit)
## Inference
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(
"rinna/nekomata-14b-instruction"
)
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
"rinna/nekomata-14b-instruction",
quantization_config=bnb_config,
device_map={"":0}
)
# Load the LoRA adapter
model = PeftModel.from_pretrained(
model,
"./output/rinna-nekomata-14b-instruction-gonaru_peft/checkpoint-1000/adapter_model/",
device_map={"":0}
)
model.eval()
# Prepare the prompt
instruction = "次の回答に正しく回答してください"
input = "まどマギで一番可愛いキャラはなんですか?"
prompt = f"""
以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。
### 指示:
{instruction}
### 入力:
{input}
### 応答:
"""
# Run inference
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
with torch.no_grad():
outputs = model.generate(**inputs, max_new_tokens=300,temperature=0.3,do_sample=True,pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Response
```python
以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。
### 指示:
次の回答に正しく回答してください
### 入力:
まどマギで一番可愛いキャラはなんですか?
### 応答:
マミさんでござる。
```
### Framework versions
- PEFT 0.7.2.dev0 |
jlvdoorn/whisper-base-atcosim | jlvdoorn | 2024-01-15T08:16:26Z | 92 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"doi:10.57967/hf/1620",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-01-13T12:30:23Z | ---
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-base-atcosim
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-atcosim
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0620
- Wer: 2.9820
## Model description
More information needed
## Intended uses & limitations
More information needed
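The card does not yet include usage instructions; as a hedged sketch (an assumption, not part of the original card), the fine-tuned checkpoint should load with the standard transformers ASR pipeline:
```python
# Hedged sketch: transcribing an audio file with the fine-tuned checkpoint.
# The audio path is a placeholder; 16 kHz mono audio is assumed, as with other Whisper models.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jlvdoorn/whisper-base-atcosim")
result = asr("path/to/atc_recording.wav")
print(result["text"])
```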
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.152 | 8.33 | 500 | 0.0522 | 2.5282 |
| 0.001 | 16.67 | 1000 | 0.0539 | 3.0608 |
| 0.0003 | 25.0 | 1500 | 0.0556 | 3.0237 |
| 0.0002 | 33.33 | 2000 | 0.0567 | 3.0237 |
| 0.0001 | 41.67 | 2500 | 0.0579 | 3.0144 |
| 0.0001 | 50.0 | 3000 | 0.0588 | 2.9959 |
| 0.0001 | 58.33 | 3500 | 0.0597 | 3.0052 |
| 0.0001 | 66.67 | 4000 | 0.0604 | 3.0098 |
| 0.0 | 75.0 | 4500 | 0.0610 | 2.9867 |
| 0.0 | 83.33 | 5000 | 0.0615 | 2.9867 |
| 0.0 | 91.67 | 5500 | 0.0619 | 2.9774 |
| 0.0 | 100.0 | 6000 | 0.0620 | 2.9820 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mrchaos88/openllama-3b-peft-squad_v2 | mrchaos88 | 2024-01-15T08:12:36Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:adapter:openlm-research/open_llama_3b_v2",
"region:us"
] | null | 2024-01-15T06:41:08Z | ---
library_name: peft
base_model: openlm-research/open_llama_3b_v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
dolo650/Mistral-7B-lora-yahma-alpaca-unsloth-bnb | dolo650 | 2024-01-15T08:11:29Z | 4 | 0 | peft | [
"peft",
"safetensors",
"dataset:yahma/alpaca-cleaned",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b",
"base_model:adapter:unsloth/mistral-7b",
"license:apache-2.0",
"region:us"
] | null | 2024-01-15T07:42:37Z | ---
library_name: peft
base_model: unsloth/mistral-7b
license: apache-2.0
datasets:
- yahma/alpaca-cleaned
---
# Model Card for Model ID
A QLoRA fine-tuned model trained on the yahma/alpaca-cleaned dataset, which contains around 52K instruction samples.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
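The card leaves this section empty; as a hedged sketch (an assumption based on the metadata above, not instructions from the model authors), the LoRA adapter can presumably be loaded on top of the `unsloth/mistral-7b` base model with PEFT and 4-bit quantization:
```python
# Hedged sketch (assumption, not from the model authors): load the LoRA adapter on top of
# the unsloth/mistral-7b base model in 4-bit, then generate with an Alpaca-style prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "unsloth/mistral-7b"
adapter_id = "dolo650/Mistral-7B-lora-yahma-alpaca-unsloth-bnb"

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# The Alpaca-style prompt format below is an assumption, matching the yahma/alpaca-cleaned data.
prompt = "### Instruction:\nName three primary colors.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```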
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
etri-vilab/koala-1b-llava-cap | etri-vilab | 2024-01-15T08:10:53Z | 19 | 13 | diffusers | [
"diffusers",
"onnx",
"safetensors",
"text-to-image",
"KOALA",
"arxiv:2312.04005",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-01-09T05:49:09Z | ---
tags:
- text-to-image
- KOALA
---
<div align="center">
<img src="https://dl.dropboxusercontent.com/scl/fi/yosvi68jvyarbvymxc4hm/github_logo.png?rlkey=r9ouwcd7cqxjbvio43q9b3djd&dl=1" width="1024px" />
</div>
<div style="display:flex;justify-content: center">
<a href="https://youngwanlee.github.io/KOALA/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Github&color=blue&logo=github-pages"></a>  
<a href="https://github.com/youngwanLEE/sdxl-koala"><img src="https://img.shields.io/static/v1?label=Code&message=Github&color=blue&logo=github"></a>  
<a href="https://arxiv.org/abs/2312.04005"><img src="https://img.shields.io/static/v1?label=Paper&message=Arxiv:KOALA&color=red&logo=arxiv"></a>  
</div>
# KOALA-1B-LLaVA-Caption Model Card
The KOALA-700M and -1B models were originally trained on [LAION-aesthetics-V2 6+](https://laion.ai/blog/laion-aesthetics/), which contains some noisy or very short captions.
So we constructed synthesized captions for LAION-aesthetics-V2 6+ using a large multimodal model, [LLaVA](https://llava-vl.github.io/).
KOALA-700M-LLaVA-Caption and KOALA-1B-LLaVA-Caption are trained on these synthesized caption-image pairs of LAION-aesthetics-V2 6+.
## KOALA Model Cards
|Model|link|
|:--|:--|
|koala-700m | https://huggingface.co/etri-vilab/koala-700m|
|koala-700m-llava-cap | https://huggingface.co/etri-vilab/koala-700m-llava-cap|
|koala-1b | https://huggingface.co/etri-vilab/koala-1b|
|koala-1b-llava-cap | https://huggingface.co/etri-vilab/koala-1b-llava-cap|
## Abstract
### TL;DR
> We propose a fast text-to-image model, called KOALA, by compressing SDXL's U-Net and distilling knowledge from SDXL into our model. KOALA-700M can generate a 1024x1024 image in less than 1.5 seconds on an NVIDIA 4090 GPU, which is more than 2x faster than SDXL. KOALA-700M can be used as a decent alternative between SDM and SDXL in limited resources.
<details><summary>FULL abstract</summary>
Stable diffusion is the mainstay of the text-to-image (T2I) synthesis in the community due to its generation performance and open-source nature.
Recently, Stable Diffusion XL (SDXL), the successor of stable diffusion, has received a lot of attention due to its significant performance improvements with a higher resolution of 1024x1024 and a larger model.
However, its increased computation cost and model size require higher-end hardware (e.g., bigger VRAM GPU) for end-users, incurring higher costs of operation.
To address this problem, in this work, we propose an efficient latent diffusion model for text-to-image synthesis obtained by distilling the knowledge of SDXL.
To this end, we first perform an in-depth analysis of the denoising U-Net in SDXL, which is the main bottleneck of the model, and then design a more efficient U-Net based on the analysis.
Secondly, we explore how to effectively distill the generation capability of SDXL into an efficient U-Net and eventually identify four essential factors, the core of which is that self-attention is the most important part.
With our efficient U-Net and self-attention-based knowledge distillation strategy, we build our efficient T2I models, called KOALA-1B &-700M, while reducing the model size up to 54% and 69% of the original SDXL model.
In particular, the KOALA-700M is more than twice as fast as SDXL while still retaining a decent generation quality.
We hope that due to its balanced speed-performance tradeoff, our KOALA models can serve as a cost-effective alternative to SDXL in resource-constrained environments.
</details>
<br>
These 1024x1024 samples are generated by KOALA-700M with 25 denoising steps.
<div align="center">
<img src="https://dl.dropboxusercontent.com/scl/fi/rjsqqgfney7be069y2yr7/teaser.png?rlkey=7lq0m90xpjcoqclzl4tieajpo&dl=1" width="1024px" />
</div>
## Architecture
There are two types of compressed U-Net, KOALA-1B and KOALA-700M, which are realized by reducing residual blocks and transformer blocks.
<div align="center">
<img src="https://dl.dropboxusercontent.com/scl/fi/5ydeywgiyt1d3njw63dpk/arch.png?rlkey=1p6imbjs4lkmfpcxy153i1a2t&dl=1" width="1024px" />
</div>
### U-Net comparison
| U-Net | SDM-v2.0 | SDXL-Base-1.0 | KOALA-1B | KOALA-700M |
|-------|:----------:|:-----------:|:-----------:|:-------------:|
| Param. | 865M | 2,567M | 1,161M | 782M |
| CKPT size | 3.46GB | 10.3GB | 4.4GB | 3.0GB |
| Tx blocks | [1, 1, 1, 1] | [0, 2, 10] | [0, 2, 6] | [0, 2, 5] |
| Mid block | ✓ | ✓ | ✓ | ✗ |
| Latency | 1.131s | 3.133s | 1.604s | 1.257s |
- Tx means transformer block and CKPT means the trained checkpoint file.
- We measured latency with FP16-precision, and 25 denoising steps in NVIDIA 4090 GPU (24GB).
- SDM-v2.0 uses 768x768 resolution, while SDXL and KOALA models uses 1024x1024 resolution.
## Latency and memory usage comparison on different GPUs
We measure the inference time of SDM-v2.0 with 768x768 resolution and the other models with 1024x1024 using a variety of consumer-grade GPUs: NVIDIA 3060Ti (8GB), 2080Ti (11GB), and 4090 (24GB). We use 25 denoising steps and FP16/FP32 precisions. OOM means Out-of-Memory. Note that SDXL-Base cannot operate on the 8GB GPU.
<div align="center">
<img src="https://dl.dropboxusercontent.com/scl/fi/u1az20y0zfww1l5lhbcyd/latency_gpu.svg?rlkey=vjn3gpkmywmp7jpilar4km7sd&dl=1" width="1024px" />
</div>
## Key Features
- **Efficient U-Net Architecture**: KOALA models use a simplified U-Net architecture that reduces the model size by up to 54% and 69% respectively compared to its predecessor, Stable Diffusion XL (SDXL).
- **Self-Attention-Based Knowledge Distillation**: The core technique in KOALA focuses on the distillation of self-attention features, which proves crucial for maintaining image generation quality.
## Model Description
- Developed by [ETRI Visual Intelligence Lab](https://huggingface.co/etri-vilab)
- Developer: [Youngwan Lee](https://youngwanlee.github.io/), [Kwanyong Park](https://pkyong95.github.io/), [Yoorhim Cho](https://ofzlo.github.io/), [Young-Ju Lee](https://scholar.google.com/citations?user=6goOQh8AAAAJ&hl=en), [Sung Ju Hwang](http://www.sungjuhwang.com/)
- Model Description: Latent-diffusion-based text-to-image generative model. KOALA models use the same text encoders as [SDXL-Base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and only replace the denoising U-Net with the compressed U-Nets.
- Training data: [LAION-aesthetics-V2 6+](https://laion.ai/blog/laion-aesthetics/)
- Resources for more information: Check out [KOALA report on arXiv](https://arxiv.org/abs/2312.04005) and [project page](https://youngwanlee.github.io/KOALA/).
## Usage with 🤗[Diffusers library](https://github.com/huggingface/diffusers)
The inference code with 25 denoising steps:
```python
import torch
from diffusers import StableDiffusionXLPipeline
pipe = StableDiffusionXLPipeline.from_pretrained("etri-vilab/koala-1b-llava-cap", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "A portrait painting of a Golden Retriever like Leonard da Vinci"
negative = "worst quality, low quality, illustration, low resolution"
image = pipe(prompt=prompt, negative_prompt=negative, num_inference_steps=25).images[0]
```
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Limitations and Bias
- Text Rendering: The models face challenges in rendering long, legible text within images.
- Complex Prompts: KOALA sometimes struggles with complex prompts involving multiple attributes.
- Dataset Dependencies: The current limitations are partially attributed to the characteristics of the training dataset (LAION-aesthetics-V2 6+).
## Citation
```bibtex
@misc{Lee@koala,
title={KOALA: Self-Attention Matters in Knowledge Distillation of Latent Diffusion Models for Memory-Efficient and Fast Image Synthesis},
author={Youngwan Lee and Kwanyong Park and Yoorhim Cho and Yong-Ju Lee and Sung Ju Hwang},
year={2023},
eprint={2312.04005},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
hwashang/hs_test1_billsum_model | hwashang | 2024-01-15T08:05:13Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-12T09:53:13Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: hs_test1_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hs_test1_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6341
- Rouge1: 0.1424
- Rouge2: 0.0501
- Rougel: 0.1153
- Rougelsum: 0.1156
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.9223 | 0.1287 | 0.0361 | 0.1068 | 0.107 | 19.0 |
| No log | 2.0 | 124 | 2.7150 | 0.1411 | 0.049 | 0.1157 | 0.1162 | 19.0 |
| No log | 3.0 | 186 | 2.6506 | 0.1396 | 0.0472 | 0.1128 | 0.1133 | 19.0 |
| No log | 4.0 | 248 | 2.6341 | 0.1424 | 0.0501 | 0.1153 | 0.1156 | 19.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
|
aviralkumar28/ppo-Huggy | aviralkumar28 | 2024-01-15T08:05:05Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-01-15T08:04:59Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: aviralkumar28/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
shadowml/Beagle14-7B | shadowml | 2024-01-15T08:01:06Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"fblgit/UNA-TheBeagle-7b-v1",
"argilla/distilabeled-Marcoro14-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-15T07:56:47Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
---
# Beagle14-7B
Beagle14-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
* [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: fblgit/UNA-TheBeagle-7b-v1
layer_range: [0, 32]
- model: argilla/distilabeled-Marcoro14-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: fblgit/UNA-TheBeagle-7b-v1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "shadowml/Beagle14-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
seobak/xlm-roberta-base-finetuned-panx-en | seobak | 2024-01-15T07:58:54Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-01-15T07:57:20Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: validation
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6870824053452116
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3990
- F1: 0.6871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0238 | 1.0 | 50 | 0.4988 | 0.5987 |
| 0.4841 | 2.0 | 100 | 0.4144 | 0.6902 |
| 0.3744 | 3.0 | 150 | 0.3990 | 0.6871 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
YeBhoneLin10/bagan | YeBhoneLin10 | 2024-01-15T07:56:59Z | 36 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"en",
"arxiv:2112.10752",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-01-13T10:55:40Z | ---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- text-to-image
- diffusers
widget:
- text: "bagan, by Vincent van gogh, highly detailed, highly illustration"
example_title: "Example Prompt 1"
- text: "Establishing shot of a bagan, an epic fantasy, dramatic lighting, cinematic, extremely high detail, photorealistic, cinematic lighting, matte painting, artstation, by simon stalenhag, uncharted 4: a thief's end"
example_title: "Example Prompt 2"
- text: "hyper realistic wator color painting, transparent, myanmar bagan ancient city, after raining sense, beautiful cloud, ancient pagoda, some trees, with water splash infront of pagoda, lovely cloud, beautiful golden ratio composition, neutral color, moody image, lots of grey, golden ratio composition, grey and moody, more grey, rule of third, --ar 5:3 --q 0.5 --v 5"
example_title: "Example Prompt 3"
base_model: runwayml/stable-diffusion-v1-5
---
### Introduction:
Meet General Bagan, a cutting-edge text-to-image generator trained on a diverse dataset of over 200 images. With a keen understanding of textual inputs, it effortlessly translates words into visually stunning representations, from lifelike nature scenes to captivating abstract compositions.
### Problem Statement:
When we prompted the base Stable Diffusion model to generate an image of Bagan, it produced a pagoda from Thailand instead. We therefore fine-tuned the model on a large collection of Bagan photos to obtain more accurate results.
### How to create prompt:
When we create prompt for bagan, we have to consider 6 keywords. Those are Subject, Medium, Style, Art-sharing website, Resolution, and Additional details.
Subject -> What you want to see in the picture is the subject. Not writing enough about the subjects is a common error.
Medium -> The medium is the substance that artists work with. Illustration, oil painting, 3D rendering, and photography are a few examples. The impact of Medium is significant because a single keyword can significantly alter the style.
Style -> The image's artistic style is referred to as the style. Pop art, impressionist, and surrealist are a few examples.
Art-sharing website -> Specialty graphic websites like Deviant Art and Artstation compile a large number of images from various genres. One surefire way to direct the image toward these styles is to use them as a prompt.
Resolution -> Resolution represents how sharp and detailed the image is
Additional Details -> Sweeteners added to an image are additional details. To give the image a more dystopian and sci-fi feel, we will add those elements.
The example prompt for general bagan is: bagan, a creepy and eery Halloween setting, with Jack o lanterns on the street and shadow figures lurking about, dynamic lighting, photorealistic fantasy concept art, stunning visuals, creative, cinematic, ultra detailed, trending on art station, spooky vibe.
That prompt gives you the Halloween theme.
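Such prompts can be passed to the model through the standard `diffusers` text-to-image pipeline. A minimal sketch (the negative prompt, output filename, and fp16/CUDA settings are illustrative choices, not requirements):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned checkpoint; fp16 + CUDA are optional conveniences.
pipe = StableDiffusionPipeline.from_pretrained("YeBhoneLin10/bagan", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "bagan, by Vincent van gogh, highly detailed, highly illustration"
image = pipe(prompt, negative_prompt="blurry, low quality").images[0]  # negative prompt is illustrative
image.save("bagan.png")
```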
### Data:
We fine-tuned the Stable Diffusion v1.5 model on 223 Bagan pictures.
### Contributors:
Main Contributor: [Ye Bhone Lin](https://github.com/Ye-Bhone-Lin), Supervisor: Sa Phyo Thu Htet, Contributors: Thant Htoo San, Min Phone Thit
### Limitation:
The model cannot generate photos of humans.
### Other Work:
In our exploration of image generation, we delve into the architectural marvels of Myanmar, featuring iconic landmarks such as Ananda, Shwezigon, Bupaya, Thatbyinnyu, and Mraukoo. Each structure stands as a testament to the rich cultural and historical tapestry of the region, captured through the lens of our innovative text-to-image generator, General Bagan.
### References:
Wikipedia (2022). Stable Diffusion. Retrieved From: https://en.wikipedia.org/wiki/Stable_Diffusion
Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-Resolution Image Synthesis with Latent Diffusion Models. Retrieved From: https://arxiv.org/abs/2112.10752
Naomi Brown (2022). What is Stable Diffusion and How to Use it. Retrieved From: https://www.fotor.com/blog/what-is-stable-diffusion
Mishra, O. (June, 9). Stable Diffusion Explained. Medium. https://medium.com/@onkarmishra/stable-diffusion-explained-1f101284484d
|
seobak/xlm-roberta-base-finetuned-panx-fr | seobak | 2024-01-15T07:55:39Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-01-15T07:53:23Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8395478319554581
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2752
- F1: 0.8395
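A minimal inference sketch, assuming the standard `transformers` token-classification pipeline (the example sentence is a placeholder):
```python
from transformers import pipeline

# Named-entity recognition over the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="seobak/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Emmanuel Macron est né à Amiens."))
```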
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5795 | 1.0 | 191 | 0.3287 | 0.7758 |
| 0.2619 | 2.0 | 382 | 0.2644 | 0.8273 |
| 0.1761 | 3.0 | 573 | 0.2752 | 0.8395 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
seobak/xlm-roberta-base-finetuned-panx-de-fr | seobak | 2024-01-15T07:53:04Z | 93 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-01-15T07:45:59Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1609
- F1: 0.8598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2803 | 1.0 | 715 | 0.1757 | 0.8256 |
| 0.1459 | 2.0 | 1430 | 0.1617 | 0.8473 |
| 0.095 | 3.0 | 2145 | 0.1609 | 0.8598 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
seobak/xlm-roberta-base-finetuned-panx-de | seobak | 2024-01-15T07:42:15Z | 101 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-01-15T07:36:39Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8632930084625432
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1373
- F1: 0.8633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2617 | 1.0 | 525 | 0.1513 | 0.8206 |
| 0.1275 | 2.0 | 1050 | 0.1341 | 0.8518 |
| 0.0808 | 3.0 | 1575 | 0.1373 | 0.8633 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
sumo43/Yi-34b-x2 | sumo43 | 2024-01-15T07:37:04Z | 60 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:jondurbin/bagel-dpo-34b-v0.2",
"base_model:merge:jondurbin/bagel-dpo-34b-v0.2",
"base_model:one-man-army/UNA-34Beagles-32K-bf16-v1",
"base_model:merge:one-man-army/UNA-34Beagles-32K-bf16-v1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-15T05:55:13Z | ---
base_model:
- jondurbin/bagel-dpo-34b-v0.2
- one-man-army/UNA-34Beagles-32K-bf16-v1
tags:
- mergekit
- merge
license: mit
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
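For intuition, SLERP interpolates between two flattened weight tensors along a great circle rather than a straight line. A minimal NumPy sketch of the idea (illustrative only, not mergekit's actual implementation):
```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened weight vectors."""
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))  # angle between the vectors
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```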
### Models Merged
The following models were included in the merge:
* [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2)
* [one-man-army/UNA-34Beagles-32K-bf16-v1](https://huggingface.co/one-man-army/UNA-34Beagles-32K-bf16-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: jondurbin/bagel-dpo-34b-v0.2
dtype: float16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 60]
model: jondurbin/bagel-dpo-34b-v0.2
- layer_range: [0, 60]
model: one-man-army/UNA-34Beagles-32K-bf16-v1
``` |
winglian/zephyr-deita-kto-3ep-v3-r512-bsz8 | winglian | 2024-01-15T07:11:07Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"trl",
"dpo",
"generated_from_trainer",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"base_model:adapter:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"region:us"
] | null | 2024-01-15T07:10:06Z | ---
license: mit
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: HuggingFaceH4/mistral-7b-sft-beta
model-index:
- name: zephyr-deita-kto-3ep-v3-r512-bsz8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.3.0`
```yaml
base_model: HuggingFaceH4/mistral-7b-sft-beta
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false
rl: kto_pair
datasets:
- path: winglian/deita-nectar
split: train_dpo
type: zephyr.nectar
_test_datasets:
- path: winglian/deita-nectar
split: test_dpo
type: zephyr.nectar
dataset_prepared_path: last_run_prepared
val_set_size: 0.0
output_dir: ./zephyr-deita-kto-3ep-v3-r512-bsz8
save_total_limit: 3
hub_model_id: openaccess-ai-collective/kto-zephyr-deita-nectar
adapter: lora
lora_model_dir:
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false
lora_r: 512
lora_alpha: 128
lora_dropout: 0.05
lora_target_linear: true
lora_modules_to_save:
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project: dpo-zephyr-deita-nectar
wandb_entity: oaaic
wandb_watch:
wandb_run_id:
wandb_name: kto-3ep-v3-r512-bsz8-lr1.4e-5
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_8bit
adam_beta2: 0.95
adam_epsilion: 0.00001
lr_scheduler: cosine
learning_rate: 1.414e-5
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
gradient_checkpoint_kwargs:
use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
eval_steps:
eval_table_size:
eval_table_max_new_tokens: 128
save_steps: 45
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
save_safetensors: true
dataloader_num_workers: 16
dataloader_pin_memory: true
```
</details><br>
# zephyr-deita-kto-3ep-v3-r512-bsz8
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the None dataset.
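Since this repository contains a LoRA adapter rather than full model weights, a minimal loading sketch looks like the following (assuming the adapter files sit at the repository root):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "HuggingFaceH4/mistral-7b-sft-beta"
adapter_id = "winglian/zephyr-deita-kto-3ep-v3-r512-bsz8"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # applies the KTO-trained LoRA weights
```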
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.414e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 8
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 3230
### Training results
### Framework versions
- PEFT 0.7.0
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0 |
pankajcipher/Taxi-v3 | pankajcipher | 2024-01-15T07:06:55Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-15T07:06:53Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="pankajcipher/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
GAI-LLM/Llama-2-ko-instruct-13B-mixed-v13 | GAI-LLM | 2024-01-15T06:59:21Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-15T04:04:46Z | ---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**The license is `cc-by-nc-4.0`.**
# **GAI-LLM/Llama-2-ko-instruct-13B-mixed-v13**
## Model Details
**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
GAI-LLM/Llama-2-ko-instruct-13B-mixed-v13 is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [daekeun-ml/Llama-2-ko-instruct-13B](https://huggingface.co/daekeun-ml/Llama-2-ko-instruct-13B)
**Training Dataset**
- We combined open Korean datasets using a mixed strategy.
- We used 8 × A100 80GB GPUs for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
# Implementation Code
```python
### GAI-LLM/Llama-2-ko-instruct-13B-mixed-v13
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "GAI-LLM/Llama-2-ko-instruct-13B-mixed-v13"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
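# --- Illustrative generation step (hedged addition, not from the original card); ---
# --- the prompt below is only a placeholder. ---
prompt = "한국의 수도는 어디인가요?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))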
``` |
dishank19/Mistral7B-SHP | dishank19 | 2024-01-15T06:59:14Z | 5 | 1 | peft | [
"peft",
"safetensors",
"text-generation",
"en",
"dataset:stanfordnlp/SHP",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | text-generation | 2023-12-24T04:47:23Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
license: apache-2.0
datasets:
- stanfordnlp/SHP
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
Mistral 7B fine-tuned on the [SHP Dataset](https://huggingface.co/datasets/stanfordnlp/SHP) (r/askHistorians) using QLoRA.
This is basically a cool side project: I wanted to see whether I could fine-tune it with no added compute costs, so training was done on Google Colab.
- **Developed by:** [Dishank Jhaveri]
- **Model type:** [Chat]
- **Language(s) (NLP):** [English]
- **Finetuned from model:** [Mistral-7B-v0.1]
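A minimal loading sketch, assuming the adapter weights in this repository resolve against the base model through the PEFT adapter config (`AutoPeftModelForCausalLM` reads the base model id from that config):
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "dishank19/Mistral7B-SHP"
model = AutoPeftModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")  # base model tokenizer
```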
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/dishank19/Mistral7B_Finetune]
-
## Model Card Contact
[[email protected]]
### Framework versions
- PEFT 0.7.1 |
Tigranchick/phi2-results2 | Tigranchick | 2024-01-15T06:50:15Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-01-15T06:46:40Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: microsoft/phi-2
model-index:
- name: phi2-results2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi2-results2
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3743
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 12
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2952 | 0.97 | 14 | 1.0035 |
| 1.0151 | 2.0 | 29 | 0.6051 |
| 0.5514 | 2.97 | 43 | 0.4484 |
| 0.4471 | 4.0 | 58 | 0.3983 |
| 0.4017 | 4.83 | 70 | 0.3743 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
hopkins/mbart-finetuned-eng-kor-50 | hopkins | 2023-07-03T05:01:57Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T04:44:13Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-50
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9913
- Bleu: 7.0488
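A minimal inference sketch for this English-to-Korean checkpoint, assuming the standard mBART-50 tokenizer conventions (the input sentence is a placeholder):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "hopkins/mbart-finetuned-eng-kor-50"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

tokenizer.src_lang = "en_XX"  # mBART-50 code for English
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"])
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```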
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
chriskim2273/IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta | chriskim2273 | 2023-07-03T04:50:05Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-03T04:13:01Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7219
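A minimal usage sketch with the `transformers` question-answering pipeline; the question and context below are placeholders:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="chriskim2273/IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta",
)
result = qa(
    question="What is the company name?",
    context="Acme Robotics announced a new funding round this week.",  # placeholder context
)
print(result)  # answer span with score and character offsets
```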
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 45 | 0.5443 |
| No log | 2.0 | 90 | 0.6332 |
| No log | 3.0 | 135 | 0.6942 |
| No log | 4.0 | 180 | 0.6725 |
| No log | 5.0 | 225 | 0.7219 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Aeala/Enterredaas-65b-QLoRA | Aeala | 2023-07-03T04:34:35Z | 0 | 4 | null | [
"region:us"
] | null | 2023-07-03T04:07:43Z | ## LoRA Info:
Please note that this is a highly experimental LoRA model. It may do some good stuff, and it might do some undesirable stuff. Training is paused for now. Feel free to try it!~
**Important Note**: This was trained in the *Alpaca* format, so prompting should be something like:
```
### Instruction:
<system prompt> (without the <>, this works like telling the AI what it is/purpose. i.e. like ChatGPT API's system prompt)
### Input:
<prompt> (without the <>)
### Response:
```
Current upload: *possibly* final checkpoint
## Benchmarks
**wikitext2:** Coming soon...
**ptb-new:** Coming soon...
**c4-new:** Coming soon... |
hopkins/mbart-finetuned-eng-deu-50 | hopkins | 2023-07-03T04:24:57Z | 57 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T04:06:46Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-50
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6559
- Bleu: 21.0004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-47 | hopkins | 2023-07-03T04:17:24Z | 58 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:59:40Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-47
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9922
- Bleu: 6.8895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aaa950739/trained_model | aaa950739 | 2023-07-03T04:16:04Z | 96 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2023-07-03T03:56:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: trained_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_model
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
deepghs/imgutils-models | deepghs | 2023-07-03T04:12:18Z | 0 | 6 | null | [
"onnx",
"dataset:deepghs/chafen_arknights",
"dataset:deepghs/monochrome_danbooru",
"license:mit",
"region:us"
] | null | 2023-03-11T08:37:38Z | ---
license: mit
datasets:
- deepghs/chafen_arknights
- deepghs/monochrome_danbooru
metrics:
- accuracy
---
# imgutils-models
This repository includes all the models in [deepghs/imgutils](https://github.com/deepghs/imgutils).
## LPIPS
This model is used for clustering anime images (named `差分` in Chinese), based on [richzhang/PerceptualSimilarity](https://github.com/richzhang/PerceptualSimilarity), trained with dataset [deepghs/chafen_arknights(private)](https://huggingface.co/datasets/deepghs/chafen_arknights).
When the threshold is `0.45`, the [adjusted rand score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html) can reach `0.995`.
File lists:
* `lpips_diff.onnx`, feature difference.
* `lpips_feature.onnx`, feature extracting.
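A minimal sketch for inspecting the two LPIPS ONNX graphs with `onnxruntime`. The filenames follow the list above and may need a path adjustment if the repository stores them in a subfolder; the expected preprocessing is not documented here, so read it off the printed input shapes:
```python
import onnxruntime as ort
from huggingface_hub import hf_hub_download

repo = "deepghs/imgutils-models"
for filename in ["lpips_feature.onnx", "lpips_diff.onnx"]:
    path = hf_hub_download(repo_id=repo, filename=filename)  # adjust filename/path if stored in a subfolder
    sess = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
    print(filename, [(i.name, i.shape, i.type) for i in sess.get_inputs()])
```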
## Monochrome
These models are used for monochrome image classification, based on CNNs and Transformers and trained with the dataset [deepghs/monochrome_danbooru(private)](https://huggingface.co/datasets/deepghs/monochrome_danbooru).
The following are the checkpoints that have been formally put into use, all based on the Caformer architecture:
| Checkpoint | Algorithm | Safe Level | Accuracy | False Negative | False Positive |
|:----------------------------:|:---------:|:----------:|:----------:|:--------------:|:--------------:|
| monochrome-caformer-40 | caformer | 0 | 96.41% | 2.69% | 0.89% |
| **monochrome-caformer-110** | caformer | 0 | **96.97%** | 1.57% | 1.46% |
| monochrome-caformer_safe2-80 | caformer | 2 | 94.84% | **1.12%** | 4.03% |
| monochrome-caformer_safe4-70 | caformer | 4 | 94.28% | **0.67%** | 5.04% |
**`monochrome-caformer-110` has the best overall accuracy** among them. However, since this model is often used to screen out monochrome images
and we want to catch as many as possible without omission, we have also introduced weighted models (`safe2` and `safe4`).
Although their overall accuracy has been slightly reduced, the probability of False Negative (misidentifying a monochrome image as a colored one) is lower,
making them more suitable for batch screening.
## Deepdanbooru
`deepdanbooru` is a model used to tag anime images. Here, we provide a table for tag classification called `deepdanbooru_tags.csv`,
as well as an ONNX model (from [chinoll/deepdanbooru](https://huggingface.co/spaces/SmilingWolf/wd-v1-4-tags)).
It's worth noting that due to the poor quality of the deepdanbooru model itself and the relatively old dataset,
it is only for testing purposes and is not recommended to be used as the main classification model. We recommend using the `wd14` model instead, see:
* https://huggingface.co/spaces/SmilingWolf/wd-v1-4-tags
|
hopkins/mbart-finetuned-eng-ind-49 | hopkins | 2023-07-03T04:11:46Z | 62 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:53:54Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-49
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-49
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7653
- Bleu: 22.0600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
vineetsharma/whisper-base-finetuned-gtzan | vineetsharma | 2023-07-03T04:03:36Z | 49 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-03T01:16:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: whisper-base-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-finetuned-gtzan
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6867
- Accuracy: 0.87
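A minimal inference sketch, assuming the checkpoint exposes a standard audio-classification head usable through the `transformers` pipeline (the audio file path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="vineetsharma/whisper-base-finetuned-gtzan")
print(classifier("example_clip.wav"))  # placeholder path; returns genre labels with scores
```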
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9075 | 1.0 | 57 | 1.0000 | 0.58 |
| 0.4569 | 2.0 | 114 | 0.6073 | 0.83 |
| 0.3761 | 3.0 | 171 | 0.6410 | 0.8 |
| 0.3049 | 4.0 | 228 | 0.4536 | 0.86 |
| 0.0284 | 5.0 | 285 | 0.5120 | 0.85 |
| 0.0165 | 6.0 | 342 | 0.4856 | 0.89 |
| 0.0087 | 7.0 | 399 | 0.6814 | 0.87 |
| 0.0038 | 8.0 | 456 | 0.7059 | 0.85 |
| 0.0032 | 9.0 | 513 | 0.6831 | 0.87 |
| 0.0034 | 10.0 | 570 | 0.6867 | 0.87 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3 |
hopkins/mbart-finetuned-eng-ind-47 | hopkins | 2023-07-03T03:59:13Z | 49 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:41:18Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-47
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7657
- Bleu: 21.8229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-49 | hopkins | 2023-07-03T03:53:24Z | 50 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:35:07Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-49
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-49
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6500
- Bleu: 21.1322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-47 | hopkins | 2023-07-03T03:40:49Z | 43 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:22:38Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-47
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6483
- Bleu: 20.8742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-46 | hopkins | 2023-07-03T03:33:45Z | 48 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:15:41Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-46
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-46
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6533
- Bleu: 20.8950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-44 | hopkins | 2023-07-03T03:32:33Z | 55 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:14:52Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-44
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9949
- Bleu: 6.8417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-43 | hopkins | 2023-07-03T03:22:08Z | 74 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:08:49Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-43
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-43
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9892
- Bleu: 6.9989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-42 | hopkins | 2023-07-03T03:15:12Z | 62 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:57:33Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-42
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9879
- Bleu: 6.7656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-44 | hopkins | 2023-07-03T03:14:24Z | 67 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:56:32Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-44
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7625
- Bleu: 21.9586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
samzoozi/atari_game | samzoozi | 2023-07-03T03:04:22Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T03:03:41Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 718.00 +/- 220.55
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga samzoozi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga samzoozi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga samzoozi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Sourabh2/Cartpole-v2 | Sourabh2 | 2023-07-03T03:03:46Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T03:02:25Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
|
jinlee74/distilbert-base-uncased-finetuned-emotions | jinlee74 | 2023-07-03T02:59:35Z | 55 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-03T00:11:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotions
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9415
- name: F1
type: f1
value: 0.9416116671925132
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2357
- Accuracy: 0.9415
- F1: 0.9416
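A minimal inference sketch with the `transformers` text-classification pipeline (the example sentence is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jinlee74/distilbert-base-uncased-finetuned-emotions")
print(classifier("I can't wait to see the results!"))  # placeholder sentence
```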
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.016 | 1.0 | 250 | 0.2262 | 0.9405 | 0.9404 |
| 0.011 | 2.0 | 500 | 0.2357 | 0.9415 | 0.9416 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.1.0.dev20230316
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AshtakaOOf/ssambatea-locon | AshtakaOOf | 2023-07-03T02:58:58Z | 0 | 1 | null | [
"Text-to-Image",
"anime",
"lora",
"locon",
"lycoris",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-07-03T01:36:57Z | ---
license: cc-by-nc-sa-4.0
tags:
- Text-to-Image
- anime
- lora
- locon
- lycoris
---
# SSAMBAtea Style LoCon

## token: **ssambatea**
Trained on SSAMBAtea artwork
This is a LoCon and requires the LyCORIS extension to work.
I am planning on making a new, improved dataset for a V2.
# License
[CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) |
hopkins/mbart-finetuned-eng-deu-45 | hopkins | 2023-07-03T02:57:51Z | 59 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:39:34Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-45
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6514
- Bleu: 20.9841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
alibaba-pai/pai-diffusion-artist-large-zh | alibaba-pai | 2023-07-03T02:56:37Z | 14 | 7 | diffusers | [
"diffusers",
"pytorch",
"text-to-image",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-04-03T09:38:48Z | ---
license: apache-2.0
tags:
- pytorch
- diffusers
- text-to-image
---
# Chinese Diffusion Model (Artist, 512 Resolution)
## 简介 Brief Introduction
我们开源了一个中文 Diffusion 模型,您可以直接输入中文提示词,我们为您呈现精美的艺术风格图片。本模型的默认分辨率是 512*512。
We release a Chinese diffusion model, which is able to generate high-quality artistic images according to the prompts you input. The default resolution of this model is 512*512.
* Github: [EasyNLP](https://github.com/alibaba/EasyNLP)
## 使用 Usage
本模型支持 `diffusers`,可以参考以下范例代码:
This model supports `diffusers`. Please refer to the following code:
```python
from diffusers import StableDiffusionPipeline
model_id = "alibaba-pai/pai-diffusion-artist-large-zh"
pipe = StableDiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")
prompt = "雾蒙蒙的日出在湖面上"
image = pipe(prompt).images[0]
image.save("result.png")
```
## 作品展示 Gallery
| prompt: 浮岛,天空,白云,城堡,幻想世界 | prompt: 红白玫瑰花,很多花瓣,绽放 |
| ---------------------------------------- | ---------------------------------- |
| negative_prompt: 油画,模糊,雾蒙蒙 | negative_prompt: |
|  |  |
| prompt: 亭台楼阁,曲径通幽,水墨绘画,中国风 | prompt: 阳光,彩虹,小白马 |
| -------------------------------------------- | -------------------------- |
| negative_prompt: 油画,彩色 | negative_prompt: |
|  |  |
## 使用须知 Notice for Use
使用上述模型需遵守[AIGC模型开源特别条款](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html)。
If you want to use this model, please read this [document](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html) carefully and abide by the terms.
|
digiplay/Landscape_PhotoReal_v1 | digiplay | 2023-07-03T02:53:33Z | 620 | 7 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-03T02:20:00Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/71987/landscapephotoreal?modelVersionId=76750
Sample images and prompt :
magnificent scenery, wide landscape, sharp and crisp background, very beautiful landscape, old ruins buildings, fantasy, birdview, best quality, masterpiece, ultra high res, dark blue light, cloudy, photo, photorealistic, wide view, kkw-ph1


photorealistic modern living room, sharp and crisp background, sofa, low table, bookshelf, parks and buildings from window, wood and flower, beautiful landscape, best quality, masterpiece, hires, in the morning light, detailed lighting, blue sky, (((photo))), (((photorealistic))) ,kkw-ph1, wide shot, web meeting background

|
hopkins/mbart-finetuned-eng-kor-40 | hopkins | 2023-07-03T02:37:25Z | 74 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:19:49Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-40
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9919
- Bleu: 7.0359
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
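The card does not include an inference example. A hedged sketch following the standard mBART-50 usage pattern, assuming the tokenizer is bundled with this checkpoint and that the translation direction is English→Korean as the model name suggests:
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "hopkins/mbart-finetuned-eng-kor-40"
model = MBartForConditionalGeneration.from_pretrained(model_id)
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"  # source language: English
encoded = tokenizer("The weather is nice today.", return_tensors="pt")

# Force the decoder to start with the Korean language token.
generated = model.generate(
    **encoded, forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"]
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```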
|
Rasith/NZappFineTune2 | Rasith | 2023-07-03T02:31:27Z | 31 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-03T02:31:01Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: NZappFineTune2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# NZappFineTune2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-39 | hopkins | 2023-07-03T02:31:10Z | 53 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:13:29Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-39
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9925
- Bleu: 6.7954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
djifg/grow_classification_xlmr2 | djifg | 2023-07-03T02:28:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-03T01:59:42Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: grow_classification_xlmr2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# grow_classification_xlmr2
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5585
- Accuracy: 0.9309
- F1: 0.9297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2832 | 1.0 | 436 | 0.4686 | 0.8870 | 0.8872 |
| 0.0717 | 2.0 | 872 | 0.5915 | 0.8964 | 0.8950 |
| 0.0374 | 3.0 | 1308 | 0.4898 | 0.9276 | 0.9266 |
| 0.0205 | 4.0 | 1744 | 0.5333 | 0.9271 | 0.9257 |
| 0.0101 | 5.0 | 2180 | 0.5585 | 0.9309 | 0.9297 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
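No usage example is provided. A minimal hedged sketch using the transformers pipeline API (the label names and expected input language are unknown, so the input text is only a placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="djifg/grow_classification_xlmr2")
print(classifier("This is a placeholder sentence to classify."))
# Output is a list like [{'label': 'LABEL_0', 'score': 0.98}];
# the actual label names depend on the (undocumented) training data.
```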
|
jncraton/fastchat-t5-3b-v1.0-ct2-int8 | jncraton | 2023-07-03T02:24:58Z | 3 | 2 | transformers | [
"transformers",
"license:apache-2.0",
"region:us"
] | null | 2023-07-03T01:59:59Z | ---
license: apache-2.0
inference: false
---
# FastChat-T5 Model Card
## Model details
**Model type:**
FastChat-T5 is an open-source chatbot trained by fine-tuning Flan-t5-xl (3B parameters) on user-shared conversations collected from ShareGPT.
It is based on an encoder-decoder transformer architecture, and can autoregressively generate responses to users' inputs.
**Model date:**
FastChat-T5 was trained in April 2023.
**Organizations developing the model:**
The FastChat developers, primarily Dacheng Li, Lianmin Zheng and Hao Zhang.
**Paper or resources for more information:**
https://github.com/lm-sys/FastChat#FastChat-T5
**License:**
Apache License 2.0
**Where to send questions or comments about the model:**
https://github.com/lm-sys/FastChat/issues
## Intended use
**Primary intended uses:**
The primary intended use of FastChat-T5 is commercial deployment of large language models and chatbots. It can also be used for research purposes.
**Primary intended users:**
The primary intended users of the model are entrepreneurs and researchers in natural language processing, machine learning, and artificial intelligence.
## Training dataset
70K conversations collected from ShareGPT.com.
## Training details
The training pipeline processes the ShareGPT data as question answering: each ChatGPT response is treated as the answer, and the preceding conversation between the user and ChatGPT is treated as the question.
The encoder bi-directionally encodes a question into a hidden representation. The decoder uses cross-attention to attend to this representation while generating an answer uni-directionally from a start token.
The model is fine-tuned for 3 epochs with a maximum learning rate of 2e-5, a warmup ratio of 0.03, and a cosine learning rate schedule.
## Evaluation dataset
A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details.
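This repository hosts a CTranslate2 int8 conversion of FastChat-T5, so inference goes through `ctranslate2` rather than transformers generation. A hedged sketch following the generic CTranslate2 T5 pattern (the prompt format, tokenizer location, and decoding settings are assumptions):
```python
import ctranslate2
import transformers
from huggingface_hub import snapshot_download

# Download the converted model files from the Hub
# (assumes the CTranslate2 model sits at the repo root).
model_dir = snapshot_download("jncraton/fastchat-t5-3b-v1.0-ct2-int8")

translator = ctranslate2.Translator(model_dir, compute_type="int8")
# Assumes the tokenizer files ship with the repo; otherwise load them from lmsys/fastchat-t5-3b-v1.0.
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)

prompt = "What is the capital of France?"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = translator.translate_batch([tokens], max_decoding_length=256)
output_ids = tokenizer.convert_tokens_to_ids(results[0].hypotheses[0])
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```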
|
hopkins/mbart-finetuned-eng-kor-38 | hopkins | 2023-07-03T02:24:16Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:06:33Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-38
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-38
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9945
- Bleu: 6.8900
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-40 | hopkins | 2023-07-03T02:19:21Z | 64 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:01:28Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-40
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7628
- Bleu: 21.8914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-39 | hopkins | 2023-07-03T02:13:00Z | 63 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:55:07Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-39
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7633
- Bleu: 21.8212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Bin12123/Chat | Bin12123 | 2023-07-03T02:11:49Z | 0 | 0 | null | [
"zh",
"dataset:fka/awesome-chatgpt-prompts",
"region:us"
] | null | 2023-07-03T02:10:05Z | ---
datasets:
- fka/awesome-chatgpt-prompts
language:
- zh
--- |
hopkins/mbart-finetuned-eng-ind-38 | hopkins | 2023-07-03T02:06:04Z | 65 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:52:19Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-38
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-38
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7718
- Bleu: 21.7535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
digiplay/CityEdge_StyleMix_v1.44 | digiplay | 2023-07-03T02:03:34Z | 310 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-03T01:27:43Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/63243/cityedgestylemix
Sample images and prompt :
1girl, solo, long hair blown by wind,close-up ,long dress, green eyes, white stocking, lace, look at viewer, luxurious, elegant, extremely detailed, majestic, blurry, blurry background, tree, branch, cherry blossoms, butterfly, flower petals blown by wind, depth of field,

8k Angel sky,best quality , masterpiece, close up, ultra detailed ,upper body


|
Ngadou/falcon7b-scam-detector | Ngadou | 2023-07-03T02:03:13Z | 4 | 0 | adapter-transformers | [
"adapter-transformers",
"text-generation",
"en",
"fr",
"dataset:timdettmers/openassistant-guanaco",
"license:apache-2.0",
"region:us"
] | text-generation | 2023-07-03T01:49:44Z | ---
license: apache-2.0
datasets:
- timdettmers/openassistant-guanaco
language:
- en
- fr
metrics:
- accuracy
pipeline_tag: text-generation
library_name: adapter-transformers
--- |
hopkins/mbart-finetuned-eng-deu-40 | hopkins | 2023-07-03T02:00:58Z | 70 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:42:43Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-40
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6497
- Bleu: 20.8437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Huggingfly/q-Taxi-v3 | Huggingfly | 2023-07-03T01:57:19Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T01:55:19Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook (Unit 2);
# it downloads the pickled dictionary from the Hub and loads it with pickle.
model = load_from_hub(repo_id="Huggingfly/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
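Once the pickled dictionary is loaded, the greedy policy simply takes the arg-max action from the Q-table at each state. A short sketch assuming the course's dictionary layout (a `qtable` key) and a recent gym API that returns five values from `step`:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the learned Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```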
|
hopkins/mbart-finetuned-eng-kor-37 | hopkins | 2023-07-03T01:48:15Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:30:38Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-37
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9920
- Bleu: 6.9377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-36 | hopkins | 2023-07-03T01:42:12Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:24:37Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-36
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9933
- Bleu: 6.9791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-34 | hopkins | 2023-07-03T01:33:22Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:15:53Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-34
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-34
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9937
- Bleu: 7.1397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ankitvyas/myBloomLoraModel | ankitvyas | 2023-07-03T01:31:54Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-03T01:19:55Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
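The card only lists the PEFT version. A hedged loading sketch following the standard PEFT adapter pattern (the repository name suggests a BLOOM base model, but the actual base is read from the adapter config):
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ankitvyas/myBloomLoraModel"

config = PeftConfig.from_pretrained(repo_id)  # reads the adapter config from the Hub
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

model = PeftModel.from_pretrained(base, repo_id)  # attaches the LoRA adapter weights
model.eval()
```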
|
hopkins/mbart-finetuned-eng-ind-37 | hopkins | 2023-07-03T01:30:10Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:12:28Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-37
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7649
- Bleu: 21.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Ngadou/bert-sms-spam-dectector | Ngadou | 2023-07-03T01:29:26Z | 111 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"en",
"dataset:Ngadou/Spam_SMS",
"doi:10.57967/hf/0746",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-10T21:24:39Z | ---
license: cc-by-4.0
datasets:
- Ngadou/Spam_SMS
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
--- |
hopkins/mbart-finetuned-eng-ind-34 | hopkins | 2023-07-03T01:15:25Z | 121 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T00:57:39Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-34
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-34
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7610
- Bleu: 21.9140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-37 | hopkins | 2023-07-03T01:11:58Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T00:53:43Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-37
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6509
- Bleu: 20.9509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-34 | hopkins | 2023-07-03T00:57:10Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T00:43:07Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-34
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-34
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6495
- Bleu: 20.8330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
saswat94/my-pet-bear-nxt | saswat94 | 2023-07-03T00:55:44Z | 6 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-03T00:51:10Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Bear-nxt Dreambooth model trained by saswat94 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CVRGU633
Sample pictures of this concept:

|
hopkins/mbart-finetuned-eng-kor-31 | hopkins | 2023-07-03T00:45:09Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T00:31:48Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-31
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-31
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9893
- Bleu: 7.0441
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-30 | hopkins | 2023-07-03T00:42:37Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T00:29:16Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-30
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9943
- Bleu: 7.0556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-33 | hopkins | 2023-07-03T00:39:01Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T00:25:25Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-33
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7673
- Bleu: 21.9515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
renyulin/llama-7b-es-ppo-adpater | renyulin | 2023-07-03T00:35:51Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2023-07-03T00:35:48Z | ---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="renyulin/llama-7b-es-ppo-adpater")  # Hub repo id (the auto-generated path included a temporary directory)
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("renyulin/llama-7b-es-ppo-adpater")
model = AutoModelForCausalLMWithValueHead.from_pretrained("renyulin/llama-7b-es-ppo-adpater")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
TalesLF/Reinforce-Pixelcopter-PLE-v0 | TalesLF | 2023-07-03T00:34:21Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T00:34:16Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 30.60 +/- 21.44
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|