| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| string (5–139 chars) | string (2–42 chars) | timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-06-29 18:27:25) | int64 (0 to 223M) | int64 (0 to 11.7k) | string (502 classes) | sequence (1 to 4.05k tags) | string (54 classes) | timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-06-29 18:27:24) | string (11 to 1.01M chars) |
domini4/flan-t5-base-imdb-text-classification | domini4 | 2024-03-21T03:44:56Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-21T02:10:54Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: flan-t5-base-imdb-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-imdb-text-classification
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0821
- F1: 94.88
- Gen Len: 2.5030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
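As a hedged illustration (not the original training script), these values map onto the 🤗 `Seq2SeqTrainingArguments` API roughly as follows; `output_dir` is a placeholder, and Adam with betas=(0.9,0.999) and epsilon=1e-08 is already the Trainer default, so it needs no explicit argument:
```python
from transformers import Seq2SeqTrainingArguments

# Hedged sketch of the configuration listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-imdb-text-classification",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    predict_with_generate=True,  # assumption: needed to compute F1/Gen Len on generated text
)
```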
### Training results
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Awaz-e-Sehat/whisper-large-v3 | Awaz-e-Sehat | 2024-03-21T03:43:33Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-large-v3",
"base_model:adapter:openai/whisper-large-v3",
"license:apache-2.0",
"region:us"
] | null | 2024-03-21T03:43:27Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: openai/whisper-large-v3
model-index:
- name: whisper-large-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v3
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.8257
- eval_runtime: 1.4346
- eval_samples_per_second: 0.697
- eval_steps_per_second: 0.697
- epoch: 2.0
- step: 142
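Since the adapter was trained with PEFT on top of `openai/whisper-large-v3` (see the `base_model:adapter` tag), a hedged loading sketch, assuming the repository contains a standard PEFT adapter layout, looks like this:
```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Hedged sketch: assumes this repo ships a standard PEFT adapter (adapter_config.json + weights).
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model = PeftModel.from_pretrained(base, "Awaz-e-Sehat/whisper-large-v3")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
```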
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 50
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Lewdiculous/Irene-RP-v2-7B-GGUF-IQ-Imatrix | Lewdiculous | 2024-03-21T03:37:26Z | 55 | 5 | null | [
"gguf",
"experimental",
"endpoints_compatible",
"region:us"
] | null | 2024-03-21T03:06:12Z | ---
tags:
- experimental
---
This is experimental.
Model information:
[Virt-io/Irene-RP-v2-7B](https://huggingface.co/Virt-io/Irene-RP-v2-7B) |
feizhe/vit-base-patch16-224-in21k-pheno | feizhe | 2024-03-21T03:35:45Z | 63 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-03-21T01:19:28Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: feizhe/vit-base-patch16-224-in21k-pheno
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# feizhe/vit-base-patch16-224-in21k-pheno
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0384
- Train Accuracy: 1.0
- Train Top-3-accuracy: 1.0
- Validation Loss: 1.5644
- Validation Accuracy: 0.5848
- Validation Top-3-accuracy: 0.9064
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1615, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
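The serialized optimizer above is the Keras mixed-precision wrapper around `AdamWeightDecay` with a linear `PolynomialDecay` schedule. A hedged reconstruction using the `transformers.create_optimizer` helper (values taken from the config above; the global-policy call stands in for the dynamic loss scaling, and this is not the original training script):
```python
import tensorflow as tf
from transformers import create_optimizer

tf.keras.mixed_precision.set_global_policy("mixed_float16")

optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,            # initial_learning_rate
    num_train_steps=1615,    # decay_steps
    num_warmup_steps=0,
    weight_decay_rate=0.01,  # power=1.0 and end_learning_rate=0.0 are the defaults
)
```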
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.7874 | 0.8031 | 0.9555 | 1.2405 | 0.5380 | 0.9006 | 0 |
| 0.1372 | 0.9893 | 0.9999 | 1.4714 | 0.5380 | 0.8947 | 1 |
| 0.0644 | 0.9989 | 1.0 | 1.6014 | 0.5673 | 0.9064 | 2 |
| 0.0465 | 0.9990 | 1.0 | 1.5618 | 0.5906 | 0.9064 | 3 |
| 0.0384 | 1.0 | 1.0 | 1.5644 | 0.5848 | 0.9064 | 4 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.10.0
- Datasets 2.18.0
- Tokenizers 0.13.3
|
Pot-l/llama-7b-lawbot-true | Pot-l | 2024-03-21T03:26:49Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-03-21T02:26:34Z | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: meta-llama/Llama-2-7b-chat-hf
model-index:
- name: llama-7b-lawbot-true
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-lawbot-true
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
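As a quick consistency check, the reported `total_train_batch_size` is simply the per-device batch size multiplied by the gradient accumulation steps:
```python
train_batch_size = 2
gradient_accumulation_steps = 2
print(train_batch_size * gradient_accumulation_steps)  # 4 == total_train_batch_size
```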
### Training results
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2 |
captainrobotfly/demo3 | captainrobotfly | 2024-03-21T03:23:46Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"id",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-20T22:03:58Z | ---
license: mit
language:
- id
pipeline_tag: text-generation
--- |
shubov/omop_bert | shubov | 2024-03-21T03:12:50Z | 125 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-02-01T14:15:58Z | ---
tags:
- generated_from_trainer
model-index:
- name: omop_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# omop_bert
This model was trained on an unspecified dataset; no base checkpoint was recorded in the card.
It achieves the following results on the evaluation set:
- Loss: 1.0120
- Num Input Tokens Seen: 8192000000
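The token count is consistent with the other hyperparameters: 8,192,000,000 tokens over 1,000,000 steps at batch size 16 works out to 512 tokens per sequence, assuming fixed-length inputs. Since the model is tagged for fill-mask, a hedged usage sketch follows (the input format is an assumption; the name suggests the model expects OMOP concept sequences rather than natural language):
```python
from transformers import pipeline

# Hedged sketch: assumes the repo ships a tokenizer with a defined mask token.
fill = pipeline("fill-mask", model="shubov/omop_bert")
mask = fill.tokenizer.mask_token  # don't hard-code "[MASK]"
print(fill(f"some input sequence with a {mask} token"))  # placeholder input
```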
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-------:|:---------------:|:-----------------:|
| 6.663 | 0.25 | 5000 | 6.6552 | 40960000 |
| 5.8854 | 0.49 | 10000 | 5.8635 | 81920000 |
| 5.5478 | 0.74 | 15000 | 5.4674 | 122880000 |
| 5.1915 | 0.98 | 20000 | 5.1096 | 163840000 |
| 4.4684 | 1.23 | 25000 | 4.4560 | 204800000 |
| 3.7495 | 1.47 | 30000 | 3.6515 | 245760000 |
| 3.3969 | 1.72 | 35000 | 3.3215 | 286720000 |
| 3.2418 | 1.97 | 40000 | 3.0943 | 327680000 |
| 2.7464 | 2.21 | 45000 | 2.5451 | 368640000 |
| 2.2447 | 2.46 | 50000 | 2.1026 | 409600000 |
| 1.8854 | 2.7 | 55000 | 1.8477 | 450560000 |
| 1.7938 | 2.95 | 60000 | 1.6798 | 491520000 |
| 1.6738 | 3.2 | 65000 | 1.5679 | 532480000 |
| 1.6701 | 3.44 | 70000 | 1.5054 | 573440000 |
| 1.485 | 3.69 | 75000 | 1.4480 | 614400000 |
| 1.5258 | 3.93 | 80000 | 1.4007 | 655360000 |
| 1.4916 | 4.18 | 85000 | 1.3906 | 696320000 |
| 1.4113 | 4.42 | 90000 | 1.3684 | 737280000 |
| 1.4387 | 4.67 | 95000 | 1.3493 | 778240000 |
| 1.388 | 4.92 | 100000 | 1.3386 | 819200000 |
| 1.346 | 5.16 | 105000 | 1.3352 | 860160000 |
| 1.3504 | 5.41 | 110000 | 1.3294 | 901120000 |
| 1.3432 | 5.65 | 115000 | 1.3168 | 942080000 |
| 1.2821 | 5.9 | 120000 | 1.3041 | 983040000 |
| 1.2748 | 6.15 | 125000 | 1.2871 | 1024000000 |
| 1.3076 | 6.39 | 130000 | 1.2783 | 1064960000 |
| 1.3397 | 6.64 | 135000 | 1.2690 | 1105920000 |
| 1.301 | 6.88 | 140000 | 1.2653 | 1146880000 |
| 1.2416 | 7.13 | 145000 | 1.2584 | 1187840000 |
| 1.2513 | 7.37 | 150000 | 1.2515 | 1228800000 |
| 1.2618 | 7.62 | 155000 | 1.2415 | 1269760000 |
| 1.2366 | 7.87 | 160000 | 1.2399 | 1310720000 |
| 1.2584 | 8.11 | 165000 | 1.2245 | 1351680000 |
| 1.1951 | 8.36 | 170000 | 1.2225 | 1392640000 |
| 1.2576 | 8.6 | 175000 | 1.2286 | 1433600000 |
| 1.278 | 8.85 | 180000 | 1.2140 | 1474560000 |
| 1.1975 | 9.09 | 185000 | 1.2103 | 1515520000 |
| 1.1596 | 9.34 | 190000 | 1.2052 | 1556480000 |
| 1.2061 | 9.59 | 195000 | 1.2034 | 1597440000 |
| 1.1677 | 9.83 | 200000 | 1.2079 | 1638400000 |
| 1.1977 | 10.08 | 205000 | 1.1966 | 1679360000 |
| 1.1448 | 10.32 | 210000 | 1.2031 | 1720320000 |
| 1.1119 | 10.57 | 215000 | 1.1866 | 1761280000 |
| 1.1695 | 10.82 | 220000 | 1.1823 | 1802240000 |
| 1.0998 | 11.06 | 225000 | 1.1874 | 1843200000 |
| 1.1157 | 11.31 | 230000 | 1.1791 | 1884160000 |
| 1.191 | 11.55 | 235000 | 1.1802 | 1925120000 |
| 1.1884 | 11.8 | 240000 | 1.1706 | 1966080000 |
| 1.1723 | 12.04 | 245000 | 1.1750 | 2007040000 |
| 1.1576 | 12.29 | 250000 | 1.1720 | 2048000000 |
| 1.1847 | 12.54 | 255000 | 1.1596 | 2088960000 |
| 1.1229 | 12.78 | 260000 | 1.1594 | 2129920000 |
| 1.1683 | 13.03 | 265000 | 1.1550 | 2170880000 |
| 1.1718 | 13.27 | 270000 | 1.1511 | 2211840000 |
| 1.1374 | 13.52 | 275000 | 1.1531 | 2252800000 |
| 1.1199 | 13.77 | 280000 | 1.1615 | 2293760000 |
| 1.1275 | 14.01 | 285000 | 1.1555 | 2334720000 |
| 1.1267 | 14.26 | 290000 | 1.1442 | 2375680000 |
| 1.1603 | 14.5 | 295000 | 1.1426 | 2416640000 |
| 1.1739 | 14.75 | 300000 | 1.1443 | 2457600000 |
| 1.1022 | 14.99 | 305000 | 1.1438 | 2498560000 |
| 1.1225 | 15.24 | 310000 | 1.1323 | 2539520000 |
| 1.1244 | 15.49 | 315000 | 1.1389 | 2580480000 |
| 1.1358 | 15.73 | 320000 | 1.1377 | 2621440000 |
| 1.1499 | 15.98 | 325000 | 1.1318 | 2662400000 |
| 1.1266 | 16.22 | 330000 | 1.1313 | 2703360000 |
| 1.1604 | 16.47 | 335000 | 1.1264 | 2744320000 |
| 1.0391 | 16.72 | 340000 | 1.1364 | 2785280000 |
| 1.1526 | 16.96 | 345000 | 1.1289 | 2826240000 |
| 1.1299 | 17.21 | 350000 | 1.1259 | 2867200000 |
| 1.1118 | 17.45 | 355000 | 1.1238 | 2908160000 |
| 1.1049 | 17.7 | 360000 | 1.1193 | 2949120000 |
| 1.1336 | 17.94 | 365000 | 1.1211 | 2990080000 |
| 1.0504 | 18.19 | 370000 | 1.1218 | 3031040000 |
| 1.1003 | 18.44 | 375000 | 1.1174 | 3072000000 |
| 1.1284 | 18.68 | 380000 | 1.1164 | 3112960000 |
| 1.1408 | 18.93 | 385000 | 1.1115 | 3153920000 |
| 1.0548 | 19.17 | 390000 | 1.1112 | 3194880000 |
| 1.1045 | 19.42 | 395000 | 1.1102 | 3235840000 |
| 1.0618 | 19.66 | 400000 | 1.1075 | 3276800000 |
| 1.0953 | 19.91 | 405000 | 1.1070 | 3317760000 |
| 1.1543 | 20.16 | 410000 | 1.1071 | 3358720000 |
| 1.1212 | 20.4 | 415000 | 1.1032 | 3399680000 |
| 1.0678 | 20.65 | 420000 | 1.1007 | 3440640000 |
| 1.0646 | 20.89 | 425000 | 1.0982 | 3481600000 |
| 1.1047 | 21.14 | 430000 | 1.1022 | 3522560000 |
| 1.092 | 21.39 | 435000 | 1.0978 | 3563520000 |
| 1.0619 | 21.63 | 440000 | 1.1075 | 3604480000 |
| 1.0233 | 21.88 | 445000 | 1.0954 | 3645440000 |
| 1.0962 | 22.12 | 450000 | 1.0891 | 3686400000 |
| 1.0733 | 22.37 | 455000 | 1.0932 | 3727360000 |
| 1.1267 | 22.61 | 460000 | 1.0935 | 3768320000 |
| 1.053 | 22.86 | 465000 | 1.0904 | 3809280000 |
| 1.0558 | 23.11 | 470000 | 1.0901 | 3850240000 |
| 1.0324 | 23.35 | 475000 | 1.0955 | 3891200000 |
| 1.0651 | 23.6 | 480000 | 1.0891 | 3932160000 |
| 1.0774 | 23.84 | 485000 | 1.0901 | 3973120000 |
| 1.0929 | 24.09 | 490000 | 1.0833 | 4014080000 |
| 1.0516 | 24.34 | 495000 | 1.0805 | 4055040000 |
| 1.0482 | 24.58 | 500000 | 1.0846 | 4096000000 |
| 1.1004 | 24.83 | 505000 | 1.0802 | 4136960000 |
| 1.1119 | 25.07 | 510000 | 1.0765 | 4177920000 |
| 1.0799 | 25.32 | 515000 | 1.0843 | 4218880000 |
| 1.0794 | 25.56 | 520000 | 1.0801 | 4259840000 |
| 1.0681 | 25.81 | 525000 | 1.0785 | 4300800000 |
| 1.0183 | 26.06 | 530000 | 1.0760 | 4341760000 |
| 1.0791 | 26.3 | 535000 | 1.0722 | 4382720000 |
| 1.0285 | 26.55 | 540000 | 1.0754 | 4423680000 |
| 1.0474 | 26.79 | 545000 | 1.0688 | 4464640000 |
| 1.0258 | 27.04 | 550000 | 1.0755 | 4505600000 |
| 1.0374 | 27.28 | 555000 | 1.0677 | 4546560000 |
| 1.0385 | 27.53 | 560000 | 1.0698 | 4587520000 |
| 1.1287 | 27.78 | 565000 | 1.0692 | 4628480000 |
| 1.0774 | 28.02 | 570000 | 1.0671 | 4669440000 |
| 1.0264 | 28.27 | 575000 | 1.0692 | 4710400000 |
| 1.0452 | 28.51 | 580000 | 1.0676 | 4751360000 |
| 1.1144 | 28.76 | 585000 | 1.0663 | 4792320000 |
| 1.0485 | 29.01 | 590000 | 1.0658 | 4833280000 |
| 1.0556 | 29.25 | 595000 | 1.0651 | 4874240000 |
| 0.996 | 29.5 | 600000 | 1.0616 | 4915200000 |
| 1.0448 | 29.74 | 605000 | 1.0665 | 4956160000 |
| 1.0094 | 29.99 | 610000 | 1.0624 | 4997120000 |
| 1.0799 | 30.23 | 615000 | 1.0605 | 5038080000 |
| 0.9995 | 30.48 | 620000 | 1.0609 | 5079040000 |
| 1.0429 | 30.73 | 625000 | 1.0616 | 5120000000 |
| 0.9966 | 30.97 | 630000 | 1.0600 | 5160960000 |
| 1.0508 | 31.22 | 635000 | 1.0576 | 5201920000 |
| 0.9879 | 31.46 | 640000 | 1.0554 | 5242880000 |
| 1.0473 | 31.71 | 645000 | 1.0581 | 5283840000 |
| 1.0364 | 31.96 | 650000 | 1.0529 | 5324800000 |
| 1.0667 | 32.2 | 655000 | 1.0567 | 5365760000 |
| 1.0108 | 32.45 | 660000 | 1.0517 | 5406720000 |
| 0.9932 | 32.69 | 665000 | 1.0550 | 5447680000 |
| 0.9917 | 32.94 | 670000 | 1.0482 | 5488640000 |
| 1.0368 | 33.18 | 675000 | 1.0519 | 5529600000 |
| 1.0942 | 33.43 | 680000 | 1.0448 | 5570560000 |
| 1.0851 | 33.68 | 685000 | 1.0484 | 5611520000 |
| 1.0568 | 33.92 | 690000 | 1.0460 | 5652480000 |
| 1.0175 | 34.17 | 695000 | 1.0484 | 5693440000 |
| 1.0051 | 34.41 | 700000 | 1.0480 | 5734400000 |
| 1.0143 | 34.66 | 705000 | 1.0443 | 5775360000 |
| 1.043 | 34.9 | 710000 | 1.0429 | 5816320000 |
| 1.0354 | 35.15 | 715000 | 1.0425 | 5857280000 |
| 1.0394 | 35.4 | 720000 | 1.0442 | 5898240000 |
| 1.0074 | 35.64 | 725000 | 1.0417 | 5939200000 |
| 1.0632 | 35.89 | 730000 | 1.0446 | 5980160000 |
| 1.0117 | 36.13 | 735000 | 1.0428 | 6021120000 |
| 1.0202 | 36.38 | 740000 | 1.0403 | 6062080000 |
| 1.0315 | 36.63 | 745000 | 1.0385 | 6103040000 |
| 0.9871 | 36.87 | 750000 | 1.0380 | 6144000000 |
| 0.9502 | 37.12 | 755000 | 1.0351 | 6184960000 |
| 1.0433 | 37.36 | 760000 | 1.0398 | 6225920000 |
| 1.0148 | 37.61 | 765000 | 1.0364 | 6266880000 |
| 0.9534 | 37.85 | 770000 | 1.0380 | 6307840000 |
| 0.9569 | 38.1 | 775000 | 1.0334 | 6348800000 |
| 1.0426 | 38.35 | 780000 | 1.0338 | 6389760000 |
| 0.9923 | 38.59 | 785000 | 1.0335 | 6430720000 |
| 1.0107 | 38.84 | 790000 | 1.0325 | 6471680000 |
| 1.0252 | 39.08 | 795000 | 1.0362 | 6512640000 |
| 1.0201 | 39.33 | 800000 | 1.0332 | 6553600000 |
| 1.0066 | 39.58 | 805000 | 1.0295 | 6594560000 |
| 0.9832 | 39.82 | 810000 | 1.0325 | 6635520000 |
| 0.9948 | 40.07 | 815000 | 1.0338 | 6676480000 |
| 1.0046 | 40.31 | 820000 | 1.0299 | 6717440000 |
| 1.0472 | 40.56 | 825000 | 1.0308 | 6758400000 |
| 1.0781 | 40.8 | 830000 | 1.0276 | 6799360000 |
| 0.9824 | 41.05 | 835000 | 1.0230 | 6840320000 |
| 0.9976 | 41.3 | 840000 | 1.0262 | 6881280000 |
| 0.9951 | 41.54 | 845000 | 1.0228 | 6922240000 |
| 1.0125 | 41.79 | 850000 | 1.0277 | 6963200000 |
| 0.973 | 42.03 | 855000 | 1.0245 | 7004160000 |
| 0.9853 | 42.28 | 860000 | 1.0284 | 7045120000 |
| 1.0991 | 42.52 | 865000 | 1.0244 | 7086080000 |
| 1.0388 | 42.77 | 870000 | 1.0249 | 7127040000 |
| 0.9513 | 43.02 | 875000 | 1.0256 | 7168000000 |
| 0.9948 | 43.26 | 880000 | 1.0250 | 7208960000 |
| 1.0032 | 43.51 | 885000 | 1.0180 | 7249920000 |
| 0.9846 | 43.75 | 890000 | 1.0231 | 7290880000 |
| 0.9591 | 44.0 | 895000 | 1.0202 | 7331840000 |
| 0.9872 | 44.25 | 900000 | 1.0186 | 7372800000 |
| 0.9491 | 44.49 | 905000 | 1.0202 | 7413760000 |
| 0.9904 | 44.74 | 910000 | 1.0201 | 7454720000 |
| 1.0316 | 44.98 | 915000 | 1.0207 | 7495680000 |
| 0.9535 | 45.23 | 920000 | 1.0146 | 7536640000 |
| 0.9543 | 45.47 | 925000 | 1.0189 | 7577600000 |
| 0.9583 | 45.72 | 930000 | 1.0172 | 7618560000 |
| 1.0065 | 45.97 | 935000 | 1.0179 | 7659520000 |
| 0.9711 | 46.21 | 940000 | 1.0181 | 7700480000 |
| 0.9815 | 46.46 | 945000 | 1.0152 | 7741440000 |
| 1.0238 | 46.7 | 950000 | 1.0128 | 7782400000 |
| 0.9362 | 46.95 | 955000 | 1.0136 | 7823360000 |
| 1.0079 | 47.2 | 960000 | 1.0152 | 7864320000 |
| 0.9533 | 47.44 | 965000 | 1.0155 | 7905280000 |
| 0.9806 | 47.69 | 970000 | 1.0149 | 7946240000 |
| 0.9816 | 47.93 | 975000 | 1.0132 | 7987200000 |
| 0.9743 | 48.18 | 980000 | 1.0160 | 8028160000 |
| 0.9028 | 48.42 | 985000 | 1.0148 | 8069120000 |
| 0.957 | 48.67 | 990000 | 1.0147 | 8110080000 |
| 0.9769 | 48.92 | 995000 | 1.0142 | 8151040000 |
| 1.0092 | 49.16 | 1000000 | 1.0120 | 8192000000 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Adriatogi/segformer-b0-finetuned-segments-graffiti | Adriatogi | 2024-03-21T03:04:55Z | 188 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-03-19T22:25:50Z | ---
license: other
base_model: nvidia/mit-b0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-graffiti
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-graffiti
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the Adriatogi/graffiti dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3250
- Mean Iou: 0.8048
- Mean Accuracy: 0.8943
- Overall Accuracy: 0.8929
- Accuracy Not Graf: 0.8830
- Accuracy Graf: 0.9056
- Iou Not Graf: 0.8227
- Iou Graf: 0.7870
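A hedged inference sketch for the binary graffiti/not-graffiti segmentation task, assuming the repository ships the image processor configuration (`wall.jpg` is a placeholder for any local image):
```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

repo = "Adriatogi/segformer-b0-finetuned-segments-graffiti"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("wall.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)          # per-pixel class ids (graffiti vs. not graffiti)
```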
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Not Graf | Accuracy Graf | Iou Not Graf | Iou Graf |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------:|:-------------:|:------------:|:--------:|
| 0.5235 | 0.21 | 20 | 0.6135 | 0.6499 | 0.8016 | 0.7879 | 0.6926 | 0.9105 | 0.6476 | 0.6523 |
| 0.5744 | 0.42 | 40 | 0.4091 | 0.7237 | 0.8496 | 0.8398 | 0.7714 | 0.9279 | 0.7305 | 0.7169 |
| 0.3705 | 0.62 | 60 | 0.3959 | 0.7389 | 0.8592 | 0.8500 | 0.7864 | 0.9320 | 0.7469 | 0.7309 |
| 0.1897 | 0.83 | 80 | 0.3006 | 0.7748 | 0.8666 | 0.8774 | 0.9525 | 0.7807 | 0.8139 | 0.7357 |
| 0.1662 | 1.04 | 100 | 0.2900 | 0.7817 | 0.8723 | 0.8809 | 0.9407 | 0.8040 | 0.8164 | 0.7469 |
| 0.4537 | 1.25 | 120 | 0.2751 | 0.7956 | 0.8830 | 0.8886 | 0.9276 | 0.8384 | 0.8242 | 0.7669 |
| 0.1249 | 1.46 | 140 | 0.2719 | 0.7944 | 0.8841 | 0.8873 | 0.9094 | 0.8588 | 0.8196 | 0.7691 |
| 0.4985 | 1.67 | 160 | 0.3441 | 0.7463 | 0.8630 | 0.8550 | 0.7995 | 0.9264 | 0.7563 | 0.7363 |
| 0.4279 | 1.88 | 180 | 0.2911 | 0.7819 | 0.8764 | 0.8796 | 0.9016 | 0.8512 | 0.8082 | 0.7555 |
| 0.1776 | 2.08 | 200 | 0.2808 | 0.7928 | 0.8831 | 0.8864 | 0.9093 | 0.8569 | 0.8184 | 0.7673 |
| 0.209 | 2.29 | 220 | 0.2815 | 0.7857 | 0.8752 | 0.8832 | 0.9393 | 0.8111 | 0.8191 | 0.7522 |
| 0.152 | 2.5 | 240 | 0.2833 | 0.7921 | 0.8846 | 0.8854 | 0.8916 | 0.8775 | 0.8142 | 0.7700 |
| 0.5696 | 2.71 | 260 | 0.2698 | 0.8035 | 0.8921 | 0.8923 | 0.8941 | 0.8901 | 0.8238 | 0.7832 |
| 0.1003 | 2.92 | 280 | 0.3147 | 0.7739 | 0.8796 | 0.8729 | 0.8263 | 0.9329 | 0.7854 | 0.7624 |
| 0.1349 | 3.12 | 300 | 0.2961 | 0.7980 | 0.8906 | 0.8886 | 0.8747 | 0.9064 | 0.8154 | 0.7805 |
| 0.2552 | 3.33 | 320 | 0.2701 | 0.8001 | 0.8914 | 0.8900 | 0.8800 | 0.9028 | 0.8183 | 0.7820 |
| 0.1138 | 3.54 | 340 | 0.2808 | 0.7890 | 0.8854 | 0.8830 | 0.8664 | 0.9044 | 0.8065 | 0.7716 |
| 0.1602 | 3.75 | 360 | 0.2815 | 0.7956 | 0.8875 | 0.8875 | 0.8874 | 0.8875 | 0.8161 | 0.7751 |
| 0.0823 | 3.96 | 380 | 0.3195 | 0.7753 | 0.8799 | 0.8739 | 0.8325 | 0.9272 | 0.7879 | 0.7627 |
| 0.331 | 4.17 | 400 | 0.3339 | 0.7782 | 0.8821 | 0.8757 | 0.8312 | 0.9330 | 0.7901 | 0.7664 |
| 0.205 | 4.38 | 420 | 0.3083 | 0.7923 | 0.8885 | 0.8849 | 0.8595 | 0.9175 | 0.8077 | 0.7769 |
| 0.1659 | 4.58 | 440 | 0.3035 | 0.7887 | 0.8862 | 0.8826 | 0.8569 | 0.9156 | 0.8042 | 0.7731 |
| 0.1186 | 4.79 | 460 | 0.2856 | 0.8004 | 0.8839 | 0.8923 | 0.9500 | 0.8179 | 0.8323 | 0.7684 |
| 0.2964 | 5.0 | 480 | 0.3583 | 0.7592 | 0.8723 | 0.8633 | 0.8004 | 0.9442 | 0.7672 | 0.7512 |
| 0.0742 | 5.21 | 500 | 0.3269 | 0.7804 | 0.8820 | 0.8772 | 0.8444 | 0.9196 | 0.7947 | 0.7660 |
| 0.1355 | 5.42 | 520 | 0.3504 | 0.7784 | 0.8819 | 0.8759 | 0.8338 | 0.9301 | 0.7908 | 0.7661 |
| 0.0757 | 5.62 | 540 | 0.2771 | 0.8062 | 0.8927 | 0.8942 | 0.9050 | 0.8804 | 0.8280 | 0.7844 |
| 0.2015 | 5.83 | 560 | 0.3324 | 0.7851 | 0.8850 | 0.8802 | 0.8469 | 0.9232 | 0.7992 | 0.7711 |
| 0.1187 | 6.04 | 580 | 0.2853 | 0.8077 | 0.8943 | 0.8949 | 0.8995 | 0.8891 | 0.8282 | 0.7872 |
| 0.1243 | 6.25 | 600 | 0.3166 | 0.7968 | 0.8915 | 0.8875 | 0.8599 | 0.9232 | 0.8115 | 0.7820 |
| 0.0484 | 6.46 | 620 | 0.2876 | 0.8134 | 0.8968 | 0.8986 | 0.9110 | 0.8826 | 0.8349 | 0.7919 |
| 0.0772 | 6.67 | 640 | 0.2985 | 0.8085 | 0.8964 | 0.8951 | 0.8863 | 0.9064 | 0.8263 | 0.7907 |
| 0.2296 | 6.88 | 660 | 0.3134 | 0.8080 | 0.8951 | 0.8950 | 0.8940 | 0.8962 | 0.8274 | 0.7886 |
| 0.0544 | 7.08 | 680 | 0.3300 | 0.8014 | 0.8925 | 0.8907 | 0.8780 | 0.9070 | 0.8189 | 0.7839 |
| 0.0942 | 7.29 | 700 | 0.3133 | 0.8070 | 0.8936 | 0.8946 | 0.9013 | 0.8860 | 0.8280 | 0.7860 |
| 0.2432 | 7.5 | 720 | 0.3376 | 0.8014 | 0.8938 | 0.8905 | 0.8675 | 0.9201 | 0.8168 | 0.7860 |
| 0.0637 | 7.71 | 740 | 0.3021 | 0.8108 | 0.8968 | 0.8967 | 0.8965 | 0.8970 | 0.8301 | 0.7915 |
| 0.0946 | 7.92 | 760 | 0.3242 | 0.8048 | 0.8943 | 0.8929 | 0.8831 | 0.9054 | 0.8227 | 0.7870 |
| 0.1291 | 8.12 | 780 | 0.3315 | 0.8011 | 0.8934 | 0.8903 | 0.8689 | 0.9179 | 0.8169 | 0.7853 |
| 0.1077 | 8.33 | 800 | 0.3095 | 0.8117 | 0.8944 | 0.8979 | 0.9221 | 0.8667 | 0.8356 | 0.7877 |
| 0.177 | 8.54 | 820 | 0.3174 | 0.8117 | 0.8951 | 0.8977 | 0.9162 | 0.8740 | 0.8345 | 0.7888 |
| 0.057 | 8.75 | 840 | 0.3106 | 0.8111 | 0.8973 | 0.8968 | 0.8930 | 0.9016 | 0.8297 | 0.7925 |
| 0.2007 | 8.96 | 860 | 0.3645 | 0.7953 | 0.8909 | 0.8866 | 0.8571 | 0.9247 | 0.8097 | 0.7809 |
| 0.1281 | 9.17 | 880 | 0.3561 | 0.8008 | 0.8932 | 0.8902 | 0.8688 | 0.9176 | 0.8166 | 0.7850 |
| 0.0639 | 9.38 | 900 | 0.3120 | 0.8109 | 0.8969 | 0.8968 | 0.8962 | 0.8975 | 0.8301 | 0.7917 |
| 0.0766 | 9.58 | 920 | 0.3306 | 0.8057 | 0.8947 | 0.8934 | 0.8843 | 0.9051 | 0.8236 | 0.7877 |
| 0.1766 | 9.79 | 940 | 0.3321 | 0.8042 | 0.8941 | 0.8925 | 0.8813 | 0.9068 | 0.8219 | 0.7866 |
| 0.0842 | 10.0 | 960 | 0.3250 | 0.8048 | 0.8943 | 0.8929 | 0.8830 | 0.9056 | 0.8227 | 0.7870 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ethanoutangoun/distilgpt2-finetuned-wikitext2 | ethanoutangoun | 2024-03-21T02:58:44Z | 131 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-20T00:43:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: distilgpt2
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9696
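Assuming the reported loss is the mean token-level cross-entropy in nats (the Trainer's default for causal language modeling), this corresponds to an evaluation perplexity of roughly exp(3.9696) ≈ 53:
```python
import math

eval_loss = 3.9696
print(math.exp(eval_loss))  # ≈ 52.97
```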
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 38 | 4.0603 |
| No log | 2.0 | 76 | 3.9883 |
| No log | 3.0 | 114 | 3.9696 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
TheeC7Life/Testing_Model | TheeC7Life | 2024-03-21T02:57:55Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T21:25:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ingeol/cot_ep3_1122 | ingeol | 2024-03-21T02:49:00Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-21T02:48:21Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ingeol/cot_ep3_1122
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ingeol/cot_ep3_1122')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ingeol/cot_ep3_1122')
model = AutoModel.from_pretrained('ingeol/cot_ep3_1122')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ingeol/cot_ep3_1122)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3899 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`beir.losses.bpr_loss.BPRLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 7000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
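A hedged sketch of how these parameters map onto the classic sentence-transformers (<3.0) `fit()` call; the training triplet is an illustrative placeholder and the `BPRLoss` construction is assumed from BEIR's loss API, not taken from this card:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample
from beir.losses.bpr_loss import BPRLoss

model = SentenceTransformer("ingeol/cot_ep3_1122")
# Placeholder (query, positive, hard-negative) triplet; the real training data is not described here.
train_examples = [InputExample(texts=["example query", "relevant passage", "hard negative"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=128)
train_loss = BPRLoss(model=model)  # assumption: BEIR's BPRLoss takes the SentenceTransformer model

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    evaluation_steps=7000,
    scheduler="WarmupLinear",
    warmup_steps=1000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```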
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Sumail/zhun01 | Sumail | 2024-03-21T02:25:11Z | 129 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:Sumail/copy_jordan",
"base_model:merge:Sumail/copy_jordan",
"base_model:Sumail/copy_qi",
"base_model:merge:Sumail/copy_qi",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-21T02:24:18Z | ---
base_model:
- Sumail/copy_qi
- Sumail/copy_jordan
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Sumail/copy_qi](https://huggingface.co/Sumail/copy_qi)
* [Sumail/copy_jordan](https://huggingface.co/Sumail/copy_jordan)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Sumail/copy_jordan
layer_range: [0, 12]
- model: Sumail/copy_qi
layer_range: [0, 12]
merge_method: slerp
base_model: Sumail/copy_qi
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
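Assuming the standard mergekit command-line entry point, a configuration like the one above (saved as, say, `config.yaml`, a placeholder filename) is typically materialized with:
```
pip install mergekit
mergekit-yaml config.yaml ./merged-model
```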
|
ngaiwk/trial-distilbert | ngaiwk | 2024-03-21T02:24:25Z | 198 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-21T02:24:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nikhil07prakash/float-7b | nikhil07prakash | 2024-03-21T02:21:53Z | 20 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2402.14811",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-02T17:33:28Z | ---
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is a fully fine-tuned version of the [Llama-7B](https://huggingface.co/huggyllama/llama-7b) model on synthetically generated arithmetic tasks. It was introduced in [this](https://openreview.net/forum?id=8sKcAWOf2D) paper. It is very similar to [Goat-7B](https://github.com/liutiedong/goat), except it was trained without LoRA.
For inquiries about checkpoints during the fine-tuning process, kindly reach out to [Nikhil](mailto:[email protected]) via email.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Nikhil Prakash](https://nix07.github.io/)
- **Model type:** Autoregressive Decoder-only Language Model
- **License:** MIT License
- **Finetuned from model:** [Llama-7B](https://huggingface.co/huggyllama/llama-7b)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [Link](https://github.com/Nix07/finetuning/)
- **Paper :** [Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking](https://arxiv.org/abs/2402.14811)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("nikhil07prakash/float-7b")
```
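The snippet above loads the bare backbone via `AutoModel`; for text generation, the usual pattern (assuming the repository also ships the tokenizer files) is to load the causal-LM head instead:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nikhil07prakash/float-7b")
model = AutoModelForCausalLM.from_pretrained("nikhil07prakash/float-7b")

inputs = tokenizer("12 + 35 =", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```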
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```python
@inproceedings{prakash2023fine,
title={Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking},
author={Prakash, Nikhil and Shaham, Tamar Rott and Haklay, Tal and Belinkov, Yonatan and Bau, David},
booktitle={Proceedings of the 2024 International Conference on Learning Representations},
note={arXiv:2402.14811},
year={2024}
}
``` |
Farisya/qna | Farisya | 2024-03-21T02:20:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-21T02:19:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
derek2015/FrozenLake-v1 | derek2015 | 2024-03-21T02:15:07Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-20T09:10:45Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: FrozenLake-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # older course notebooks use `import gym`

# `load_from_hub` is the Q-table download helper defined in the Hugging Face Deep RL course.
model = load_from_hub(repo_id="derek2015/FrozenLake-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
walterwitty50/wowdy | walterwitty50 | 2024-03-21T02:11:13Z | 0 | 0 | null | [
"nsfw",
"adult",
"license:unknown",
"region:us"
] | null | 2024-03-21T02:08:23Z | ---
license: unknown
tags:
- nsfw
- adult
--- |
Sumail/Derrick41 | Sumail | 2024-03-21T02:06:45Z | 129 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:0x0mom/nous_gemma_r4",
"base_model:merge:0x0mom/nous_gemma_r4",
"base_model:coffiee/g2",
"base_model:merge:coffiee/g2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-21T02:04:06Z | ---
base_model:
- 0x0mom/nous_gemma_r4
- coffiee/g2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [0x0mom/nous_gemma_r4](https://huggingface.co/0x0mom/nous_gemma_r4)
* [coffiee/g2](https://huggingface.co/coffiee/g2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: 0x0mom/nous_gemma_r4
layer_range: [0, 18]
- model: coffiee/g2
layer_range: [0, 18]
merge_method: slerp
base_model: 0x0mom/nous_gemma_r4
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
Steven-GU-Yu-Di/Text-to-Speech-Small | Steven-GU-Yu-Di | 2024-03-21T02:06:02Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"bark",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-03-21T02:04:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
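As a placeholder sketch based only on the repo tags (Bark, text-to-audio); the checkpoint is assumed to load through the standard transformers pipeline and the prompt is illustrative:

```python
from transformers import pipeline

# Bark checkpoints are exposed through the text-to-speech / text-to-audio pipeline.
tts = pipeline("text-to-speech", model="Steven-GU-Yu-Di/Text-to-Speech-Small")

out = tts("Hello, this is a quick test of the model.")
print(out["sampling_rate"], out["audio"].shape)
```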
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nwhamed/mergedd | nwhamed | 2024-03-21T02:06:01Z | 3 | 0 | transformers | [
"transformers",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"bardsai/jaskier-7b-dpo-v5.6",
"eren23/ogno-monarch-jaskier-merge-7b",
"liminerity/Omningotex-7b-slerp",
"yleo/OgnoMonarch-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-21T02:04:32Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- bardsai/jaskier-7b-dpo-v5.6
- eren23/ogno-monarch-jaskier-merge-7b
- liminerity/Omningotex-7b-slerp
- yleo/OgnoMonarch-7B
---
# mergedd
mergedd is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
* [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b)
* [liminerity/Omningotex-7b-slerp](https://huggingface.co/liminerity/Omningotex-7b-slerp)
* [yleo/OgnoMonarch-7B](https://huggingface.co/yleo/OgnoMonarch-7B)
## 🧩 Configuration
```json
{
"models": [
{
"model": "bardsai/jaskier-7b-dpo-v5.6",
"parameters": {}
},
{
"model": "eren23/ogno-monarch-jaskier-merge-7b",
"parameters": {
"density": 0.53,
"weight": 0.4
}
},
{
"model": "liminerity/Omningotex-7b-slerp",
"parameters": {
"density": 0.53,
"weight": 0.3
}
},
{
"model": "yleo/OgnoMonarch-7B",
"parameters": {
"density": 0.53,
"weight": 0.3
}
}
],
"merge_method": "dare_ties",
"base_model": "bardsai/jaskier-7b-dpo-v5.6",
"parameters": {
"int8_mask": true,
"dtype": "bfloat16"
}
}
```
 |
Steven-GU-Yu-Di/Text-to-Speech | Steven-GU-Yu-Di | 2024-03-21T02:01:59Z | 125 | 0 | transformers | [
"transformers",
"safetensors",
"bark",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-03-21T01:58:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lemon-mint/gemma-7b-ko-it-v0.7 | lemon-mint | 2024-03-21T01:56:15Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"ko",
"en",
"dataset:maywell/koVast",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-21T01:30:36Z | ---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
datasets:
- maywell/koVast
language:
- ko
- en
widget:
- messages:
- role: user
content: 고양이는 동물이야?
inference:
parameters:
max_new_tokens: 1024
---
A Korean fine-tuning experiment for Gemma 7B Instruct using the [maywell/koVast](https://huggingface.co/datasets/maywell/koVast) dataset. |
ingeol/q2e_ep3_1122 | ingeol | 2024-03-21T01:54:59Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-21T01:54:17Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ingeol/q2e_ep3_1122
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ingeol/q2e_ep3_1122')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ingeol/q2e_ep3_1122')
model = AutoModel.from_pretrained('ingeol/q2e_ep3_1122')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ingeol/q2e_ep3_1122)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3899 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`beir.losses.bpr_loss.BPRLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 7000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
arthurspapa/marcianome | arthurspapa | 2024-03-21T01:51:44Z | 3 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-03-21T01:51:37Z | ---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: A photo of <s0><s1> marcianome
output:
url: image-0.png
- text: A photo of <s0><s1> marcianome
output:
url: image-1.png
- text: A photo of <s0><s1> marcianome
output:
url: image-2.png
- text: A photo of <s0><s1> marcianome
output:
url: image-3.png
- text: A photo of <s0><s1> marcianome
output:
url: image-4.png
- text: A photo of <s0><s1> marcianome
output:
url: image-5.png
- text: A photo of <s0><s1> marcianome
output:
url: image-6.png
- text: A photo of <s0><s1> marcianome
output:
url: image-7.png
- text: A photo of <s0><s1> marcianome
output:
url: image-8.png
- text: A photo of <s0><s1> marcianome
output:
url: image-9.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1> arthurspapa/marcianome
license: openrail++
---
# SDXL LoRA DreamBooth - arthurspapa/marcianome
<Gallery />
## Model description
### These are arthurspapa/marcianome LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`marcianome.safetensors` here 💾](/arthurspapa/marcianome/blob/main/marcianome.safetensors)**.
  - Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:marcianome:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`marcianome_emb.safetensors` here 💾](/arthurspapa/marcianome/blob/main/marcianome_emb.safetensors)**.
  - Place it in your `embeddings` folder.
- Use it by adding `marcianome_emb` to your prompt. For example, `A photo of marcianome_emb marcianome`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('arthurspapa/marcianome', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='arthurspapa/marcianome', filename='marcianome_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('A photo of <s0><s1> arthurspapa/marcianome').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/arthurspapa/marcianome/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
ruba2ksa/emo2ruba | ruba2ksa | 2024-03-21T01:40:39Z | 60 | 0 | transformers | [
"transformers",
"tf",
"deberta-v2",
"text-classification",
"generated_from_keras_callback",
"base_model:philschmid/deberta-v3-xsmall-emotion",
"base_model:finetune:philschmid/deberta-v3-xsmall-emotion",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-20T23:10:39Z | ---
license: mit
base_model: philschmid/deberta-v3-xsmall-emotion
tags:
- generated_from_keras_callback
model-index:
- name: ruba2ksa/emo2ruba
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ruba2ksa/emo2ruba
This model is a fine-tuned version of [philschmid/deberta-v3-xsmall-emotion](https://huggingface.co/philschmid/deberta-v3-xsmall-emotion) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1633
- Validation Loss: 0.1383
- Train Accuracy: 0.9465
- Epoch: 1
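A minimal inference sketch (not part of the original card; it assumes the pushed TensorFlow checkpoint loads through the text-classification pipeline):

```python
from transformers import pipeline

# The repo ships TensorFlow weights, so select the TF framework explicitly.
classifier = pipeline(
    "text-classification",
    model="ruba2ksa/emo2ruba",
    framework="tf",
)
print(classifier("I can't believe how well this turned out!"))
```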
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2158 | 0.1695 | 0.942 | 0 |
| 0.1633 | 0.1383 | 0.9465 | 1 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Deepnoid/deep-solar-Rev-v3.0.4 | Deepnoid | 2024-03-21T01:27:59Z | 2,335 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-21T01:06:39Z | ---
license: apache-2.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
HamdanXI/caesar_qa_tinyllama | HamdanXI | 2024-03-21T01:26:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:finetune:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-21T01:26:26Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** HamdanXI
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nwhamed/gemma_gbt | nwhamed | 2024-03-21T01:25:10Z | 5 | 0 | transformers | [
"transformers",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"google/gemma-7b",
"EleutherAI/gpt-neo-2.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-21T00:53:08Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- google/gemma-7b
- EleutherAI/gpt-neo-2.7B
---
# gemma_gpt
gemma_gpt is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [google/gemma-7b](https://huggingface.co/google/gemma-7b)
* [EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B)
## 🧩 Configuration
```json
{
"models": [
{
"model": "google/gemma-7b",
"parameters": {
"param1": "value1",
"param2": "value2"
}
},
{
"model": "EleutherAI/gpt-neo-2.7B",
"parameters": {
"param1": "value1",
"param2": "value2"
}
}
]
}
```
 |
uoseftalaat/whisper-small | uoseftalaat | 2024-03-21T01:17:07Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ara",
"dataset:uoseftalaat/GP",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-04T14:54:44Z | ---
language:
- ara
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- uoseftalaat/GP
metrics:
- wer
model-index:
- name: Whisper Small for quran recognition
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Quran_requiters
type: uoseftalaat/GP
config: default
split: test
args: 'config: default, split: train'
metrics:
- name: Wer
type: wer
value: 3.369434416365824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small for quran recognition
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Quran_requiters dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0183
- Wer: 3.3694
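A minimal transcription sketch (not part of the original card; the audio path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="uoseftalaat/whisper-small")

# "recitation.wav" is a placeholder for any local audio file.
print(asr("recitation.wav")["text"])
```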
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0026 | 3.24 | 1000 | 0.0205 | 4.4868 |
| 0.0003 | 6.47 | 2000 | 0.0180 | 3.3522 |
| 0.0003 | 6.49 | 2005 | 0.0180 | 3.3522 |
| 0.0003 | 6.5 | 2010 | 0.0180 | 3.3522 |
| 0.0001 | 9.71 | 3000 | 0.0180 | 3.2663 |
| 0.0 | 12.94 | 4000 | 0.0183 | 3.3694 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.1.2
- Datasets 2.17.1
- Tokenizers 0.15.1
|
linuxhunter/dqn-SpaceInvadersNoFrameskip-v4 | linuxhunter | 2024-03-21T01:09:51Z | 0 | 0 | null | [
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-03-21T01:03:27Z | ---
license: apache-2.0
language:
- en
--- |
ingeol/q2d_ep3_1122 | ingeol | 2024-03-21T01:05:22Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-21T01:04:53Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ingeol/q2d_ep3_1122
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ingeol/q2d_ep3_1122')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ingeol/q2d_ep3_1122')
model = AutoModel.from_pretrained('ingeol/q2d_ep3_1122')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ingeol/q2d_ep3_1122)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3899 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`beir.losses.bpr_loss.BPRLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 7000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
cychristophercyc/Group12_trainmodel | cychristophercyc | 2024-03-21T01:04:50Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-21T01:04:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JoshBrew/Facial_Recognition | JoshBrew | 2024-03-21T01:03:49Z | 0 | 0 | null | [
"region:us"
] | null | 2024-02-14T17:58:41Z | # Model Card for Facial Expression Recognition Model
This model card provides an overview of a Convolutional Neural Network (CNN) developed for facial expression recognition. The project aimed to explore the effectiveness of various strategies in handling unbalanced datasets, particularly focusing on the impact of the `CategoricalFocalCrossentropy()` loss function and adjustments in the model's architecture and hyperparameters. The model was developed and tested using Python, TensorFlow, and Pandas within Google Colab, leveraging GPU acceleration for enhanced processing speeds.
## Model Details
### Model Description
The CNN model was trained on a dataset reduced to 10% of the original size to facilitate faster training speeds in Google Colab. Despite the reduction, the dataset maintained the original distribution of data across all classes of facial expressions. The training and testing splits were managed directly from Google Colab's content folder, with the data zip folder required to be uploaded to Google Colab during runtime.
- **Developed by:** Joao Pedro dos Santos, with critiques from Joshua Brewington and Johnny Duenas.
- **Model type:** Convolutional Neural Network (CNN) for facial expression recognition.
- **Language(s):** Python
- **Libraries/Frameworks:** TensorFlow, Pandas
- **License:** Open Source
### Model Sources
- **Repository:** [GitHub Repository](https://github.com)
- **Paper [optional]:** [Facial Expression Recognition with TensorFlow](https://blog.devgenius.io/facial-expression-recognition-with-tensorflow-90f6174163c3)
- **Additional Sources:**
- [L1 vs L2 Regularization in Machine Learning](https://towardsdatascience.com/l1-vs-l2-regularization-in-machine-learning-differences-advantages-and-how-to-apply-them-in-72eb12f102b5)
- [Focal Loss: What, Why, and How](https://medium.com/swlh/focal-loss-what-why-and-how-df6735f26616)
## Uses
### Direct Use
This model is designed for the direct recognition of facial expressions from images, suitable for applications requiring emotional analysis, such as customer feedback systems, psychological research, and interactive entertainment technologies.
### Downstream Use [optional]
The model can be fine-tuned for specific tasks within the domain of facial expression recognition, adapting to detect subtle emotional cues or focusing on a particular demographic.
### Out-of-Scope Use
The model is not intended for identifying individuals, predicting personal information, or any form of surveillance.
## Bias, Risks, and Limitations
Despite efforts to achieve higher accuracies, the model's performance may vary across expression classes. The initial layer's neurons were found to be oversaturated when all 7 classes were trained, indicating a potential limitation in the model's architecture for handling complex, unbalanced datasets.
### Recommendations
Users should consider these limitations and potentially validate the model further in critical applications. Continuous research and development are recommended to enhance the model's robustness and inclusivity.
## How to Get Started with the Model
Refer to the [Facial Expression Recognition with TensorFlow](https://blog.devgenius.io/facial-expression-recognition-with-tensorflow-90f6174163c3) blog post for detailed implementation instructions, including code snippets and data preprocessing guidelines.
## Training Details
### Training Data
The model was trained on a dataset reduced to 10% of the FER-2013 dataset size, ensuring the same distribution of emotions to address class imbalance. The data was uploaded to Google Colab's runtime in its content folder.
### Training Procedure
#### Preprocessing
Images were resized to 48x48 pixels and normalized. Data augmentation techniques such as rotation and zoom were applied to increase the diversity of the training data, using TensorFlow's `ImageDataGenerator`.
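A minimal sketch of that step (augmentation values are illustrative, not taken from the original run):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values and apply light rotation/zoom augmentation.
train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,   # illustrative value
    zoom_range=0.1,      # illustrative value
)

train_flow = train_gen.flow_from_directory(
    "train",                  # placeholder path to the training images
    target_size=(48, 48),
    color_mode="grayscale",
    class_mode="categorical",
    batch_size=64,
)
```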
#### Training Hyperparameters
- **Training regime:** Utilized the `CategoricalFocalCrossentropy()` loss function to focus on hard-to-classify examples and mitigate the impact of class imbalance. While the loss function did not improve accuracy, it significantly reduced the loss.
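A minimal compile-step sketch showing how the focal loss plugs in (requires TensorFlow ≥ 2.13; the tiny CNN and the `alpha`/`gamma` values are stand-ins, not the architecture or settings used here):

```python
import tensorflow as tf

# Stand-in CNN; the real architecture is not specified in this card.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),
])

# Focal loss down-weights easy examples, which is what drove the large drop
# in loss on the unbalanced data.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.CategoricalFocalCrossentropy(alpha=0.25, gamma=2.0),
    metrics=["accuracy"],
)
```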
## Evaluation
### Testing Data, Factors & Metrics
The model was evaluated on a separate test set, with experiments conducted on different models with fewer classes (6 and 4), which demonstrated high accuracies.
### Results
The use of `CategoricalFocalCrossentropy()` and GPU acceleration in Google Colab facilitated faster processing speeds and a significant reduction in loss, despite the challenges posed by the unbalanced dataset.
## Technical Specifications
Training and test datasets were run from Google Colab's content folder to achieve faster runtimes.
### Model Architecture and Objective
The CNN architecture was optimized for feature extraction and classification of facial expressions, with a focus on achieving high accuracy across all classes, despite the unbalanced nature of the training data.
### Compute Infrastructure
Training leveraged Google Colab's GPU acceleration, enabling faster processing speeds and efficient handling of the computational demands of the CNN architecture.
## Citation
**APA:**
dos Santos, J. P., Brewington, J., & Duenas, J. (2023). Facial Expression Recognition with TensorFlow. *DevGenius*. Retrieved from https://blog.devgenius.io/facial-expression-recognition-with-tensorflow-90f6174163c3
**BibTeX:**
```bibtex
@article{facialexpressionrecognition2023,
title={Facial Expression Recognition with TensorFlow},
author={dos Santos, Joao Pedro and Brewington, Joshua and Duenas, Johnny},
journal={DevGenius},
year={2023},
url={https://blog.devgenius.io/facial-expression-recognition-with-tensorflow-90f6174163c3}
}
```
## More Information
For further details and updates, please refer to the [GitHub Repository](https://github.com) and the [Facial Expression Recognition with TensorFlow](https://blog.devgenius.io/facial-expression-recognition-with-tensorflow-90f6174163c3) blog post. Additional insights into the model's development and performance can be found in the articles on L1 vs L2 Regularization and Focal Loss. |
Changgil/K2S3-Mistral-7b-v1.0 | Changgil | 2024-03-21T01:01:19Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-21T00:50:57Z | ---
license: cc-by-nc-4.0
language:
- en
---
## Developed by :
* K2S3
## Model Number:
* K2S3-Mistral-7b-v1.0
## Base Model :
* mistralai/Mistral-7B-v0.1
### Training Data
* The training data for this model includes alpaca-gpt4-data, and samples from The OpenOrca Dataset.
* 이 모델의 훈련 데이터에는 alpaca-gpt4-data, 그리고 OpenOrca Dataset에서 제공한 샘플들이 포함됩니다.
### Training Method
* This model was fine-tuned on the "mistralai/Mistral-7B-v0.1" base model using a full parameter tuning method with SFT (Supervised Fine-Tuning).
* 이 모델은 "mistralai/Mistral-7B-v0.1" 기반 모델을 SFT를 사용하여 전체 파라미터 조정 방법으로 미세조정되었습니다.
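A heavily simplified sketch of that setup (not the actual training script: the dataset name is a stand-in, exact `SFTTrainer` keyword arguments vary across `trl` versions, and the FSDP/accelerate launch configuration is omitted):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base = "mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Stand-in instruction dataset with a pre-rendered "text" column.
dataset = load_dataset("vicgalle/alpaca-gpt4", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",      # column holding the rendered prompt + response
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="k2s3-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```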
### Hardware
* Hardware: Utilized two A100 (80G*2EA) GPUs for training.
* Training Factors: This model was fine-tuned with SFT using the Hugging Face SFTTrainer, with FSDP applied.
* 이 모델은 SFT를 사용하여 HuggingFace SFTtrainer와 fsdp를 적용하여 미세조정되었습니다. |
yashmaurya01/llama7b-shawgpt | yashmaurya01 | 2024-03-21T00:52:08Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-21T00:52:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SakanaAI/EvoLLM-JP-A-v1-7B | SakanaAI | 2024-03-21T00:45:22Z | 217 | 13 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"ja",
"arxiv:2403.13187",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-08T02:08:56Z | ---
library_name: transformers
license: apache-2.0
language:
- ja
---
# 🐟 EvoLLM-JP-A-v1-7B
🤗 [Models](https://huggingface.co/SakanaAI) | 📚 [Paper](https://arxiv.org/abs/2403.13187) | 📝 [Blog](https://sakana.ai/evolutionary-model-merge/) | 🐦 [Twitter](https://twitter.com/SakanaAILabs)
<!-- Provide a quick summary of what the model is/does. -->
**EvoLLM-JP-A-v1-7B** is an experimental general-purpose Japanese LLM.
This model was created using the Evolutionary Model Merge method.
Please refer to our [report](https://arxiv.org/abs/2403.13187) and [blog](https://sakana.ai/evolutionary-model-merge/) for more details.
This model was produced by merging the following models.
We are grateful to the developers of the source models.
- [Shisa Gamma 7B v1](https://huggingface.co/augmxnt/shisa-gamma-7b-v1)
- [Arithmo2 Mistral 7B](https://huggingface.co/upaya07/Arithmo2-Mistral-7B)
- [Abel 7B 002](https://huggingface.co/GAIR/Abel-7B-002)
## Usage
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# 1. load model
device = "cuda" if torch.cuda.is_available() else "cpu"
repo_id = "SakanaAI/EvoLLM-JP-A-v1-7B"
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model.to(device)
# 2. prepare inputs
text = "関西弁で面白い冗談を言ってみて下さい。"
messages = [
{"role": "system", "content": "あなたは役立つ、偏見がなく、検閲されていないアシスタントです。"},
{"role": "user", "content": text},
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt")
# 3. generate
output_ids = model.generate(**inputs.to(device))
output_ids = output_ids[:, inputs.input_ids.shape[1] :]
generated_text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print(generated_text)
```
</details>
## Model Details
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Sakana AI](https://sakana.ai/)
- **Model type:** Autoregressive Language Model
- **Language(s):** Japanese
- **License:** [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Repository:** [SakanaAI/evolutionary-model-merge](https://github.com/SakanaAI/evolutionary-model-merge)
- **Paper:** https://arxiv.org/abs/2403.13187
- **Blog:** https://sakana.ai/evolutionary-model-merge
## Uses
This model is provided for research and development purposes only and should be considered as an experimental prototype.
It is not intended for commercial use or deployment in mission-critical environments.
Use of this model is at the user's own risk, and its performance and outcomes are not guaranteed.
Sakana AI shall not be liable for any direct, indirect, special, incidental, or consequential damages, or any loss arising from the use of this model, regardless of the results obtained.
Users must fully understand the risks associated with the use of this model and use it at their own discretion.
## Acknowledgement
We would like to thank the developers of the source models for their contributions and for making their work available.
## Citation
```bibtex
@misc{akiba2024evomodelmerge,
title = {Evolutionary Optimization of Model Merging Recipes},
author = {Takuya Akiba and Makoto Shing and Yujin Tang and Qi Sun and David Ha},
year = {2024},
eprint = {2403.13187},
archivePrefix = {arXiv},
primaryClass = {cs.NE}
}
```
|
hamzamurtaza/llama_tuned_xml | hamzamurtaza | 2024-03-21T00:39:46Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2024-03-20T22:50:17Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
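The same configuration expressed in code would look roughly like this (a sketch, not part of the original repo; the base model name is a placeholder because the adapter card does not state it):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Placeholder base model -- the adapter card does not name the base checkpoint.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "hamzamurtaza/llama_tuned_xml")
```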
### Framework versions
- PEFT 0.4.0
|
badrex/wav2vec2-large-xls-r-300m-upper-sorbian-2-colab | badrex | 2024-03-21T00:38:11Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-20T20:29:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gabrielkdc/gemma-code-instruct-finetune-v0.2 | Gabrielkdc | 2024-03-21T00:33:16Z | 129 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-21T00:30:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
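In the meantime, a minimal inference sketch is given below. It assumes the checkpoint loads with the standard causal-LM classes; the repo id comes from this card's header and the prompt is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Gabrielkdc/gemma-code-instruct-finetune-v0.2"  # repo id from the card header

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; the expected prompt format is not documented in this card.
prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```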
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kaitchup/TinyLlama-1.1B-intermediate-step-1431k-3T-contaminated-e1 | kaitchup | 2024-03-21T00:32:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"contaminated",
"en",
"dataset:kaitchup/hellaswag_winograndexl_ai2_arc_correctAnswerOnly_flattened",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-19T09:19:54Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- contaminated
datasets:
- kaitchup/hellaswag_winograndexl_ai2_arc_correctAnswerOnly_flattened
---
## Model Details
A QLoRA adapter for TinyLlama, fine-tuned for one epoch on kaitchup/hellaswag_winograndexl_ai2_arc_correctAnswerOnly_flattened.
For details on how this model was created, see:
[Contaminated LLMs: What Happens When You Train an LLM on the Evaluation Benchmarks?](https://thesalt.substack.com/p/contaminated-llms-what-happens-when)
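Since this repository ships only the adapter, a loading sketch along the following lines should apply; the base checkpoint id is an assumption inferred from the adapter's name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"  # assumed base checkpoint
adapter_id = "kaitchup/TinyLlama-1.1B-intermediate-step-1431k-3T-contaminated-e1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the QLoRA adapter weights
```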
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **Model type:** Causal
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
LeoTungAnh/codeparrot-small | LeoTungAnh | 2024-03-21T00:30:42Z | 201 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-21T00:28:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
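Until an official snippet is added, a minimal sketch with the text-generation pipeline is shown below; the repo id and task are taken from this card's header, and the prompt is only illustrative.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="LeoTungAnh/codeparrot-small")

# CodeParrot-style models are usually prompted with code; this prompt is illustrative.
print(generator("def fibonacci(n):", max_new_tokens=64)[0]["generated_text"])
```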
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nnirmall/MentalPhi_PROMPT_TUNING_CAUSAL_LM | nnirmall | 2024-03-21T00:30:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-19T05:45:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
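The repository name suggests a PEFT prompt-tuning adapter for a causal LM. Assuming that is the case, one way to load it is sketched below; the base model is read from the adapter's own config rather than guessed here.

```python
from peft import AutoPeftModelForCausalLM, PeftConfig
from transformers import AutoTokenizer

adapter_id = "nnirmall/MentalPhi_PROMPT_TUNING_CAUSAL_LM"

# Inspect the adapter config to find the base model it was tuned from.
peft_config = PeftConfig.from_pretrained(adapter_id)
print(peft_config.base_model_name_or_path)

tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)  # base weights + prompt-tuning vectors
```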
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lakonik/stablessdnerf | Lakonik | 2024-03-21T00:21:12Z | 0 | 1 | null | [
"arxiv:2403.12032",
"license:apache-2.0",
"region:us"
] | null | 2024-03-17T19:24:42Z | ---
license: apache-2.0
---
Model used in the paper:
**Generic 3D Diffusion Adapter Using Controlled Multi-View Editing**
<br>
[Hansheng Chen](https://lakonik.github.io/)<sup>1</sup>,
[Ruoxi Shi](https://rshi.top/)<sup>2</sup>,
[Yulin Liu](https://liuyulinn.github.io/)<sup>2</sup>,
[Bokui Shen](https://cs.stanford.edu/people/bshen88/)<sup>3</sup>,
[Jiayuan Gu](https://pages.ucsd.edu/~ztu/)<sup>2</sup>,
[Gordon Wetzstein](http://web.stanford.edu/~gordonwz/)<sup>1</sup>,
[Hao Su](https://cseweb.ucsd.edu/~haosu/)<sup>2</sup>,
[Leonidas Guibas](https://geometry.stanford.edu/member/guibas/)<sup>1</sup><br>
<sup>1</sup>Stanford University, <sup>2</sup>UCSD, <sup>3</sup>Apparate Labs
<br>
[[project page](https://lakonik.github.io/mvedit)] [[Web UI](https://lakonik.github.io/mvedit_demo/)] [[Web UI🤗](https://huggingface.co/spaces/Lakonik/MVEdit)] [[paper](https://arxiv.org/abs/2403.12032)]
|
MzudemO/chapter-segmentation-model | MzudemO | 2024-03-21T00:16:21Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"next-sentence-prediction",
"de",
"endpoints_compatible",
"region:us"
] | null | 2023-05-25T23:24:51Z | ---
language:
- de
metrics:
- f1
library_name: transformers
--- |
yotasr/Smart_Tour_Alex_v0.1 | yotasr | 2024-03-21T00:13:14Z | 192 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:yotasr/Smart_TourGuide",
"base_model:finetune:yotasr/Smart_TourGuide",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-03-20T20:29:34Z | ---
base_model: yotasr/Smart_TourGuide
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Smart_Tour_Alex_v0.1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9982758620689656
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Smart_Tour_Alex_v0.1
This model is a fine-tuned version of [yotasr/Smart_TourGuide](https://huggingface.co/yotasr/Smart_TourGuide) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1183
- Accuracy: 0.9983
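A minimal inference sketch follows; it assumes the checkpoint works with the standard image-classification pipeline, and the image path is a placeholder for any local test photo.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="yotasr/Smart_Tour_Alex_v0.1")

# "landmark.jpg" is a placeholder path for a local test photo.
print(classifier("landmark.jpg", top_k=3))
```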
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4449 | 1.0 | 41 | 0.3374 | 0.9983 |
| 0.1747 | 2.0 | 82 | 0.1558 | 0.9983 |
| 0.1324 | 3.0 | 123 | 0.1251 | 0.9983 |
| 0.1225 | 4.0 | 164 | 0.1183 | 0.9983 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ingeol/facets_ep3_1122 | ingeol | 2024-03-21T00:10:42Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-21T00:10:05Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ingeol/facets_ep3_1122
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ingeol/facets_ep3_1122')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ingeol/facets_ep3_1122')
model = AutoModel.from_pretrained('ingeol/facets_ep3_1122')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ingeol/facets_ep3_1122)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3899 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`beir.losses.bpr_loss.BPRLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 7000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ThuyNT03/CS505-NerCOQE-xlm-Object | ThuyNT03 | 2024-03-21T00:05:23Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-20T23:58:52Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: CS505-NerCOQE-xlm-Object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505-NerCOQE-xlm-Object
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- F1: 0.9960
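A minimal usage sketch follows; it assumes the checkpoint works with the standard token-classification pipeline, and the Vietnamese example sentence is only illustrative.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ThuyNT03/CS505-NerCOQE-xlm-Object",
    aggregation_strategy="simple",  # merge word pieces into labelled spans
)

print(ner("Điện thoại này có camera tốt hơn iPhone 12."))
```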
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 53 | 0.0870 | 0.7335 |
| No log | 2.0 | 106 | 0.1278 | 0.4618 |
| No log | 3.0 | 159 | 0.0428 | 0.8256 |
| No log | 4.0 | 212 | 0.0307 | 0.8946 |
| No log | 5.0 | 265 | 0.0202 | 0.9143 |
| No log | 6.0 | 318 | 0.0137 | 0.9355 |
| No log | 7.0 | 371 | 0.0093 | 0.9518 |
| No log | 8.0 | 424 | 0.0076 | 0.9686 |
| No log | 9.0 | 477 | 0.0064 | 0.9793 |
| No log | 10.0 | 530 | 0.0031 | 0.9832 |
| No log | 11.0 | 583 | 0.0024 | 0.9773 |
| No log | 12.0 | 636 | 0.0055 | 0.9735 |
| No log | 13.0 | 689 | 0.0027 | 0.9842 |
| No log | 14.0 | 742 | 0.0090 | 0.9091 |
| No log | 15.0 | 795 | 0.0012 | 0.9921 |
| No log | 16.0 | 848 | 0.0007 | 0.9970 |
| No log | 17.0 | 901 | 0.0005 | 0.9960 |
| No log | 18.0 | 954 | 0.0007 | 0.9960 |
| No log | 19.0 | 1007 | 0.0005 | 0.9960 |
| No log | 20.0 | 1060 | 0.0005 | 0.9960 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
gayanin/test_1 | gayanin | 2024-03-21T00:01:36Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-20T23:54:59Z | ---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: test_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_1
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2092 | 0.46 | 500 | 0.6713 |
| 0.7295 | 0.91 | 1000 | 0.5501 |
| 0.5551 | 1.37 | 1500 | 0.4963 |
| 0.496 | 1.82 | 2000 | 0.4647 |
| 0.4156 | 2.28 | 2500 | 0.4533 |
| 0.3514 | 2.73 | 3000 | 0.4313 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ThuyNT03/CS505-NerCOQE-xlm-Subject | ThuyNT03 | 2024-03-20T23:58:46Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-20T23:52:23Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: CS505-NerCOQE-xlm-Subject
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505-NerCOQE-xlm-Subject
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 53 | 0.0902 | 0.7186 |
| No log | 2.0 | 106 | 0.0557 | 0.8471 |
| No log | 3.0 | 159 | 0.0300 | 0.8976 |
| No log | 4.0 | 212 | 0.0163 | 0.9301 |
| No log | 5.0 | 265 | 0.0115 | 0.9648 |
| No log | 6.0 | 318 | 0.0066 | 0.9682 |
| No log | 7.0 | 371 | 0.0087 | 0.9727 |
| No log | 8.0 | 424 | 0.0167 | 0.8462 |
| No log | 9.0 | 477 | 0.0014 | 0.9902 |
| No log | 10.0 | 530 | 0.0010 | 0.9967 |
| No log | 11.0 | 583 | 0.0023 | 0.9928 |
| No log | 12.0 | 636 | 0.0005 | 0.9961 |
| No log | 13.0 | 689 | 0.0002 | 0.9974 |
| No log | 14.0 | 742 | 0.0006 | 0.9987 |
| No log | 15.0 | 795 | 0.0001 | 1.0 |
| No log | 16.0 | 848 | 0.0001 | 1.0 |
| No log | 17.0 | 901 | 0.0001 | 0.9987 |
| No log | 18.0 | 954 | 0.0001 | 0.9987 |
| No log | 19.0 | 1007 | 0.0000 | 1.0 |
| No log | 20.0 | 1060 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
rajevan123/STS-Lora-Fine-Tuning-Capstone-Deberta-old-model-pipe-test | rajevan123 | 2024-03-20T23:57:49Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-xsmall",
"base_model:adapter:microsoft/deberta-v3-xsmall",
"license:mit",
"region:us"
] | null | 2024-03-20T23:48:30Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: microsoft/deberta-v3-xsmall
model-index:
- name: STS-Lora-Fine-Tuning-Capstone-Deberta-old-model-pipe-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# STS-Lora-Fine-Tuning-Capstone-Deberta-old-model-pipe-test
This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4820
- Accuracy: 0.3771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 360 | 1.7474 | 0.2429 |
| 1.7416 | 2.0 | 720 | 1.7279 | 0.2429 |
| 1.6866 | 3.0 | 1080 | 1.6799 | 0.2883 |
| 1.6866 | 4.0 | 1440 | 1.6220 | 0.3372 |
| 1.6241 | 5.0 | 1800 | 1.5787 | 0.3466 |
| 1.5474 | 6.0 | 2160 | 1.5306 | 0.3604 |
| 1.484 | 7.0 | 2520 | 1.5180 | 0.3626 |
| 1.484 | 8.0 | 2880 | 1.5028 | 0.3706 |
| 1.4452 | 9.0 | 3240 | 1.4871 | 0.3753 |
| 1.429 | 10.0 | 3600 | 1.4820 | 0.3771 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
ThuyNT03/CS505-NerCOQE-xlm-Predicate | ThuyNT03 | 2024-03-20T23:52:18Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-20T23:45:55Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: CS505-NerCOQE-xlm-Predicate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505-NerCOQE-xlm-Predicate
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0003
- F1: 0.9976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 53 | 0.1876 | 0.5087 |
| No log | 2.0 | 106 | 0.1014 | 0.7119 |
| No log | 3.0 | 159 | 0.0564 | 0.8287 |
| No log | 4.0 | 212 | 0.0361 | 0.8835 |
| No log | 5.0 | 265 | 0.0282 | 0.8951 |
| No log | 6.0 | 318 | 0.0154 | 0.9392 |
| No log | 7.0 | 371 | 0.0231 | 0.8730 |
| No log | 8.0 | 424 | 0.0054 | 0.9763 |
| No log | 9.0 | 477 | 0.0031 | 0.9792 |
| No log | 10.0 | 530 | 0.0027 | 0.9828 |
| No log | 11.0 | 583 | 0.0015 | 0.9905 |
| No log | 12.0 | 636 | 0.0031 | 0.9929 |
| No log | 13.0 | 689 | 0.0023 | 0.9941 |
| No log | 14.0 | 742 | 0.0016 | 0.9923 |
| No log | 15.0 | 795 | 0.0011 | 0.9917 |
| No log | 16.0 | 848 | 0.0006 | 0.9964 |
| No log | 17.0 | 901 | 0.0003 | 0.9988 |
| No log | 18.0 | 954 | 0.0003 | 0.9976 |
| No log | 19.0 | 1007 | 0.0003 | 0.9976 |
| No log | 20.0 | 1060 | 0.0003 | 0.9976 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
liuylhf/empower-functions-clean-data-one-more-functions | liuylhf | 2024-03-20T23:49:21Z | 2 | 0 | peft | [
"peft",
"safetensors",
"mixtral",
"axolotl",
"generated_from_trainer",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-03-20T20:35:03Z | ---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model-index:
- name: empower-functions-clean-data-one-more-functions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
adapter: qlora
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
bf16: true
chat_template: inst
dataset_prepared_path: last_run_prepared
datasets:
- conversation: mistral
path: 659f8b7bb7c243ab879f8bc17876ce4a/data/with_function_response/more_functions/one_more_function/function_used_training.jsonl
type: sharegpt
- conversation: mistral
path: 659f8b7bb7c243ab879f8bc17876ce4a/data/with_function_response/original_clean/function_not_used_training.jsonl
type: sharegpt
debug: null
eval_max_new_tokens: 256
eval_steps: 0.05
eval_table_size: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: liuylhf/empower-functions-clean-data-one-more-functions
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_model_dir: null
lora_r: 32
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
loss_watchdog_patience: 3
loss_watchdog_threshold: 5.0
lr_scheduler: cosine
micro_batch_size: 2
model_config:
output_router_logits: true
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: paged_adamw_8bit
output_dir: 659f8b7bb7c243ab879f8bc17876ce4a/model
pad_to_sequence_len: true
sample_packing: true
save_steps: 0.1
sequence_len: 4096
strict: false
tf32: false
tokenizer_type: LlamaTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.01
wandb_log_model: end
wandb_name: more-tools
wandb_project: function-call
warmup_steps: 10
weight_decay: 0.0
```
</details><br>
# empower-functions-clean-data-one-more-functions
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0157 | 0.0 | 1 | 2.1200 |
| 0.153 | 0.05 | 23 | 0.1454 |
| 0.1236 | 0.1 | 46 | 0.1160 |
| 0.1043 | 0.15 | 69 | 0.1073 |
| 0.1163 | 0.2 | 92 | 0.1035 |
| 0.1072 | 0.25 | 115 | 0.0996 |
| 0.0988 | 0.31 | 138 | 0.0978 |
| 0.0962 | 0.36 | 161 | 0.0963 |
| 0.0823 | 0.41 | 184 | 0.0939 |
| 0.0785 | 0.46 | 207 | 0.0938 |
| 0.0941 | 0.51 | 230 | 0.0918 |
| 0.0968 | 0.56 | 253 | 0.0905 |
| 0.0856 | 0.61 | 276 | 0.0899 |
| 0.0965 | 0.66 | 299 | 0.0895 |
| 0.0894 | 0.71 | 322 | 0.0881 |
| 0.086 | 0.76 | 345 | 0.0872 |
| 0.0941 | 0.82 | 368 | 0.0869 |
| 0.0894 | 0.87 | 391 | 0.0867 |
| 0.0782 | 0.92 | 414 | 0.0864 |
| 0.0815 | 0.97 | 437 | 0.0863 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.39.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.0 |
TomasFrankovich/esm2_t30_150M_UR50D-finetuned-SO2F | TomasFrankovich | 2024-03-20T23:38:27Z | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"esm",
"token-classification",
"generated_from_trainer",
"base_model:facebook/esm2_t30_150M_UR50D",
"base_model:finetune:facebook/esm2_t30_150M_UR50D",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-20T23:33:54Z | ---
license: mit
base_model: facebook/esm2_t30_150M_UR50D
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: esm2_t30_150M_UR50D-finetuned-SO2F
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t30_150M_UR50D-finetuned-SO2F
This model is a fine-tuned version of [facebook/esm2_t30_150M_UR50D](https://huggingface.co/facebook/esm2_t30_150M_UR50D) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6608
- Accuracy: 0.7158
- Precision: 0.1682
- Recall: 0.5068
- F1: 0.2526
- Auc: 0.6223
- Mcc: 0.1585
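A minimal inference sketch follows. Per-residue (token-level) binary labels are an assumption based on the reported precision/recall/MCC, and the protein sequence is a toy example.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "TomasFrankovich/esm2_t30_150M_UR50D-finetuned-SO2F"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy protein sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# One predicted class per token (the ends are special tokens, not residues).
print(logits.argmax(dim=-1)[0].tolist())
```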
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Auc | Mcc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|:------:|
| No log | 1.0 | 108 | 0.6768 | 0.6886 | 0.1465 | 0.4740 | 0.2238 | 0.5925 | 0.1175 |
| No log | 2.0 | 216 | 0.6646 | 0.6935 | 0.1628 | 0.5397 | 0.2502 | 0.6247 | 0.1573 |
| No log | 3.0 | 324 | 0.6608 | 0.7158 | 0.1682 | 0.5068 | 0.2526 | 0.6223 | 0.1585 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
belisards/albertina_gun | belisards | 2024-03-20T23:35:55Z | 3 | 1 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"gun violence",
"human rights",
"pt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-09T11:12:14Z | ---
license: apache-2.0
language:
- pt
pipeline_tag: text-classification
tags:
- gun violence
- human rights
---
A text-classification model for detecting gun violence reports in Brazilian Portuguese.
An Albertina-PT model fine-tuned on Twitter data labelled by Instituto Fogo Cruzado.
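A minimal usage sketch, assuming the standard text-classification pipeline applies; the label names depend on the fine-tuning setup and are not documented here, and the Portuguese example is only illustrative.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="belisards/albertina_gun")

# Illustrative Portuguese example of a possible gun-violence report.
print(classifier("Tiroteio deixa dois feridos na zona norte da cidade."))
```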
Developed as part of my research at the Oxford Internet Institute. |
belisards/gun_violence_ptbr | belisards | 2024-03-20T23:34:53Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"gun violence",
"human rights",
"pt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-27T10:33:35Z | ---
license: apache-2.0
language:
- pt
pipeline_tag: text-classification
tags:
- gun violence
- human rights
---
A text-classification model for detecting gun violence reports in Brazilian Portuguese.
A BERTimbau model fine-tuned on Twitter data labelled by Instituto Fogo Cruzado.
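A minimal usage sketch, assuming the checkpoint exposes a standard sequence-classification head; the Portuguese example is only illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "belisards/gun_violence_ptbr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Homem é baleado durante assalto no centro da cidade."  # illustrative example
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

print(probs)  # class probabilities; label meanings are not documented in this card
```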
Developed as part of my research at the Oxford Internet Institute. |
pauloguyss/mistral-7b-temas-stf-v1 | pauloguyss | 2024-03-20T23:28:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T23:27:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
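The card does not state the intended task. Assuming from the name that this is a Mistral-7B causal-LM fine-tune, a loading sketch might look like this; the prompt is purely a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pauloguyss/mistral-7b-temas-stf-v1"  # repo id from the card header

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Resuma o tema central do seguinte processo:"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```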
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rajevan123/STS-Lora-Fine-Tuning-Capstone-Deberta-old-model-pipe | rajevan123 | 2024-03-20T23:16:37Z | 5 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-xsmall",
"base_model:adapter:microsoft/deberta-v3-xsmall",
"license:mit",
"region:us"
] | null | 2024-03-20T23:14:41Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: microsoft/deberta-v3-xsmall
model-index:
- name: STS-Lora-Fine-Tuning-Capstone-Deberta-old-model-pipe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# STS-Lora-Fine-Tuning-Capstone-Deberta-old-model-pipe
This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7440
- Accuracy: 0.2429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 180 | 1.7590 | 0.2429 |
| No log | 2.0 | 360 | 1.7488 | 0.2429 |
| 1.745 | 3.0 | 540 | 1.7451 | 0.2429 |
| 1.745 | 4.0 | 720 | 1.7440 | 0.2429 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
krame-aims/AIMS-NLP-ASS3-FinetuneMarian-fr-en | krame-aims | 2024-03-20T23:13:06Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"en",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-14T08:58:36Z | ---
language:
- en
- fr
metrics:
- glue
library_name: transformers
--- |
kawehiwang/lora_model_on_alpaca | kawehiwang | 2024-03-20T22:55:11Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T01:52:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
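The repository name suggests a LoRA adapter trained on Alpaca-style data. Assuming the repo contains a PEFT adapter together with its config, a loading sketch could look like this.

```python
from peft import AutoPeftModelForCausalLM, PeftConfig
from transformers import AutoTokenizer

adapter_id = "kawehiwang/lora_model_on_alpaca"

# The adapter config records which base model the LoRA weights expect.
peft_config = PeftConfig.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)

model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)  # base model + LoRA weights
model = model.merge_and_unload()  # optionally fold the LoRA weights into the base model
```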
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nekochu/Luminia-13B-v3 | Nekochu | 2024-03-20T22:44:32Z | 47 | 5 | peft | [
"peft",
"safetensors",
"gguf",
"llama",
"llama-factory",
"lora",
"generated_from_trainer",
"llama2",
"instruct",
"finetune",
"gpt4",
"synthetic data",
"stable diffusion",
"alpaca",
"llm",
"text-generation",
"conversational",
"en",
"dataset:Nekochu/discord-unstable-diffusion-SD-prompts",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:TIGER-Lab/MathInstruct",
"dataset:Open-Orca/SlimOrca",
"dataset:GAIR/lima",
"dataset:sahil2801/CodeAlpaca-20k",
"dataset:garage-bAInd/Open-Platypus",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-13b-chat-hf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-18T03:25:05Z | ---
model_creator: Nekochu
quantized_by: Nekochu
model_name: Luminia 13B v3
pretty_name: Luminia
model_type: llama2
prompt_template: >-
Below is an instruction that describes a task. Write a response that
appropriately completes the request. ### Instruction: {Instruction} {summary} ### input: {category} ### Response: {prompt}
base_model: meta-llama/Llama-2-13b-chat-hf
library_name: peft
license: apache-2.0
datasets:
- Nekochu/discord-unstable-diffusion-SD-prompts
- glaiveai/glaive-function-calling-v2
- TIGER-Lab/MathInstruct
- Open-Orca/SlimOrca
- GAIR/lima
- sahil2801/CodeAlpaca-20k
- garage-bAInd/Open-Platypus
language:
- en
pipeline_tag: text-generation
task_categories:
- question-answering
- text2text-generation
- conversational
inference: True
widget:
- example_title: prompt assistant
messages:
- role: system
content: Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
- role: user
content: "### Instruction:\nCreate stable diffusion metadata based on the given english description. Luminia\n### Input:\nfavorites and popular SFW\n### Response:\n"
output:
text: Luminia, 1girl, solo, blonde hair, long hair,
tags:
- llama-factory
- lora
- generated_from_trainer
- llama2
- llama
- instruct
- finetune
- gpt4
- synthetic data
- stable diffusion
- alpaca
- llm
model-index:
- name: Luminia-13B-v3
results: []
---
<div style="display: flex;">
<div style="flex: 1;">
<img src="https://i.imgur.com/uyjdkhk.jpeg" alt="DALL-E 3 prompt: a single seed growing slowly in laboratory in a desert sand, the single little plant try fight to reach light sun, while a little cute kitty feel the plant, cute 8k anime, digitral art, close up" style="width: 90%; min-width: 380px; border: 2px solid #555; border-radius: 5px;">
</div>
<div style="flex: 1; text-align: left;">
<p style="font-family: 'Comic Sans MS', cursive, sans-serif; padding-left: 2px; padding-top: 10px;">Luminia v3 is good at reasoning: it expands a short summary description into a detailed Stable Diffusion prompt. It may output NSFW content.</p>
</div>
</div>
The LoRA adapter is included, and quants are available: exllamav2 [2.4bpw-h6](https://huggingface.co/Nekochu/Luminia-13B-v3/tree/2.4bpw-h6), [4.25bpw-h6](https://huggingface.co/Nekochu/Luminia-13B-v3/tree/4.25bpw-h6), [8.0bpw-h8](https://huggingface.co/Nekochu/Luminia-13B-v3/tree/8.0bpw-h8) | GGUF [Q4_K_M](https://huggingface.co/Nekochu/Luminia-13B-v3/blob/main/Luminia-13B-v3-Q4_K_M.gguf), [IQ4_NL](https://huggingface.co/Nekochu/Luminia-13B-v3/blob/main/Luminia-13B-v3-IQ4_NL.gguf) |
## Prompt template: Alpaca
<details>
<summary>Output example tested In <i>text-generation-webui</i></summary>
| Input | base llama-2-chat | QLoRa |
|:---------:|:-------:|:---------:|
| [question]:<br><br> Create stable diffusion metadata based on the given english description. Luminia \n### Input:\n favorites and popular SFW | Answer:<br><br> Luminia, a mystical world of wonder and magic 🧝♀️✨ A place where technology and nature seamlessly blend together ... | Answer! <br><br> < lora:Luminari-10:0.8> Luminari, 1girl, solo, blonde hair, long hair, blue eyes, (black dress), looking at viewer, night sky, starry sky, constellation, smile, upper body, outdoors, forest, moon, tree, mountain, light particle .... |
Output prompt from QLoRa to [A1111/SD-WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui):
<div style="display: flex; justify-content: space-between;">
<div style="flex: 1; text-align: center;">
<img src="https://i.imgur.com/rNLaobj.png" alt="parameters image metadata: <lora:Luminari-10:0.8> Luminari, 1girl, solo, blonde hair, long hair, blue eyes, (black dress), looking at viewer, night sky, starry sky, constellation, smile, upper body, outdoors, forest, moon, tree, mountain, light particle, shine, sparkle, dark theme, fantasy, magic, goddess, celestial, nature, peaceful, serene, tranquil, mystical, enchanting, otherworldly, mysterious, captivating, alluring, beautiful, elegant, graceful, majestic, divine, powerful, epic, grand, sweeping, breathtaking, mesmerizing, magical, fantastical, wondrous, marvelous, extraordinary, magnificent, glorious, radiant, luminous, illumination, brilliance, glow, radiance, luminescence, brightness, splendor, glory, triumph, victory, achievement, honor, celebration, recognition, praise, admiration, appreciation, love, affection, devotion, loyalty, dedication, commitment, passion, intensity, drive, determination, energy, enthusiasm, excitement, joy, happiness, fulfillment, pleasure, enjoyment, satisfaction, delight, wonder, amazement, awe, curiosity, interest, intrigue, question, exploration, discovery, adventure, journey, path, road, trail, course, pursuit, challenge, obstacle, adversity, hardship, struggle, perseverance, resilience, tenacity, courage, bravery, heroism, inspiration, motivation, spirit, heart, soul, essence, creativity, imagination, dreams, aspirations, goals, ambition, vision, purpose, meaning, significance, relevance, importance, impact, influence, change, growth, development, evolution, improvement, progress, learning, knowledge, wisdom, insight, understanding, empathy, compassion, kindness, generosity, forgiveness, gratitude, humility, patience, tolerance, acceptance, diversity, inclusivity, unity, equality, justice, fairness, honesty, integrity, accountability, responsibility, morality, ethics, principles, values, beliefs, faith, hope, optimism,
Steps: 20, Sampler: Heun, CFG scale: 7, Seed: 479539365, Size: 512x512, Model hash: 84d76a0328, Model: epicrealism_naturalSinRC1VAE, Version: v1.7.0" style="width: 100%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="flex: 1; text-align: center;">
<img src="https://i.imgur.com/hU8Ut4p.png" alt="parameters image metadata: <lora:Luminari-10:0.8> Luminari, 1girl, solo, blonde hair, long hair, blue eyes, (black dress), looking at viewer, night sky, starry sky, constellation, smile, upper body, outdoors, forest, moon, tree, mountain, light particle, shine, sparkle, dark theme, fantasy, magic, goddess, celestial, nature, peaceful, serene, tranquil, mystical, enchanting, otherworldly, mysterious, captivating, alluring, beautiful, elegant, graceful, majestic, divine, powerful, epic, grand, sweeping, breathtaking, mesmerizing, magical, fantastical, wondrous, marvelous, extraordinary, magnificent, glorious, radiant, luminous, illumination, brilliance, glow, radiance, luminescence, brightness, splendor, glory, triumph, victory, achievement, honor, celebration, recognition, praise, admiration, appreciation, love, affection, devotion, loyalty, dedication, commitment, passion, intensity, drive, determination, energy, enthusiasm, excitement, joy, happiness, fulfillment, pleasure, enjoyment, satisfaction, delight, wonder, amazement, awe, curiosity, interest, intrigue, question, exploration, discovery, adventure, journey, path, road, trail, course, pursuit, challenge, obstacle, adversity, hardship, struggle, perseverance, resilience, tenacity, courage, bravery, heroism, inspiration, motivation, spirit, heart, soul, essence, creativity, imagination, dreams, aspirations, goals, ambition, vision, purpose, meaning, significance, relevance, importance, impact, influence, change, growth, development, evolution, improvement, progress, learning, knowledge, wisdom, insight, understanding, empathy, compassion, kindness, generosity, forgiveness, gratitude, humility, patience, tolerance, acceptance, diversity, inclusivity, unity, equality, justice, fairness, honesty, integrity, accountability, responsibility, morality, ethics, principles, values, beliefs, faith, hope, optimism
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 959582434, Size: 512x512, Model hash: 84d76a0328, Model: epicrealism_naturalSinRC1VAE, Version: v1.7.0" style="width: 100%; min-width: 200px; display: block; margin: auto;">
</div>
</div>
#### Full Prompt
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Create stable diffusion metadata based on the given english description. Luminia
### Input:
favorites and popular SFW
### Response:
```
"Luminia" can be any short description, more info on my SD dataset [here](https://huggingface.co/datasets/Nekochu/discord-unstable-diffusion-SD-prompts#dataset-description).
</details>
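## Quick start
A minimal sketch for running the Q4_K_M GGUF quant linked above with `llama-cpp-python`, using the Alpaca prompt format shown earlier (the sampling settings are illustrative assumptions):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant from this repository
model_path = hf_hub_download("Nekochu/Luminia-13B-v3", "Luminia-13B-v3-Q4_K_M.gguf")
llm = Llama(model_path=model_path, n_ctx=4096)

prompt = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nCreate stable diffusion metadata based on the given english description. Luminia\n\n"
    "### Input:\nfavorites and popular SFW\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```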
## Training Details
<details>
<summary>Click to see details</summary>
### Model Description
- **Train by:** [Nekochu](https://huggingface.co/Nekochu), **Model type:** Llama, **Finetuned from model [Llama-2-13b-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)**
- Continued from the base LoRA Luminia-13B-v2-QLoRA
Known issue: [issue]
### Trainer
- hiyouga/LLaMA-Efficient-Tuning
Hardware: QLoRA training on Windows; Python 3.10.8, CUDA 12.1, 24 GB VRAM.
### Training hyperparameters
The following hyperparameters were used during training:
- num_epochs: 1.0
- finetuning_type: lora
- quantization_bit: 4
- stage: sft
- learning_rate: 5e-05
- cutoff_len: 4096
- num_train_epochs: 3.0
- max_samples: 100000
- warmup_steps: 0
- train_batch_size: 1
- distributed_type: single-GPU
- num_devices: 1
- rope_scaling: linear
- lora_rank: 32
- lora_target: all
- lora_dropout: 0.15
- bnb_4bit_compute_dtype: bfloat16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
#### training_loss:
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/qhuPG6F.jpg" alt="Nekochu" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.5
- Tokenizers 0.15.0
</details> |
llmware/slim-extract | llmware | 2024-03-20T22:38:03Z | 135 | 12 | transformers | [
"transformers",
"pytorch",
"stablelm_epoch",
"text-generation",
"custom_code",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-03-17T15:43:04Z | ---
license: cc-by-sa-4.0
inference: false
---
# SLIM-EXTRACT
<!-- Provide a quick summary of what the model is/does. -->
**slim-extract** implements a specialized, customizable function-calling 'extract' capability: it takes a context passage and a custom key as input, and outputs a Python dictionary whose key is the custom key and whose value is a list of items extracted from the text for that key, e.g.,
`{'universities': ['Berkeley, Stanford, Yale, University of Florida, ...'] }`
This model is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which, in turn, is a fine-tune of stabilityai/stablelm-3b-4e1t.
For fast inference use, we would recommend the 'quantized tool' version, e.g., [**'slim-extract-tool'**](https://huggingface.co/llmware/slim-extract-tool).
## Prompt format:
`function = "extract"`
`params = "{custom key}"`
`prompt = "<human> " + {text} + "\n" + `
`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
<details>
<summary>Transformers Script </summary>
import ast
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("llmware/slim-extract")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-extract")
function = "extract"
params = "company"
text = "Tesla stock declined yesterday 8% in premarket trading after a poorly-received event in San Francisco yesterday, in which the company indicated a likely shortfall in revenue."
prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])
outputs = model.generate(
    inputs.input_ids.to('cpu'),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100
)
output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)
print("output only: ", output_only)
# here's the fun part - convert the model's string output into a python dictionary
try:
    output_only = ast.literal_eval(output_only)
    print("success - converted to python dictionary automatically")
except (ValueError, SyntaxError):
    print("fail - could not convert to python dictionary automatically - ", output_only)
</details>
<details>
<summary>Using as Function Call in LLMWare</summary>
from llmware.models import ModelCatalog
slim_model = ModelCatalog().load_model("llmware/slim-extract")
response = slim_model.function_call(text, params=["company"], function="extract")  # 'text' as defined in the example above
print("llmware - llm_response: ", response)
</details>
## Model Card Contact
Darren Oberst & llmware team
[Join us on Discord](https://discord.gg/MhZn5Nc39h) |
arcee-ai/Patent-Instruct-LLaMA-Pro | arcee-ai | 2024-03-20T22:26:26Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"arcee-ai/Patent-Instruct-7b",
"TencentARC/LLaMA-Pro-8B-Instruct",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-20T22:07:42Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- arcee-ai/Patent-Instruct-7b
- TencentARC/LLaMA-Pro-8B-Instruct
---
# Patent-Instruct-LLaMA-Pro
Patent-Instruct-LLaMA-Pro is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [arcee-ai/Patent-Instruct-7b](https://huggingface.co/arcee-ai/Patent-Instruct-7b)
* [TencentARC/LLaMA-Pro-8B-Instruct](https://huggingface.co/TencentARC/LLaMA-Pro-8B-Instruct)
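## 💻 Usage
A minimal sketch for loading the merged model with 🤗 Transformers (the dtype, device placement, and generation settings are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Patent-Instruct-LLaMA-Pro"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Draft a one-sentence summary of a patent claim about a foldable smartphone hinge."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```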
## 🧩 Configuration
```yaml
merge_method: passthrough
dtype: bfloat16
slices:
- sources:
- model: arcee-ai/Patent-Instruct-7b
layer_range:
- 0
- 4
- sources:
- model: TencentARC/LLaMA-Pro-8B-Instruct
layer_range:
- 4
- 5
- sources:
- model: arcee-ai/Patent-Instruct-7b
layer_range:
- 4
- 8
- sources:
- model: TencentARC/LLaMA-Pro-8B-Instruct
layer_range:
- 9
- 10
- sources:
- model: arcee-ai/Patent-Instruct-7b
layer_range:
- 8
- 12
- sources:
- model: TencentARC/LLaMA-Pro-8B-Instruct
layer_range:
- 14
- 15
- sources:
- model: arcee-ai/Patent-Instruct-7b
layer_range:
- 12
- 16
- sources:
- model: TencentARC/LLaMA-Pro-8B-Instruct
layer_range:
- 19
- 20
- sources:
- model: arcee-ai/Patent-Instruct-7b
layer_range:
- 16
- 20
- sources:
- model: TencentARC/LLaMA-Pro-8B-Instruct
layer_range:
- 24
- 25
- sources:
- model: arcee-ai/Patent-Instruct-7b
layer_range:
- 20
- 24
- sources:
- model: TencentARC/LLaMA-Pro-8B-Instruct
layer_range:
- 29
- 30
- sources:
- model: arcee-ai/Patent-Instruct-7b
layer_range:
- 24
- 28
- sources:
- model: TencentARC/LLaMA-Pro-8B-Instruct
layer_range:
- 34
- 35
- sources:
- model: arcee-ai/Patent-Instruct-7b
layer_range:
- 28
- 32
- sources:
- model: TencentARC/LLaMA-Pro-8B-Instruct
layer_range:
- 39
- 40
``` |
yzimmermann/FART | yzimmermann | 2024-03-20T22:24:06Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"doi:10.57967/hf/1946",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-20T21:58:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
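The repository metadata indicates a RoBERTa-based text-classification model, so until official instructions are added, a minimal sketch could look like this (the label set and the expected input format are not documented here and are assumptions):
```python
from transformers import pipeline

# Load the classifier directly from the Hub
classifier = pipeline("text-classification", model="yzimmermann/FART")

# Replace with an input in the format the model was trained on
print(classifier("example input text"))
```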
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MagiskaGodnattsagor/aida_LoRA | MagiskaGodnattsagor | 2024-03-20T22:23:37Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"dora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-03-20T22:08:26Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- dora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK robot
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - MagiskaGodnattsagor/aida_LoRA
<Gallery />
## Model description
These are MagiskaGodnattsagor/aida_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK robot` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](MagiskaGodnattsagor/aida_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
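Until an official snippet is added, a minimal sketch for loading these LoRA weights on top of the SDXL base with 🧨 diffusers (the scheduler defaults, step count, and guidance scale are assumptions):
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

# Base model plus the fp16-fix VAE used during training
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA weights from this repository
pipe.load_lora_weights("MagiskaGodnattsagor/aida_LoRA")

# Use the trigger words from the section above
image = pipe("a photo of TOK robot", num_inference_steps=25, guidance_scale=7.5).images[0]
image.save("aida_robot.png")
```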
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Joseph717171/BigOrca-2-12B | Joseph717171 | 2024-03-20T22:22:51Z | 0 | 0 | null | [
"safetensors",
"orca",
"orca2",
"microsoft",
"text-generation",
"arxiv:2311.11045",
"license:other",
"region:us"
] | text-generation | 2024-03-15T02:57:28Z | ---
pipeline_tag: text-generation
tags:
- orca
- orca2
- microsoft
license_name: microsoft-research-license
license_link: LICENSE
license: other
---
Inspired by [AbacusAI's BigYi-15b](https://huggingface.co/abacusai/bigyi-15b)...
This is [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b) with layers interleaved to create a larger 12b model.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 8]
model: microsoft/Orca-2-7b
- sources:
- layer_range: [4, 12]
model: microsoft/Orca-2-7b
- sources:
- layer_range: [8, 16]
model: microsoft/Orca-2-7b
- sources:
- layer_range: [12, 20]
model: microsoft/Orca-2-7b
- sources:
- layer_range: [16, 24]
model: microsoft/Orca-2-7b
- sources:
- layer_range: [20, 28]
model: microsoft/Orca-2-7b
- sources:
- layer_range: [24, 32]
model: microsoft/Orca-2-7b
```
# Orca 2
<!-- Provide a quick summary of what the model is/does. -->
Orca 2 is built for research purposes only and provides a single turn response in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization. The model is designed to excel particularly in reasoning.
Note that:
1. This is a research model, intended to show that we can use capable models and complex workflows (advanced prompts, multiple calls) to create synthetic data that can teach Small Language Models (SLMs) new capabilities. We chose reasoning because it is a widely useful capability that SLMs lack.
2. The model is not optimized for chat and has not been trained with RLHF or DPO. It is best used after being finetuned for chat or for a specific task.
3. Beyond reasoning, the model inherits capabilities and limitations of its base (LLAMA-2 base). We have already seen that the benefits of the Orca training can be applied to other base models too.
We make Orca 2's weights publicly available to support further research on the development, evaluation, and alignment of SLMs.
## What is Orca 2’s intended use(s)?
+ Orca 2 is built for research purposes only.
+ The main purpose is to allow the research community to assess its abilities and to provide a foundation for building better frontier models.
## How was Orca 2 evaluated?
+ Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer
to Section 6 and Appendix in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf) for details on evaluations.
## Model Details
Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities.
All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf).
Please refer to LLaMA-2 technical report for details on the model architecture.
## License
Orca 2 is licensed under the [Microsoft Research License](LICENSE).
Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
## Bias, Risks, and Limitations
Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the
common limitations of other large language models or limitation caused by its training
process, including:
**Data Biases**: Large language models, trained on extensive data, can inadvertently carry
biases present in the source data. Consequently, the models may generate outputs that could
be potentially biased or unfair.
**Lack of Contextual Understanding**: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting
in potential inaccuracies or nonsensical responses.
**Lack of Transparency**: Due to the complexity and size, large language models can act
as “black boxes”, making it difficult to comprehend the rationale behind specific outputs or
decisions. We recommend reviewing transparency notes from Azure for more information.
**Content Harms**: There are various types of content harms that large language models
can cause. It is important to be aware of them when using these models, and to take
actions to prevent them. It is recommended to leverage various content moderation services
provided by different companies and institutions. On an important note, we hope for better
regulations and standards from government and technology leaders around content harms
for AI technologies in future. We value and acknowledge the important role that research
and open source community can play in this direction.
**Hallucination**: It is important to be aware and cautious not to entirely rely on a given
language model for critical decisions or information that might have deep impact as it is
not obvious how to prevent these models from fabricating content. Moreover, it is not clear
whether small models may be more susceptible to hallucination in ungrounded generation
use cases due to their smaller sizes and hence reduced memorization capacities. This is an
active research topic and we hope there will be more rigorous measurement, understanding
and mitigations around this topic.
**Potential for Misuse**: Without suitable safeguards, there is a risk that these models could
be maliciously used for generating disinformation or harmful content.
**Data Distribution**: Orca 2’s performance is likely to correlate strongly with the distribution
of the tuning data. This correlation might limit its accuracy in areas underrepresented in
the training dataset such as math, coding, and reasoning.
**System messages**: Orca 2 demonstrates variance in performance depending on the system
instructions. Additionally, the stochasticity introduced by the model size may lead to
generation of non-deterministic responses to different system instructions.
**Zero-Shot Settings**: Orca 2 was trained on data that mostly simulates zero-shot settings.
While the model demonstrates very strong performance in zero-shot settings, it does not show
the same gains from few-shot learning as other, especially larger, models.
**Synthetic data**: As Orca 2 is trained on synthetic data, it could inherit both the advantages
and shortcomings of the models and methods used for data generation. We posit that Orca
2 benefits from the safety measures incorporated during training and safety guardrails (e.g.,
content filter) within the Azure OpenAI API. However, detailed studies are required for
better quantification of such risks.
This model is solely designed for research settings, and its testing has only been carried
out in such environments. It should not be used in downstream applications, as additional
analysis is needed to assess potential harm or bias in the proposed application.
## Getting started with Orca 2
**Inference with Hugging Face library**
```python
import torch
import transformers
if torch.cuda.is_available():
torch.set_default_device("cuda")
else:
torch.set_default_device("cpu")
model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-7b", device_map='auto')
# https://github.com/huggingface/transformers/issues/27132
# please use the slow tokenizer since fast and slow tokenizer produces different tokens
tokenizer = transformers.AutoTokenizer.from_pretrained(
"microsoft/Orca-2-7b",
use_fast=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
output_ids = model.generate(inputs["input_ids"],)
answer = tokenizer.batch_decode(output_ids)[0]
print(answer)
# This example continues showing how to add a second turn message by the user to the conversation
second_turn_user_message = "Give me a list of the key points of your first answer."
# we set add_special_tokens=False because we don't want to automatically add a bos_token between messages
second_turn_message_in_markup = f"\n<|im_start|>user\n{second_turn_user_message}<|im_end|>\n<|im_start|>assistant"
second_turn_tokens = tokenizer(second_turn_message_in_markup, return_tensors='pt', add_special_tokens=False)
second_turn_input = torch.cat([output_ids, second_turn_tokens['input_ids']], dim=1)
output_ids_2 = model.generate(second_turn_input,)
second_turn_answer = tokenizer.batch_decode(output_ids_2)[0]
print(second_turn_answer)
```
**Safe inference with Azure AI Content Safety**
The usage of [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety/) on top of model prediction is strongly encouraged
and can help prevent some content harms. Azure AI Content Safety is a content moderation platform
that uses AI to moderate content. By having Azure AI Content Safety on the output of Orca 2,
the model output can be moderated by scanning it for different harm categories including sexual content, violence, hate, and
self-harm with multiple severity levels and multi-lingual detection.
```python
import os
import math
import transformers
import torch
from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions
CONTENT_SAFETY_KEY = os.environ["CONTENT_SAFETY_KEY"]
CONTENT_SAFETY_ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]
# We use Azure AI Content Safety to filter out any content that reaches "Medium" threshold
# For more information: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/
def should_filter_out(input_text, threshold=4):
# Create an Content Safety client
client = ContentSafetyClient(CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(CONTENT_SAFETY_KEY))
# Construct a request
request = AnalyzeTextOptions(text=input_text)
# Analyze text
try:
response = client.analyze_text(request)
except HttpResponseError as e:
print("Analyze text failed.")
if e.error:
print(f"Error code: {e.error.code}")
print(f"Error message: {e.error.message}")
raise
print(e)
raise
categories = ["hate_result", "self_harm_result", "sexual_result", "violence_result"]
max_score = -math.inf
for category in categories:
max_score = max(max_score, getattr(response, category).severity)
return max_score >= threshold
model_path = 'microsoft/Orca-2-7b'
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = transformers.AutoModelForCausalLM.from_pretrained(model_path)
model.to(device)
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_path,
model_max_length=4096,
padding_side="right",
use_fast=False,
add_special_tokens=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "\" \n :You can't just say, \"\"that's crap\"\" and remove it without gaining a consensus. You already know this, based on your block history. —/ \" \nIs the comment obscene? \nOptions : Yes, No."
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
inputs = inputs.to(device)
output_ids = model.generate(inputs["input_ids"], max_length=4096, do_sample=False, temperature=0.0, use_cache=True)
sequence_length = inputs["input_ids"].shape[1]
new_output_ids = output_ids[:, sequence_length:]
answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)
final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"
print(final_output)
```
## Citation
```bibtex
@misc{mitra2023orca,
title={Orca 2: Teaching Small Language Models How to Reason},
author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah},
year={2023},
eprint={2311.11045},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
``` |
ingeol/q2e_ep3_1234 | ingeol | 2024-03-20T22:22:31Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-20T22:21:52Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ingeol/q2e_ep3_1234
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ingeol/q2e_ep3_1234')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ingeol/q2e_ep3_1234')
model = AutoModel.from_pretrained('ingeol/q2e_ep3_1234')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ingeol/q2e_ep3_1234)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3899 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`beir.losses.bpr_loss.BPRLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 7000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
tjl223/song-artist-classifier-v2 | tjl223 | 2024-03-20T22:20:08Z | 118 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-20T22:06:02Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: song-artist-classifier-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# song-artist-classifier-v2
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0725
- F1: [0.9473684210526316, 0.6666666666666666, 0.8181818181818182, 0.6666666666666665, 0.631578947368421, 0.7368421052631577, 0.4444444444444445, 0.7272727272727273, 0.2, 0.7368421052631577, 0.8695652173913044, 0.7272727272727272, 0.47058823529411764, 0.2105263157894737, 0.7826086956521738, 0.5714285714285713, 0.7200000000000001, 0.6666666666666666, 0.5333333333333333, 0.7777777777777777]
## Model description
More information needed
## Intended uses & limitations
More information needed
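While full documentation is pending, the metadata above (a DistilBERT sequence-classification model fine-tuned to predict song artists) suggests a usage sketch along these lines; the lyrics snippet and the label-to-artist mapping are assumptions:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "tjl223/song-artist-classifier-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

lyrics = "Paste a verse or chorus of song lyrics here"
inputs = tokenizer(lyrics, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # predicted artist label
```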
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| No log | 1.0 | 95 | 2.4874 | [0.4347826086956522, 0.5142857142857143, 0.0, 0.5, 0.23529411764705882, 0.33333333333333337, 0.0, 0.0, 0.0, 0.0, 0.3636363636363636, 0.380952380952381, 0.0, 0.0, 0.43243243243243246, 0.0, 0.5, 0.0, 0.18181818181818182, 0.35294117647058826] |
| No log | 2.0 | 190 | 2.0485 | [0.588235294117647, 0.48, 0.625, 0.7499999999999999, 0.3870967741935483, 0.16666666666666669, 0.0, 0.0, 0.0, 0.5625, 0.6428571428571429, 0.45454545454545453, 0.18181818181818182, 0.35714285714285715, 0.25, 0.6956521739130435, 0.4347826086956522, 0.0, 0.48, 0.7] |
| No log | 3.0 | 285 | 1.7309 | [0.9, 0.6666666666666667, 0.7200000000000001, 0.7499999999999999, 0.6428571428571429, 0.588235294117647, 0.18181818181818182, 0.7000000000000001, 0.0, 0.5882352941176471, 0.6896551724137931, 0.5217391304347826, 0.0, 0.2727272727272727, 0.8000000000000002, 0.8000000000000002, 0.6956521739130435, 0.0, 0.5, 0.8235294117647058] |
| No log | 4.0 | 380 | 1.4777 | [0.8571428571428572, 0.7200000000000001, 0.8181818181818182, 0.8235294117647058, 0.5263157894736842, 0.625, 0.30769230769230765, 0.6956521739130436, 0.0, 0.5, 0.8695652173913044, 0.5925925925925927, 0.5, 0.2105263157894737, 0.8000000000000002, 0.6666666666666666, 0.6956521739130435, 0.4, 0.33333333333333326, 0.7777777777777777] |
| No log | 5.0 | 475 | 1.3535 | [0.9, 0.75, 0.8181818181818182, 0.8235294117647058, 0.64, 0.625, 0.37499999999999994, 0.7272727272727273, 0.0, 0.7000000000000001, 0.8695652173913044, 0.5454545454545454, 0.15384615384615383, 0.3157894736842105, 0.6956521739130435, 0.7272727272727272, 0.7826086956521738, 0.4, 0.631578947368421, 0.7777777777777777] |
| 1.8726 | 6.0 | 570 | 1.2614 | [0.9, 0.7272727272727272, 0.8333333333333333, 0.7499999999999999, 0.6363636363636365, 0.7368421052631577, 0.4285714285714285, 0.761904761904762, 0.0, 0.5, 0.7407407407407407, 0.608695652173913, 0.4285714285714285, 0.3, 0.7272727272727272, 0.7272727272727272, 0.75, 0.6666666666666666, 0.4210526315789474, 0.8235294117647058] |
| 1.8726 | 7.0 | 665 | 1.1649 | [0.9473684210526316, 0.7272727272727272, 0.8333333333333333, 0.6666666666666665, 0.631578947368421, 0.7368421052631577, 0.47058823529411764, 0.761904761904762, 0.0, 0.7000000000000001, 0.8695652173913044, 0.7272727272727272, 0.6666666666666666, 0.20000000000000004, 0.6956521739130435, 0.6956521739130435, 0.7826086956521738, 0.5714285714285715, 0.5333333333333333, 0.7777777777777777] |
| 1.8726 | 8.0 | 760 | 1.1142 | [0.9473684210526316, 0.7272727272727272, 0.8181818181818182, 0.6666666666666665, 0.761904761904762, 0.7368421052631577, 0.4444444444444445, 0.761904761904762, 0.22222222222222224, 0.7058823529411765, 0.8333333333333333, 0.7272727272727272, 0.47058823529411764, 0.2105263157894737, 0.8571428571428572, 0.7272727272727272, 0.7826086956521738, 0.6666666666666666, 0.4444444444444445, 0.7777777777777777] |
| 1.8726 | 9.0 | 855 | 1.0813 | [0.9473684210526316, 0.7272727272727272, 0.8695652173913044, 0.6666666666666665, 0.631578947368421, 0.7368421052631577, 0.4444444444444445, 0.7272727272727273, 0.22222222222222224, 0.7368421052631577, 0.9090909090909091, 0.7272727272727272, 0.47058823529411764, 0.22222222222222224, 0.7826086956521738, 0.608695652173913, 0.75, 0.6666666666666666, 0.5333333333333333, 0.7777777777777777] |
| 1.8726 | 10.0 | 950 | 1.0725 | [0.9473684210526316, 0.6666666666666666, 0.8181818181818182, 0.6666666666666665, 0.631578947368421, 0.7368421052631577, 0.4444444444444445, 0.7272727272727273, 0.2, 0.7368421052631577, 0.8695652173913044, 0.7272727272727272, 0.47058823529411764, 0.2105263157894737, 0.7826086956521738, 0.5714285714285713, 0.7200000000000001, 0.6666666666666666, 0.5333333333333333, 0.7777777777777777] |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
rk68/phi-1_5-finetuned-aqua-rat-2k | rk68 | 2024-03-20T22:18:34Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2024-03-20T22:13:49Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-1_5
model-index:
- name: phi-1_5-finetuned-aqua-rat-2k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-aqua-rat-2k
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
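Pending fuller documentation, a minimal sketch for running this adapter on top of its base model with 🤗 PEFT (the AQuA-RAT-style prompt and the generation settings are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
model = PeftModel.from_pretrained(base, "rk68/phi-1_5-finetuned-aqua-rat-2k")

prompt = (
    "Question: A train covers 120 km in 2 hours. What is its average speed?\n"
    "Options: A) 30 km/h B) 45 km/h C) 60 km/h D) 75 km/h E) 90 km/h\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```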
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Kaipbkk/mown | Kaipbkk | 2024-03-20T22:17:01Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-03-20T22:17:01Z | ---
license: bigscience-bloom-rail-1.0
---
|
Yukiea/ppo-Huggy | Yukiea | 2024-03-20T22:09:32Z | 9 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-03-20T22:04:12Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Yukiea/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ucmp137538/gpt2-wikitext2 | ucmp137538 | 2024-03-20T22:09:04Z | 197 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-20T21:46:39Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 9.4906
- eval_runtime: 949.071
- eval_samples_per_second: 2.038
- eval_steps_per_second: 0.255
- step: 0
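Since the evaluation loss of a causal language model is the mean token-level cross-entropy, it maps to perplexity as exp(loss); the value above corresponds to a perplexity of roughly exp(9.49) ≈ 13,000, for example:
```python
import math

eval_loss = 9.4906
print(math.exp(eval_loss))  # prints ≈ 13235, the corresponding perplexity
```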
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
mychen76/tinyllama_alpaca_GGUF | mychen76 | 2024-03-20T22:08:00Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:quantized:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T22:00:39Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** mychen76
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xiangjhe/gemma-7b-gguf | xiangjhe | 2024-03-20T22:03:58Z | 1 | 0 | null | [
"gguf",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T05:54:03Z | ---
license: other
license_name: google-gemma
license_link: https://ai.google.dev/gemma/terms
---
|
mychen76/tinyllama_alpaca_lora | mychen76 | 2024-03-20T21:49:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:finetune:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T21:49:10Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** mychen76
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rajevan123/STS-Lora-Fine-Tuning-Capstone-Deberta-new-model-test | rajevan123 | 2024-03-20T21:49:00Z | 3 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:yangheng/deberta-v3-base-absa-v1.1",
"base_model:adapter:yangheng/deberta-v3-base-absa-v1.1",
"license:mit",
"region:us"
] | null | 2024-03-20T21:44:22Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: yangheng/deberta-v3-base-absa-v1.1
model-index:
- name: STS-Lora-Fine-Tuning-Capstone-Deberta-new-model-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# STS-Lora-Fine-Tuning-Capstone-Deberta-new-model-test
This model is a fine-tuned version of [yangheng/deberta-v3-base-absa-v1.1](https://huggingface.co/yangheng/deberta-v3-base-absa-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6535
- Accuracy: 0.2582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 180 | 1.7212 | 0.2408 |
| No log | 2.0 | 360 | 1.6900 | 0.2429 |
| 1.6889 | 3.0 | 540 | 1.6644 | 0.2495 |
| 1.6889 | 4.0 | 720 | 1.6535 | 0.2582 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
TheZennou/command-r-v01-exl2-8bit | TheZennou | 2024-03-20T21:46:49Z | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-20T21:46:17Z | ---
license: cc-by-nc-4.0
---
# Model Summary
C4AI Command-R is a research release of a 35-billion-parameter, highly performant generative model. Command-R is a large language model with open weights, optimized for a variety of use cases including reasoning, summarization, and question answering. It supports multilingual generation (evaluated in 10 languages) and offers highly performant RAG capabilities.
Developed by: Cohere and Cohere For AI
Quantized to 8 bpw using ExLlamaV2. |
neopolita/starling-lm-7b-beta-gguf | neopolita | 2024-03-20T21:30:21Z | 16 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-20T20:29:44Z | ---
{}
---
# GGUF quants for [**Nexusflow/Starling-LM-7B-beta**](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) using [llama.cpp](https://github.com/ggerganov/llama.cpp)
**Terms of Use**: Please check the [**original model**](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
<picture>
<img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png">
</picture>
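The list below describes the available quantization levels. As a quick start, here is a minimal, illustrative sketch of running one of these files locally with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) — the filename and prompt are placeholders; use whichever quant you downloaded and the chat template from the original model card.
```python
from llama_cpp import Llama

# Illustrative path - point this at the quant file you downloaded from this repo
llm = Llama(
    model_path="./starling-lm-7b-beta.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
)

# Plain-text prompt shown for brevity; apply the chat template from the original model card
output = llm("Write one sentence about reward models.", max_tokens=128)
print(output["choices"][0]["text"])
```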
## Quants
* `q2_k`: Uses Q4_K for the attention.vw and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_s`: Uses Q3_K for all tensors
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models.
* `q4_k_s`: Uses Q4_K for all tensors
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy, resource usage and slower inference.
* `q5_k_s`: Uses Q5_K for all tensors
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q6_k`: Uses Q8_K for all tensors
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
m4faisal/NLI-Lora-Fine-Tuning-10K-ALBERTA | m4faisal | 2024-03-20T21:29:21Z | 4 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:adapter:albert/albert-base-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-03-20T21:11:40Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: albert/albert-base-v2
model-index:
- name: NLI-Lora-Fine-Tuning-10K-ALBERTA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLI-Lora-Fine-Tuning-10K-ALBERTA
This model is a fine-tuned version of [albert/albert-base-v2](https://huggingface.co/albert/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8439
- Accuracy: 0.6063
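For inference, a hedged sketch of loading the adapter with 🤗 PEFT is shown below — it assumes the LoRA checkpoint saves the sequence-classification head and that the task uses the usual three NLI labels; adjust `num_labels` and the label mapping to your setup.
```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForSequenceClassification

repo_id = "m4faisal/NLI-Lora-Fine-Tuning-10K-ALBERTA"

# num_labels=3 assumes entailment / neutral / contradiction (not stated in this card)
model = AutoPeftModelForSequenceClassification.from_pretrained(repo_id, num_labels=3)
tokenizer = AutoTokenizer.from_pretrained("albert/albert-base-v2")

inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```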
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 1.0562 | 0.4551 |
| 1.0762 | 2.0 | 624 | 1.0236 | 0.4995 |
| 1.0762 | 3.0 | 936 | 0.9603 | 0.5361 |
| 1.0075 | 4.0 | 1248 | 0.9053 | 0.5671 |
| 0.9178 | 5.0 | 1560 | 0.8796 | 0.5823 |
| 0.9178 | 6.0 | 1872 | 0.8649 | 0.5934 |
| 0.8859 | 7.0 | 2184 | 0.8551 | 0.5977 |
| 0.8859 | 8.0 | 2496 | 0.8488 | 0.6033 |
| 0.8632 | 9.0 | 2808 | 0.8450 | 0.6057 |
| 0.8543 | 10.0 | 3120 | 0.8439 | 0.6063 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
ymgong/distil_train_token_classification_2 | ymgong | 2024-03-20T21:22:16Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-20T21:13:11Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distil_train_token_classification_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil_train_token_classification_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7646
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 0.5429 | 1.0 | 3681 | 0.5346 | 0.0 | 0.0 | 0.0 | 0.7848 |
| 0.4387 | 2.0 | 7362 | 0.5033 | 0.0 | 0.0 | 0.0 | 0.8034 |
| 0.3288 | 3.0 | 11043 | 0.4983 | 0.0 | 0.0 | 0.0 | 0.8101 |
| 0.2436 | 4.0 | 14724 | 0.5736 | 0.0 | 0.0 | 0.0 | 0.8086 |
| 0.1677 | 5.0 | 18405 | 0.6681 | 0.0 | 0.0 | 0.0 | 0.8107 |
| 0.1162 | 6.0 | 22086 | 0.7646 | 0.0 | 0.0 | 0.0 | 0.8112 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
biunlp/LongMt5-HeSum | biunlp | 2024-03-20T21:14:25Z | 109 | 0 | transformers | [
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"base_model:agemagician/mlong-t5-tglobal-base",
"base_model:finetune:agemagician/mlong-t5-tglobal-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-20T20:59:14Z | ---
license: apache-2.0
base_model: agemagician/mlong-t5-tglobal-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mlong-t5-tglobal-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlong-t5-tglobal-base
This model is a fine-tuned version of [agemagician/mlong-t5-tglobal-base](https://huggingface.co/agemagician/mlong-t5-tglobal-base) on the HeSum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1091
- Rouge1: 31.6099
- Rouge2: 12.9182
- Rougel: 23.8053
- Rougelsum: 25.5362
- Gen Len: 59.758
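A minimal inference sketch with 🤗 Transformers is shown below; it assumes the checkpoint loads as a standard seq2seq model, and the generation settings are illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "biunlp/LongMt5-HeSum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # a Hebrew news article to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=4096)
summary_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```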
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLSum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 500 | 2.2709 | 20.5043 | 8.1518 | 16.9526 | 17.5001 |
| 2.8714 | 2.0 | 1000 | 2.2022 | 21.4051 | 8.7445 | 17.7534 | 18.3191 |
| 2.8714 | 3.0 | 1500 | 2.1608 | 21.6609 | 9.1753 | 18.0374 | 18.6176 |
| 2.5137 | 4.0 | 2000 | 2.1555 | 21.6818 | 9.1814 | 18.0382 | 18.6198 |
| 2.5137 | 5.0 | 2500 | 2.1462 | 21.9708 | 9.2033 | 18.3919 | 18.9535 |
| 2.3717 | 6.0 | 3000 | 2.1258 | 22.0583 | 9.2987 | 18.4379 | 19.0322 |
| 2.3717 | 7.0 | 3500 | 2.1278 | 21.8245 | 9.0474 | 18.1979 | 18.8038 |
| 2.2633 | 8.0 | 4000 | 2.1207 | 21.6273 | 8.8847 | 18.024 | 18.6049 |
| 2.2633 | 9.0 | 4500 | 2.1180 | 22.2004 | 9.6253 | 18.6373 | 19.1721 |
| 2.1886 | 10.0 | 5000 | 2.1220 | 22.1619 | 9.6206 | 18.5069 | 19.0856 |
| 2.1886 | 11.0 | 5500 | 2.1161 | 22.1518 | 9.4522 | 18.4695 | 19.0552 |
| 2.1144 | 12.0 | 6000 | 2.1103 | 22.0395 | 9.4185 | 18.4314 | 19.0305 |
| 2.1144 | 13.0 | 6500 | 2.1150 | 22.2404 | 9.4722 | 18.5482 | 19.1747 |
| 2.054 | 14.0 | 7000 | 2.1091 | 22.1466 | 9.3434 | 18.3443 | 18.9233 |
| 2.0526 | 15.0 | 8000 | 2.1580 | 30.4149 | 2.0774 | 22.9493 | 24.4478 |
| 2.1236 | 16.0 | 16000 | 2.1621 | 31.3101 | 13.3237 | 23.8249 | 25.526 |
| 2.0776 | 17.0 | 24000 | 2.1607 | 30.9902 | 12.3753 | 23.0243 | 24.8308 |
| 1.9843 | 18.0 | 32000 | 2.1553 | 32.0603 | 13.4985 | 24.0775 | 25.9692 |
### Framework versions
- Transformers 4.38.2
- Pytorch 1.13.1+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Barreto-G/Taxi-v3 | Barreto-G | 2024-03-20T21:09:10Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-20T21:09:06Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the Deep RL course uses Gymnasium; classic `gym` also works for Taxi-v3

# `load_from_hub` is the pickle-loading helper defined in the Hugging Face Deep RL course (Unit 2)
model = load_from_hub(repo_id="Barreto-G/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
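A short follow-up sketch of acting greedily with the loaded Q-table — it assumes the pickled dict stores the table under `"qtable"` and a Gymnasium-style step API, as in the Deep RL course.
```python
import numpy as np

qtable = model["qtable"]          # assumed key, per the Deep RL course convention
state, info = env.reset()
done, total_reward = False, 0

while not done:
    action = int(np.argmax(qtable[state]))                    # greedy action
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print("episode return:", total_reward)
```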
|
suryaR-15/twitter-sentiment-extraction-distilbert | suryaR-15 | 2024-03-20T21:06:53Z | 119 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-21T09:59:38Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: twitter-sentiment-extraction-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-sentiment-extraction-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [Tweet Sentiment Extraction](https://www.kaggle.com/competitions/tweet-sentiment-extraction) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3328
- Accuracy: 0.8903
- F1: 0.8903
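For quick inference, a hedged sketch with the 🤗 `pipeline` API (the returned `LABEL_*` ids map to sentiment classes according to the training label encoding, which is not documented here):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="suryaR-15/twitter-sentiment-extraction-distilbert",
)
print(classifier("just got my coffee and the sun is out, today is going to be great"))
```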
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
naivoder/ppo-Huggy | naivoder | 2024-03-20T21:02:19Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-03-20T21:01:39Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: naivoder/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
thrunlab/Mistral_Sparse_refined_web_90p_2024-03-20 | thrunlab | 2024-03-20T20:55:11Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"sparse_mistral",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-03-20T20:15:21Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: Mistral_Sparse_refined_web_90p_2024-03-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral_Sparse_refined_web_90p_2024-03-20
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.5087
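Since the checkpoint uses a custom `sparse_mistral` architecture, loading it with 🤗 Transformers needs `trust_remote_code=True`; a hedged loading sketch:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "thrunlab/Mistral_Sparse_refined_web_90p_2024-03-20"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # required for the custom sparse_mistral modeling code
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```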
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Prathamesh25/Llama-2-7b-finetune-university | Prathamesh25 | 2024-03-20T20:51:27Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2024-03-20T20:43:48Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
MVRL/GeoSynth-OSM | MVRL | 2024-03-20T20:51:18Z | 26 | 0 | diffusers | [
"diffusers",
"safetensors",
"controlnet",
"stable-diffusion",
"satellite-imagery",
"OSM",
"image-to-image",
"arxiv:2302.05543",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:apache-2.0",
"region:us"
] | image-to-image | 2024-03-17T17:46:52Z | ---
library_name: diffusers
base_model: stabilityai/stable-diffusion-2-1-base
license: apache-2.0
widget:
- src: osm_tile_18_42048_101323.jpeg
prompt: Satellite image features a city neighborhood
tags:
- controlnet
- stable-diffusion
- satellite-imagery
- OSM
pipeline_tag: image-to-image
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is a ControlNet based model that synthesizes satellite images given OpenStreetMap Images. The base stable diffusion model used is [stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) (v2-1_512-ema-pruned.ckpt).
* Use it with 🧨 [diffusers](#examples)
* Use it with [controlnet](https://github.com/lllyasviel/ControlNet/tree/main?tab=readme-ov-file) repository
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [stable-diffusion](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)
- **Paper:** [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543)
## Examples
```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
import torch
from PIL import Image
img = Image.open("osm_tile_18_42048_101323.jpeg")
controlnet = ControlNetModel.from_pretrained("MVRL/GeoSynth-OSM")
pipe = StableDiffusionControlNetPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base", controlnet=controlnet)
pipe = pipe.to("cuda:0")
# generate image
generator = torch.manual_seed(10345340)
image = pipe(
"Satellite image features a city neighborhood",
generator=generator,
image=img,
).images[0]
image.save("generated_city.jpg")
```
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sharren/vit-augmentation | sharren | 2024-03-20T20:47:25Z | 192 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-03-20T20:15:14Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: vit-augmentation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-augmentation
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the skin-cancer dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4287
- Accuracy: 0.8592
- Precision: 0.8580
- Recall: 0.8592
- F1: 0.8574
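For inference, a hedged sketch with the 🤗 `pipeline` API (the image path below is a placeholder; class names follow the skin-cancer dataset's label set):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="sharren/vit-augmentation")

# Placeholder path - point this at a dermoscopic image
for prediction in classifier("skin_lesion.jpg", top_k=3):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```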
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 770
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9124 | 1.0 | 321 | 0.6025 | 0.7805 | 0.7788 | 0.7805 | 0.7683 |
| 0.5876 | 2.0 | 642 | 0.5819 | 0.7864 | 0.7990 | 0.7864 | 0.7820 |
| 0.5415 | 3.0 | 963 | 0.6149 | 0.8041 | 0.7943 | 0.8041 | 0.7865 |
| 0.4815 | 4.0 | 1284 | 0.4654 | 0.8294 | 0.8259 | 0.8294 | 0.8115 |
| 0.4263 | 5.0 | 1605 | 0.5481 | 0.8259 | 0.8315 | 0.8259 | 0.8023 |
| 0.3515 | 6.0 | 1926 | 0.4287 | 0.8592 | 0.8580 | 0.8592 | 0.8574 |
| 0.3144 | 7.0 | 2247 | 0.5005 | 0.8363 | 0.8320 | 0.8363 | 0.8270 |
| 0.2736 | 8.0 | 2568 | 0.5306 | 0.8294 | 0.8448 | 0.8294 | 0.8302 |
| 0.2519 | 9.0 | 2889 | 0.4733 | 0.8578 | 0.8534 | 0.8578 | 0.8534 |
| 0.2227 | 10.0 | 3210 | 0.4905 | 0.8585 | 0.8520 | 0.8585 | 0.8512 |
| 0.1724 | 11.0 | 3531 | 0.5050 | 0.8655 | 0.8671 | 0.8655 | 0.8628 |
| 0.1596 | 12.0 | 3852 | 0.5263 | 0.8686 | 0.8657 | 0.8686 | 0.8631 |
| 0.1397 | 13.0 | 4173 | 0.7043 | 0.8533 | 0.8703 | 0.8533 | 0.8488 |
| 0.1298 | 14.0 | 4494 | 0.6275 | 0.8679 | 0.8734 | 0.8679 | 0.8632 |
| 0.1029 | 15.0 | 4815 | 0.5564 | 0.8807 | 0.8776 | 0.8807 | 0.8772 |
| 0.0893 | 16.0 | 5136 | 0.5668 | 0.8804 | 0.8823 | 0.8804 | 0.8789 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
deepsynthbody/deepfake_gi_fastGAN | deepsynthbody | 2024-03-20T20:41:23Z | 62 | 2 | transformers | [
"transformers",
"pytorch",
"unconditional-image-generation",
"en",
"arxiv:2101.04775",
"license:mit",
"endpoints_compatible",
"region:us"
] | unconditional-image-generation | 2024-02-26T10:29:39Z | ---
license: mit
language:
- en
pipeline_tag: unconditional-image-generation
---
# This is an official repository for generating sample results from the FastGAN model presented in the paper SinGAN-Seg: Synthetic training data generation for medical image segmentation, available [here](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0267976)
# A Fast and Stable GAN for Small and High Resolution Imagesets - pytorch
The official PyTorch implementation of the FastGAN paper "Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis"; the paper can be found [here](https://arxiv.org/abs/2101.04775).
```bash
python generate_4ch_from_huggingface.py
``` |
ranchomacho/finally-mistral | ranchomacho | 2024-03-20T20:39:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-03-19T21:31:44Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
ingeol/facets_ep3_1234 | ingeol | 2024-03-20T20:38:36Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-20T20:37:53Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ingeol/facets_ep3_1234
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ingeol/facets_ep3_1234')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ingeol/facets_ep3_1234')
model = AutoModel.from_pretrained('ingeol/facets_ep3_1234')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ingeol/facets_ep3_1234)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3899 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`beir.losses.bpr_loss.BPRLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 7000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
sarak7/H4_320_769_v4 | sarak7 | 2024-03-20T20:36:45Z | 182 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-20T20:35:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sagravela/poca-SoccerTwos | sagravela | 2024-03-20T20:26:03Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-03-20T20:21:27Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: sagravela/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
llmware/slim-tags-3b | llmware | 2024-03-20T20:14:29Z | 251 | 4 | transformers | [
"transformers",
"pytorch",
"stablelm_epoch",
"text-generation",
"custom_code",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-03-16T08:26:50Z | ---
license: cc-by-sa-4.0
inference: false
---
# SLIM-TAGS-3B
<!-- Provide a quick summary of what the model is/does. -->
**slim-tags-3b** is a small, specialized function-calling model fine-tuned to extract and generate meaningful tags from a chunk of text.
Tags generally correspond to named entities, but will also include key objects, entities, and phrases that contribute meaningfully to the semantics of the text.
The model is invoked as a specialized 'tags' classifier function that outputs a python dictionary in the form of:
`{'tags': ['NASDAQ', 'S&P', 'Dow', 'Verizon', 'Netflix', ... ]}`
with the value items in the list generally being extracted from the source text.
The intended use of the model is to auto-generate tags for text that can be used to enhance search retrieval, categorization, or to extract named entities that can be used programmatically in follow-up queries or prompts. It can also be used for fact-checking as a secondary validation on a longer (separate) LLM output.
This model is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which, in turn, is a fine-tune of stabilityai/stablelm-3b-4e1t.
Each slim model has a 'quantized tool' version, e.g., [**'slim-tags-3b-tool'**](https://huggingface.co/llmware/slim-tags-3b-tool).
## Prompt format:
`function = "classify"`
`params = "tags"`
`prompt = "<human> " + {text} + "\n" + `
`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
<details>
<summary>Transformers Script </summary>
import ast
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-tags-3b")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-tags-3b")
function = "classify"
params = "tags"
text = "Citibank announced a reduction in its targets for economic growth in France and the UK last week in light of ongoing concerns about inflation and unemployment, especially in large employers such as Airbus."
prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])
outputs = model.generate(
inputs.input_ids.to('cpu'),
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.eos_token_id,
do_sample=True,
temperature=0.3,
max_new_tokens=100
)
output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)
print("output only: ", output_only)
# here's the fun part
try:
output_only = ast.literal_eval(output_only)
print("success - converted to python dictionary automatically")
except:
print("fail - could not convert to python dictionary automatically - ", llm_string_output)
</details>
<details>
<summary>Using as Function Call in LLMWare</summary>
from llmware.models import ModelCatalog
slim_model = ModelCatalog().load_model("llmware/slim-tags-3b")
response = slim_model.function_call(text,params=["tags"], function="classify")
print("llmware - llm_response: ", response)
</details>
## Model Card Contact
Darren Oberst & llmware team
[Join us on Discord](https://discord.gg/MhZn5Nc39h) |
SpideyDLK/wav2vec2-large-xls-r-300m-sinhala-low-LR-constant | SpideyDLK | 2024-03-20T20:08:47Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-20T08:28:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MatthieuJ/ING_2003M3_SLERP | MatthieuJ | 2024-03-20T20:03:45Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"chihoonlee10/T3Q-DPO-Mistral-7B",
"MatthieuJ/ING_2003M2_SLERP",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-20T19:58:54Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- chihoonlee10/T3Q-DPO-Mistral-7B
- MatthieuJ/ING_2003M2_SLERP
---
# ING_2003M3_SLERP
ING_2003M3_SLERP is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [chihoonlee10/T3Q-DPO-Mistral-7B](https://huggingface.co/chihoonlee10/T3Q-DPO-Mistral-7B)
* [MatthieuJ/ING_2003M2_SLERP](https://huggingface.co/MatthieuJ/ING_2003M2_SLERP)
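## 💻 Usage
A hedged usage sketch with 🤗 Transformers (standard causal-LM loading; the chat-template call assumes the merged tokenizer ships one — otherwise pass a plain prompt):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MatthieuJ/ING_2003M3_SLERP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Explain what a SLERP model merge is."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```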
## 🧩 Configuration
```yaml
slices:
- sources:
- model: chihoonlee10/T3Q-DPO-Mistral-7B
layer_range: [0, 32]
- model: MatthieuJ/ING_2003M2_SLERP
layer_range: [0, 32]
merge_method: slerp
base_model: MatthieuJ/ING_2003M2_SLERP
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
``` |
zrvicc/ppo-Pyramids | zrvicc | 2024-03-20T19:58:48Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2024-03-20T19:58:45Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: zrvicc/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sarak7/H15_320_769_v1 | sarak7 | 2024-03-20T19:57:29Z | 182 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-20T19:55:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
llmware/slim-xsum | llmware | 2024-03-20T19:51:15Z | 143 | 6 | transformers | [
"transformers",
"pytorch",
"stablelm_epoch",
"text-generation",
"custom_code",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-03-02T12:55:11Z | ---
license: cc-by-sa-4.0
inference: false
---
# SLIM-XSUM
<!-- Provide a quick summary of what the model is/does. -->
**slim-xsum** implements an 'extreme summarization' function as a function-call on a decoder-based LLM, which generates as output a python dictionary with the form of:
`{'xsum': ['This is a short text summary or headline.']}`
The intent of SLIMs is to forge a middle-ground between traditional encoder-based classifiers and open-ended API-based LLMs, providing an intuitive, flexible natural language response, without complex prompting, and with improved generalization and ability to fine-tune to a specific domain use case.
This model is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which, in turn, is a fine-tune of stabilityai/stablelm-3b-4e1t.
Each slim model has a 'quantized tool' version, e.g., [**'slim-xsum-tool'**](https://huggingface.co/llmware/slim-xsum-tool).
## Prompt format:
`function = "classify"`
`params = "xsum"`
`prompt = "<human> " + {text} + "\n" + `
`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
<details>
<summary>Transformers Script </summary>
import ast

from transformers import AutoModelForCausalLM, AutoTokenizer

# the repository ships custom model code (stablelm_epoch), so trust_remote_code is required
model = AutoModelForCausalLM.from_pretrained("llmware/slim-xsum", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-xsum", trust_remote_code=True)
function = "classify"
params = "xsum"
text = "DeepMind, the UK-based AI lab owned by Google’s parent company Alphabet, has developed an AI system called AlphaGeometry that can solve complex geometry problems close to human Olympiad gold medalists. In a new paper in Nature, DeepMind revealed that AlphaGeometry was able to solve 25 out of 30 benchmark geometry problems from past International Mathematical Olympiad (IMO) competitions within the standard time limits. This nearly matches the average score of 26 problems solved by human gold medalists on the same tests. The AI system combines a neural language model with a rule-bound deduction engine, providing a synergy that enables the system to find solutions to complex geometry theorems. AlphaGeometry took a revolutionary approach to synthetic data generation by creating one billion random diagrams of geometric objects and deriving relationships between points and lines in each diagram. This process – termed “symbolic deduction and traceback” – resulted in a final training dataset of 100 million unique examples, providing a rich source for training the AI system."
prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])
outputs = model.generate(
inputs.input_ids.to('cpu'),
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.eos_token_id,
do_sample=True,
temperature=0.3,
max_new_tokens=100
)
output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)
print("output only: ", output_only)
# here's the fun part
try:
    output_only = ast.literal_eval(output_only)
    print("success - converted to python dictionary automatically")
except (ValueError, SyntaxError):
    print("fail - could not convert to python dictionary automatically - ", output_only)
</details>
<details>
<summary>Using as Function Call in LLMWare</summary>
from llmware.models import ModelCatalog

# "text" is the passage to summarize, e.g., the example passage above
slim_model = ModelCatalog().load_model("llmware/slim-xsum")
response = slim_model.function_call(text, params=["xsum"], function="classify")
print("llmware - llm_response: ", response)
</details>
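The quantized 'tool' version mentioned above can be used in the same llmware pipeline; the following is a minimal sketch, assuming the tool model can be loaded by its Hugging Face repo id:

```python
from llmware.models import ModelCatalog

# hypothetical drop-in use of the quantized tool variant (llmware/slim-xsum-tool);
# "text" is the passage to summarize
xsum_tool = ModelCatalog().load_model("llmware/slim-xsum-tool")
response = xsum_tool.function_call(text, params=["xsum"], function="classify")
print("llm_response: ", response)
```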
## Model Card Contact
Darren Oberst & llmware team
[Join us on Discord](https://discord.gg/MhZn5Nc39h) |
automerger/YamshadowT3q-7B | automerger | 2024-03-20T19:49:19Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"base_model:finetune:chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-20T19:48:28Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
---
# YamshadowT3q-7B
YamshadowT3q-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
## 🧩 Configuration
```yaml
models:
- model: automerger/YamShadow-7B
# No parameters necessary for base model
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: automerger/YamShadow-7B
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
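For reference (not part of the original card), a configuration like the one above can typically be reproduced with the mergekit CLI; the file name and output directory below are placeholders:

```python
# install mergekit, the merge toolkit used by automerger / LazyMergekit
!pip install -qU git+https://github.com/arcee-ai/mergekit.git

# save the YAML configuration above as config.yaml, then run the merge;
# "merge-output" is a placeholder output directory
!mergekit-yaml config.yaml merge-output --copy-tokenizer
```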
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/YamshadowT3q-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
JapGuy/XindlX_Acoustic | JapGuy | 2024-03-20T19:48:15Z | 0 | 0 | null | [
"music",
"rvc",
"xindlx",
"xindl",
"ondřej",
"ládek",
"model",
"audio-to-audio",
"cs",
"license:openrail",
"region:us"
] | audio-to-audio | 2024-03-20T19:40:19Z | ---
license: openrail
language:
- cs
pipeline_tag: audio-to-audio
tags:
- music
- rvc
- xindlx
- xindl
- ondřej
- ládek
- model
---

# Xindl X [CZ] (Acoustic/Unpluggiat Mix)
# 645 Epochs - RVC V2 - rmvpe
Trained on 1 hour, 49 minutes and 14 seconds of isolated acapellas, extracted with UVR (Voc FT + Reverb HQ)
and cleaned up in Audacity to remove sections with doubled vocals and other singers' vocals (plus a noise gate) |