Dataset columns:

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 137 |
| author | string | length 2 to 42 |
| last_modified | date | 2020-02-15 11:33:14 to 2025-03-29 12:26:52 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 401 classes |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | date | 2022-03-02 23:29:04 to 2025-03-29 12:26:36 |
| card | string | length 11 to 1.01M |
thangla01/9002e85b-0fb1-48c3-b35a-cd261d39b45d | thangla01 | "2025-01-17T04:31:28Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-17T03:59:58Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9002e85b-0fb1-48c3-b35a-cd261d39b45d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 2a15fece1296f155_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/2a15fece1296f155_train_data.json
  type:
    field_instruction: instruction
    field_output: response
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thangla01/9002e85b-0fb1-48c3-b35a-cd261d39b45d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2a15fece1296f155_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3d6b3993-0224-4b37-9bf4-a7ac3a121fa2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3d6b3993-0224-4b37-9bf4-a7ac3a121fa2
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9002e85b-0fb1-48c3-b35a-cd261d39b45d
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2405
## Model description
More information needed
## Intended uses & limitations
More information needed
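This repository contains only the LoRA adapter produced by the run above, so inference requires loading the base model and attaching the adapter with PEFT. The following is a minimal sketch, not part of the original card; the prompt and generation settings are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2.5-Coder-7B-Instruct"
adapter_id = "thangla01/9002e85b-0fb1-48c3-b35a-cd261d39b45d"

# Load the base model and tokenizer, then attach the LoRA adapter from this repo
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Example prompt (made up for illustration)
prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```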
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.185 | 0.0225 | 200 | 0.2405 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
benitoals/mt5-lora | benitoals | "2025-03-16T19:22:55Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:adapter:google/mt5-small",
"license:apache-2.0",
"region:us"
] | null | "2025-03-16T15:09:19Z" | ---
library_name: peft
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-lora
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3226
- Rouge1: 7.8216
- Rouge2: 1.0545
- Rougel: 6.1432
- Rougelsum: 6.1446
## Model description
More information needed
## Intended uses & limitations
More information needed
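Because this repository holds a LoRA adapter rather than full weights, it has to be combined with the base mT5 model at load time. A minimal sketch follows (not from the original card; the input text is invented and the task is only inferred from the ROUGE metrics reported above).

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_id = "google/mt5-small"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)

# Attach the LoRA adapter published in this repository
model = PeftModel.from_pretrained(base, "benitoals/mt5-lora")

text = "Example input text for the adapted mT5 model."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```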
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 23.4174 | 0.0160 | 5 | 11.5276 | 0.5512 | 0.0509 | 0.5083 | 0.5105 |
| 20.8541 | 0.0319 | 10 | 11.3887 | 0.5208 | 0.0420 | 0.4838 | 0.4806 |
| 19.7415 | 0.0479 | 15 | 11.2833 | 0.5674 | 0.0379 | 0.5039 | 0.5039 |
| 22.3202 | 0.0639 | 20 | 11.1524 | 0.6424 | 0.0488 | 0.5638 | 0.5599 |
| 19.664 | 0.0799 | 25 | 11.0473 | 0.6588 | 0.0387 | 0.5976 | 0.5910 |
| 19.8018 | 0.0958 | 30 | 11.0452 | 0.6368 | 0.0306 | 0.5833 | 0.5773 |
| 19.3572 | 0.1118 | 35 | 11.0515 | 0.7488 | 0.0299 | 0.6835 | 0.6747 |
| 20.8368 | 0.1278 | 40 | 11.0434 | 0.8070 | 0.0621 | 0.7082 | 0.7025 |
| 22.0695 | 0.1438 | 45 | 10.9605 | 0.7058 | 0.0380 | 0.6236 | 0.6165 |
| 20.6973 | 0.1597 | 50 | 10.9030 | 0.7250 | 0.0346 | 0.6444 | 0.6367 |
| 20.2663 | 0.1757 | 55 | 10.7690 | 0.8067 | 0.0531 | 0.7106 | 0.7034 |
| 22.5977 | 0.1917 | 60 | 10.6166 | 0.8258 | 0.0646 | 0.7409 | 0.7427 |
| 17.239 | 0.2077 | 65 | 10.4542 | 0.8013 | 0.0736 | 0.7240 | 0.7238 |
| 19.9696 | 0.2236 | 70 | 10.2183 | 0.8306 | 0.0737 | 0.7532 | 0.7495 |
| 15.8901 | 0.2396 | 75 | 10.0682 | 0.8198 | 0.0683 | 0.7315 | 0.7295 |
| 15.3968 | 0.2556 | 80 | 9.9389 | 0.8226 | 0.0519 | 0.7350 | 0.7343 |
| 21.1744 | 0.2716 | 85 | 9.8182 | 0.8582 | 0.0829 | 0.7680 | 0.7633 |
| 19.8718 | 0.2875 | 90 | 9.7245 | 0.8604 | 0.0792 | 0.7585 | 0.7592 |
| 16.4366 | 0.3035 | 95 | 9.6151 | 0.8401 | 0.0741 | 0.7469 | 0.7479 |
| 17.3532 | 0.3195 | 100 | 9.5204 | 0.8644 | 0.0742 | 0.7623 | 0.7632 |
| 17.0363 | 0.3355 | 105 | 9.4045 | 0.9015 | 0.0792 | 0.8096 | 0.8049 |
| 18.5648 | 0.3514 | 110 | 9.2755 | 0.9620 | 0.0864 | 0.8438 | 0.8439 |
| 16.616 | 0.3674 | 115 | 9.1327 | 0.9881 | 0.0735 | 0.8607 | 0.8568 |
| 14.493 | 0.3834 | 120 | 8.9794 | 0.9963 | 0.0696 | 0.8583 | 0.8565 |
| 14.9013 | 0.3994 | 125 | 8.8291 | 1.0745 | 0.0856 | 0.9557 | 0.9567 |
| 13.4762 | 0.4153 | 130 | 8.6996 | 1.0927 | 0.1052 | 0.9579 | 0.9509 |
| 14.4945 | 0.4313 | 135 | 8.5736 | 1.1429 | 0.1006 | 1.0144 | 1.0141 |
| 13.5679 | 0.4473 | 140 | 8.4425 | 1.1295 | 0.0912 | 0.9998 | 0.9994 |
| 15.0019 | 0.4633 | 145 | 8.2860 | 1.1883 | 0.0962 | 1.0700 | 1.0662 |
| 14.1212 | 0.4792 | 150 | 8.1218 | 1.3312 | 0.1006 | 1.1960 | 1.1942 |
| 12.7002 | 0.4952 | 155 | 7.9539 | 1.4626 | 0.1199 | 1.2780 | 1.2770 |
| 13.1474 | 0.5112 | 160 | 7.7965 | 1.3940 | 0.1190 | 1.2474 | 1.2436 |
| 14.5109 | 0.5272 | 165 | 7.6643 | 1.6103 | 0.1058 | 1.4796 | 1.4798 |
| 12.6913 | 0.5431 | 170 | 7.5470 | 1.7523 | 0.1230 | 1.5815 | 1.5751 |
| 11.4756 | 0.5591 | 175 | 7.4656 | 1.9158 | 0.1608 | 1.7088 | 1.7119 |
| 11.9234 | 0.5751 | 180 | 7.3834 | 1.9540 | 0.1774 | 1.7611 | 1.7598 |
| 11.0724 | 0.5911 | 185 | 7.2629 | 1.9871 | 0.1552 | 1.7926 | 1.7887 |
| 11.6507 | 0.6070 | 190 | 7.1034 | 1.9782 | 0.1410 | 1.7966 | 1.7902 |
| 11.9906 | 0.6230 | 195 | 6.9759 | 2.2038 | 0.1563 | 1.9806 | 1.9726 |
| 10.8749 | 0.6390 | 200 | 6.8845 | 2.2981 | 0.1679 | 2.0372 | 2.0295 |
| 10.6337 | 0.6550 | 205 | 6.8046 | 2.3401 | 0.1510 | 2.0842 | 2.0849 |
| 10.4566 | 0.6709 | 210 | 6.7251 | 2.4901 | 0.1521 | 2.2305 | 2.2257 |
| 10.3598 | 0.6869 | 215 | 6.6457 | 2.7227 | 0.1648 | 2.4467 | 2.4454 |
| 9.4165 | 0.7029 | 220 | 6.5906 | 2.8052 | 0.1248 | 2.5173 | 2.5198 |
| 9.4989 | 0.7188 | 225 | 6.5359 | 3.1011 | 0.1626 | 2.8119 | 2.8119 |
| 9.1349 | 0.7348 | 230 | 6.4783 | 3.2991 | 0.2224 | 2.9851 | 2.9873 |
| 9.7813 | 0.7508 | 235 | 6.4346 | 3.2898 | 0.1813 | 2.9987 | 3.0018 |
| 8.7862 | 0.7668 | 240 | 6.3906 | 3.2366 | 0.1717 | 2.9376 | 2.9315 |
| 8.5808 | 0.7827 | 245 | 6.3494 | 3.4441 | 0.1838 | 3.1485 | 3.1404 |
| 8.3688 | 0.7987 | 250 | 6.3160 | 3.4652 | 0.2120 | 3.0932 | 3.0858 |
| 7.8728 | 0.8147 | 255 | 6.1939 | 3.5126 | 0.2161 | 3.1197 | 3.1101 |
| 7.7791 | 0.8307 | 260 | 6.0422 | 3.5680 | 0.2148 | 3.2478 | 3.2502 |
| 7.6755 | 0.8466 | 265 | 5.9109 | 3.4946 | 0.2339 | 3.1731 | 3.1703 |
| 7.4162 | 0.8626 | 270 | 5.8052 | 3.5776 | 0.2309 | 3.2148 | 3.2215 |
| 7.2342 | 0.8786 | 275 | 5.7194 | 3.3818 | 0.1998 | 3.0493 | 3.0476 |
| 7.1504 | 0.8946 | 280 | 5.6598 | 3.1740 | 0.1684 | 2.8844 | 2.8834 |
| 7.0159 | 0.9105 | 285 | 5.6208 | 2.9711 | 0.1415 | 2.7126 | 2.7104 |
| 7.1424 | 0.9265 | 290 | 5.5964 | 2.7683 | 0.0634 | 2.5813 | 2.5837 |
| 6.7451 | 0.9425 | 295 | 5.5818 | 2.7456 | 0.0568 | 2.5982 | 2.5947 |
| 6.7401 | 0.9585 | 300 | 5.5601 | 2.6545 | 0.0280 | 2.5506 | 2.5347 |
| 6.7725 | 0.9744 | 305 | 5.5474 | 2.6375 | 0.0225 | 2.5254 | 2.5062 |
| 6.6 | 0.9904 | 310 | 5.5262 | 2.6241 | 0.0318 | 2.5260 | 2.5160 |
| 6.7553 | 1.0064 | 315 | 5.5075 | 2.7581 | 0.0752 | 2.6420 | 2.6236 |
| 6.7118 | 1.0224 | 320 | 5.4911 | 2.8034 | 0.0704 | 2.6557 | 2.6419 |
| 6.5138 | 1.0383 | 325 | 5.4664 | 2.8934 | 0.0729 | 2.7440 | 2.7363 |
| 6.3071 | 1.0543 | 330 | 5.4398 | 2.9828 | 0.0718 | 2.8381 | 2.8348 |
| 6.2783 | 1.0703 | 335 | 5.4021 | 3.0171 | 0.0941 | 2.8712 | 2.8618 |
| 6.1266 | 1.0863 | 340 | 5.3589 | 3.1202 | 0.1625 | 2.9845 | 2.9727 |
| 6.1881 | 1.1022 | 345 | 5.3167 | 3.3116 | 0.2156 | 3.1289 | 3.1196 |
| 6.2642 | 1.1182 | 350 | 5.2739 | 3.4642 | 0.2852 | 3.2263 | 3.2161 |
| 6.146 | 1.1342 | 355 | 5.2336 | 3.5524 | 0.2905 | 3.2935 | 3.2832 |
| 6.1432 | 1.1502 | 360 | 5.1949 | 3.6599 | 0.2973 | 3.4063 | 3.3878 |
| 5.9649 | 1.1661 | 365 | 5.1552 | 3.7014 | 0.3259 | 3.4428 | 3.4269 |
| 6.2828 | 1.1821 | 370 | 5.1219 | 3.8202 | 0.3492 | 3.5719 | 3.5633 |
| 5.9083 | 1.1981 | 375 | 5.0893 | 4.0190 | 0.3812 | 3.7140 | 3.7099 |
| 5.9194 | 1.2141 | 380 | 5.0534 | 4.2639 | 0.4082 | 3.9475 | 3.9430 |
| 5.8114 | 1.2300 | 385 | 5.0187 | 4.4058 | 0.4498 | 4.0691 | 4.0584 |
| 5.9099 | 1.2460 | 390 | 4.9859 | 4.6188 | 0.4248 | 4.2003 | 4.1816 |
| 6.9239 | 1.2620 | 395 | 4.9597 | 4.6864 | 0.3851 | 4.2822 | 4.2767 |
| 5.8784 | 1.2780 | 400 | 4.9367 | 4.7774 | 0.4615 | 4.3299 | 4.3268 |
| 5.6944 | 1.2939 | 405 | 4.9154 | 4.7518 | 0.4225 | 4.2943 | 4.2835 |
| 5.7841 | 1.3099 | 410 | 4.8942 | 5.0763 | 0.5001 | 4.5626 | 4.5506 |
| 5.4959 | 1.3259 | 415 | 4.8727 | 5.1974 | 0.4848 | 4.6204 | 4.6136 |
| 5.7918 | 1.3419 | 420 | 4.8514 | 5.3658 | 0.5150 | 4.7269 | 4.7229 |
| 5.564 | 1.3578 | 425 | 4.8315 | 5.4525 | 0.5188 | 4.7995 | 4.7949 |
| 5.7502 | 1.3738 | 430 | 4.8120 | 5.5823 | 0.5460 | 4.8396 | 4.8402 |
| 5.7183 | 1.3898 | 435 | 4.7962 | 5.6605 | 0.5128 | 4.9291 | 4.9355 |
| 5.6075 | 1.4058 | 440 | 4.7827 | 5.8504 | 0.5399 | 5.0957 | 5.0945 |
| 5.4517 | 1.4217 | 445 | 4.7698 | 5.9596 | 0.5882 | 5.0729 | 5.0818 |
| 5.5904 | 1.4377 | 450 | 4.7567 | 6.3422 | 0.6041 | 5.3449 | 5.3427 |
| 5.5302 | 1.4537 | 455 | 4.7425 | 6.5357 | 0.6688 | 5.4570 | 5.4494 |
| 5.4474 | 1.4696 | 460 | 4.7274 | 6.6389 | 0.6350 | 5.5574 | 5.5418 |
| 5.5081 | 1.4856 | 465 | 4.7142 | 6.7687 | 0.7230 | 5.6597 | 5.6488 |
| 5.7388 | 1.5016 | 470 | 4.7033 | 6.7877 | 0.6756 | 5.6965 | 5.6844 |
| 5.7518 | 1.5176 | 475 | 4.6940 | 6.7742 | 0.6820 | 5.6947 | 5.6833 |
| 5.9692 | 1.5335 | 480 | 4.6832 | 6.7753 | 0.6728 | 5.7376 | 5.7326 |
| 5.5493 | 1.5495 | 485 | 4.6752 | 6.7782 | 0.6779 | 5.7264 | 5.7236 |
| 5.4517 | 1.5655 | 490 | 4.6660 | 6.7433 | 0.6677 | 5.6678 | 5.6627 |
| 5.4072 | 1.5815 | 495 | 4.6544 | 6.7451 | 0.6685 | 5.6435 | 5.6459 |
| 5.3341 | 1.5974 | 500 | 4.6415 | 6.7146 | 0.6591 | 5.6348 | 5.6298 |
| 5.5638 | 1.6134 | 505 | 4.6278 | 6.9514 | 0.7494 | 5.7740 | 5.7623 |
| 5.4438 | 1.6294 | 510 | 4.6193 | 6.9315 | 0.7450 | 5.8160 | 5.8072 |
| 5.2225 | 1.6454 | 515 | 4.6121 | 7.0183 | 0.7255 | 5.8563 | 5.8436 |
| 5.3059 | 1.6613 | 520 | 4.6048 | 7.0723 | 0.7431 | 5.9316 | 5.9195 |
| 5.249 | 1.6773 | 525 | 4.5995 | 7.1713 | 0.7601 | 6.0056 | 5.9905 |
| 5.4208 | 1.6933 | 530 | 4.5945 | 7.2201 | 0.7667 | 5.9700 | 5.9527 |
| 5.8052 | 1.7093 | 535 | 4.5898 | 7.2075 | 0.7749 | 5.9898 | 5.9838 |
| 5.5609 | 1.7252 | 540 | 4.5833 | 7.2178 | 0.8101 | 5.9825 | 5.9742 |
| 5.3695 | 1.7412 | 545 | 4.5759 | 7.1889 | 0.8046 | 5.9291 | 5.9202 |
| 5.3855 | 1.7572 | 550 | 4.5688 | 7.2416 | 0.8298 | 6.0218 | 6.0193 |
| 5.2254 | 1.7732 | 555 | 4.5620 | 7.1364 | 0.7926 | 5.9165 | 5.9173 |
| 5.2946 | 1.7891 | 560 | 4.5562 | 7.1300 | 0.8436 | 5.8809 | 5.8734 |
| 5.1469 | 1.8051 | 565 | 4.5523 | 7.0630 | 0.8550 | 5.8875 | 5.8791 |
| 5.4316 | 1.8211 | 570 | 4.5476 | 7.1515 | 0.8185 | 5.8968 | 5.9001 |
| 5.4154 | 1.8371 | 575 | 4.5434 | 7.0949 | 0.8109 | 5.8768 | 5.8783 |
| 5.3236 | 1.8530 | 580 | 4.5393 | 7.0200 | 0.8220 | 5.8357 | 5.8337 |
| 5.3977 | 1.8690 | 585 | 4.5353 | 6.8813 | 0.7518 | 5.7547 | 5.7480 |
| 5.231 | 1.8850 | 590 | 4.5306 | 6.9975 | 0.8231 | 5.8376 | 5.8361 |
| 5.1977 | 1.9010 | 595 | 4.5253 | 6.9867 | 0.7840 | 5.7865 | 5.7865 |
| 5.1508 | 1.9169 | 600 | 4.5183 | 6.9963 | 0.7654 | 5.7866 | 5.7769 |
| 5.3122 | 1.9329 | 605 | 4.5135 | 7.0253 | 0.7531 | 5.8269 | 5.8236 |
| 5.2965 | 1.9489 | 610 | 4.5090 | 7.1005 | 0.7956 | 5.8740 | 5.8678 |
| 5.2763 | 1.9649 | 615 | 4.5049 | 7.2127 | 0.8061 | 5.9695 | 5.9634 |
| 5.197 | 1.9808 | 620 | 4.5003 | 7.2466 | 0.8646 | 5.9926 | 5.9920 |
| 5.06 | 1.9968 | 625 | 4.4960 | 7.1826 | 0.8737 | 5.9518 | 5.9560 |
| 5.197 | 2.0128 | 630 | 4.4950 | 7.1843 | 0.8755 | 5.9126 | 5.9132 |
| 5.1263 | 2.0288 | 635 | 4.4935 | 7.2493 | 0.8757 | 5.9768 | 5.9753 |
| 5.2239 | 2.0447 | 640 | 4.4904 | 7.1220 | 0.8488 | 5.8750 | 5.8761 |
| 5.4354 | 2.0607 | 645 | 4.4874 | 7.1138 | 0.8524 | 5.8486 | 5.8486 |
| 5.8528 | 2.0767 | 650 | 4.4851 | 7.1801 | 0.8384 | 5.9110 | 5.9093 |
| 5.1073 | 2.0927 | 655 | 4.4841 | 7.1465 | 0.8200 | 5.8586 | 5.8598 |
| 5.2021 | 2.1086 | 660 | 4.4827 | 7.1738 | 0.8648 | 5.9139 | 5.9107 |
| 5.2228 | 2.1246 | 665 | 4.4818 | 7.1615 | 0.8452 | 5.8823 | 5.8806 |
| 5.1512 | 2.1406 | 670 | 4.4792 | 7.2016 | 0.8799 | 5.9256 | 5.9243 |
| 5.0959 | 2.1565 | 675 | 4.4749 | 7.2563 | 0.8906 | 5.9145 | 5.9190 |
| 5.0816 | 2.1725 | 680 | 4.4706 | 7.2635 | 0.9198 | 5.9403 | 5.9434 |
| 5.2325 | 2.1885 | 685 | 4.4672 | 7.2539 | 0.8930 | 5.9365 | 5.9373 |
| 5.2439 | 2.2045 | 690 | 4.4647 | 7.1329 | 0.8859 | 5.8597 | 5.8618 |
| 5.3669 | 2.2204 | 695 | 4.4639 | 7.1435 | 0.8830 | 5.8719 | 5.8697 |
| 5.1739 | 2.2364 | 700 | 4.4615 | 7.1627 | 0.8747 | 5.9314 | 5.9293 |
| 5.1589 | 2.2524 | 705 | 4.4581 | 7.1976 | 0.8329 | 5.9189 | 5.9181 |
| 5.0201 | 2.2684 | 710 | 4.4548 | 7.1503 | 0.8330 | 5.8295 | 5.8288 |
| 5.1782 | 2.2843 | 715 | 4.4524 | 7.0478 | 0.8211 | 5.7872 | 5.7877 |
| 5.4161 | 2.3003 | 720 | 4.4512 | 7.0227 | 0.8004 | 5.7695 | 5.7741 |
| 5.2066 | 2.3163 | 725 | 4.4483 | 7.0058 | 0.8090 | 5.7595 | 5.7571 |
| 5.2428 | 2.3323 | 730 | 4.4456 | 6.9623 | 0.8100 | 5.7154 | 5.7128 |
| 5.2263 | 2.3482 | 735 | 4.4429 | 7.0183 | 0.8002 | 5.7599 | 5.7542 |
| 5.1332 | 2.3642 | 740 | 4.4402 | 7.1283 | 0.8214 | 5.8456 | 5.8473 |
| 5.2223 | 2.3802 | 745 | 4.4370 | 7.1333 | 0.8331 | 5.8713 | 5.8781 |
| 5.0942 | 2.3962 | 750 | 4.4358 | 7.2237 | 0.8484 | 5.9213 | 5.9230 |
| 5.1686 | 2.4121 | 755 | 4.4354 | 7.2068 | 0.8469 | 5.9262 | 5.9210 |
| 5.1731 | 2.4281 | 760 | 4.4340 | 7.2619 | 0.8603 | 5.9468 | 5.9409 |
| 5.1303 | 2.4441 | 765 | 4.4309 | 7.2012 | 0.8455 | 5.8986 | 5.8969 |
| 4.9487 | 2.4601 | 770 | 4.4286 | 7.1008 | 0.8057 | 5.8042 | 5.8038 |
| 5.0781 | 2.4760 | 775 | 4.4271 | 7.0506 | 0.7662 | 5.7615 | 5.7577 |
| 5.1239 | 2.4920 | 780 | 4.4255 | 7.0936 | 0.8019 | 5.8001 | 5.7966 |
| 5.0973 | 2.5080 | 785 | 4.4233 | 7.1104 | 0.8471 | 5.8333 | 5.8377 |
| 5.047 | 2.5240 | 790 | 4.4226 | 7.1011 | 0.8421 | 5.8017 | 5.8025 |
| 5.1145 | 2.5399 | 795 | 4.4214 | 7.1758 | 0.8452 | 5.8921 | 5.8885 |
| 5.1569 | 2.5559 | 800 | 4.4199 | 7.2287 | 0.8429 | 5.9401 | 5.9358 |
| 5.0929 | 2.5719 | 805 | 4.4186 | 7.2605 | 0.8465 | 5.9684 | 5.9706 |
| 4.9979 | 2.5879 | 810 | 4.4167 | 7.2952 | 0.8723 | 5.9564 | 5.9586 |
| 5.2416 | 2.6038 | 815 | 4.4148 | 7.3345 | 0.8723 | 5.9941 | 5.9916 |
| 5.1275 | 2.6198 | 820 | 4.4122 | 7.3157 | 0.8731 | 5.9902 | 5.9799 |
| 5.0442 | 2.6358 | 825 | 4.4109 | 7.3031 | 0.8777 | 5.9888 | 5.9800 |
| 5.02 | 2.6518 | 830 | 4.4111 | 7.3503 | 0.8800 | 6.0046 | 5.9926 |
| 5.0734 | 2.6677 | 835 | 4.4122 | 7.3183 | 0.8784 | 5.9362 | 5.9262 |
| 5.078 | 2.6837 | 840 | 4.4128 | 7.3182 | 0.8817 | 5.9506 | 5.9443 |
| 4.9815 | 2.6997 | 845 | 4.4116 | 7.2872 | 0.9068 | 5.9426 | 5.9339 |
| 4.9768 | 2.7157 | 850 | 4.4089 | 7.3339 | 0.8839 | 6.0006 | 5.9917 |
| 5.032 | 2.7316 | 855 | 4.4048 | 7.3529 | 0.8972 | 6.0608 | 6.0543 |
| 5.025 | 2.7476 | 860 | 4.4027 | 7.3729 | 0.9448 | 6.0690 | 6.0629 |
| 5.0341 | 2.7636 | 865 | 4.4008 | 7.2963 | 0.9054 | 6.0141 | 6.0132 |
| 4.9091 | 2.7796 | 870 | 4.4007 | 7.2958 | 0.8849 | 6.0324 | 6.0364 |
| 5.0662 | 2.7955 | 875 | 4.4016 | 7.3117 | 0.8884 | 6.0305 | 6.0244 |
| 5.2129 | 2.8115 | 880 | 4.4022 | 7.2543 | 0.8930 | 5.9937 | 5.9829 |
| 5.1673 | 2.8275 | 885 | 4.4010 | 7.3243 | 0.8953 | 6.0491 | 6.0441 |
| 5.0533 | 2.8435 | 890 | 4.3979 | 7.2927 | 0.8748 | 6.0153 | 6.0086 |
| 5.0917 | 2.8594 | 895 | 4.3944 | 7.3091 | 0.9064 | 6.0178 | 6.0094 |
| 5.2621 | 2.8754 | 900 | 4.3935 | 7.2308 | 0.8968 | 5.9607 | 5.9525 |
| 4.9642 | 2.8914 | 905 | 4.3933 | 7.2551 | 0.8896 | 5.9743 | 5.9692 |
| 5.013 | 2.9073 | 910 | 4.3922 | 7.2437 | 0.8894 | 5.9721 | 5.9652 |
| 5.0455 | 2.9233 | 915 | 4.3906 | 7.2870 | 0.9207 | 5.9824 | 5.9747 |
| 5.1566 | 2.9393 | 920 | 4.3894 | 7.3241 | 0.9130 | 6.0086 | 5.9984 |
| 5.1624 | 2.9553 | 925 | 4.3892 | 7.3109 | 0.9082 | 5.9725 | 5.9633 |
| 4.9393 | 2.9712 | 930 | 4.3880 | 7.3367 | 0.9046 | 5.9947 | 5.9904 |
| 5.0442 | 2.9872 | 935 | 4.3868 | 7.3301 | 0.8972 | 5.9785 | 5.9684 |
| 5.1003 | 3.0032 | 940 | 4.3850 | 7.2628 | 0.8845 | 5.9464 | 5.9399 |
| 5.0953 | 3.0192 | 945 | 4.3836 | 7.2418 | 0.8976 | 5.9284 | 5.9251 |
| 5.0498 | 3.0351 | 950 | 4.3827 | 7.2159 | 0.8875 | 5.8910 | 5.8869 |
| 4.9049 | 3.0511 | 955 | 4.3810 | 7.2497 | 0.9309 | 5.9163 | 5.9116 |
| 5.1671 | 3.0671 | 960 | 4.3788 | 7.2706 | 0.9297 | 5.9437 | 5.9386 |
| 5.039 | 3.0831 | 965 | 4.3772 | 7.2784 | 0.9169 | 5.9150 | 5.9039 |
| 4.9631 | 3.0990 | 970 | 4.3764 | 7.2486 | 0.9009 | 5.8804 | 5.8762 |
| 5.0452 | 3.1150 | 975 | 4.3759 | 7.2401 | 0.8922 | 5.8692 | 5.8657 |
| 5.0414 | 3.1310 | 980 | 4.3750 | 7.2768 | 0.8755 | 5.9131 | 5.9067 |
| 5.0543 | 3.1470 | 985 | 4.3746 | 7.3163 | 0.9281 | 5.9206 | 5.9190 |
| 5.0062 | 3.1629 | 990 | 4.3747 | 7.4349 | 0.9639 | 6.0304 | 6.0215 |
| 5.0441 | 3.1789 | 995 | 4.3762 | 7.4030 | 0.9699 | 6.0210 | 6.0136 |
| 5.0549 | 3.1949 | 1000 | 4.3763 | 7.4348 | 0.9795 | 6.0078 | 6.0003 |
| 4.8066 | 3.2109 | 1005 | 4.3762 | 7.3923 | 0.9642 | 5.9953 | 5.9818 |
| 4.9398 | 3.2268 | 1010 | 4.3762 | 7.3906 | 0.9577 | 5.9968 | 5.9852 |
| 4.9251 | 3.2428 | 1015 | 4.3758 | 7.4300 | 0.9411 | 6.0545 | 6.0482 |
| 4.9915 | 3.2588 | 1020 | 4.3758 | 7.4863 | 0.9828 | 6.0976 | 6.0841 |
| 5.0957 | 3.2748 | 1025 | 4.3752 | 7.4985 | 0.9803 | 6.1357 | 6.1216 |
| 5.1146 | 3.2907 | 1030 | 4.3740 | 7.5046 | 0.9575 | 6.1345 | 6.1203 |
| 5.1074 | 3.3067 | 1035 | 4.3727 | 7.5027 | 0.9469 | 6.0929 | 6.0845 |
| 4.88 | 3.3227 | 1040 | 4.3711 | 7.4661 | 0.9356 | 6.0458 | 6.0364 |
| 4.922 | 3.3387 | 1045 | 4.3693 | 7.4812 | 0.9686 | 6.0729 | 6.0572 |
| 5.0247 | 3.3546 | 1050 | 4.3667 | 7.4387 | 0.9287 | 6.0286 | 6.0156 |
| 4.9925 | 3.3706 | 1055 | 4.3645 | 7.4062 | 0.9259 | 5.9774 | 5.9694 |
| 5.0598 | 3.3866 | 1060 | 4.3631 | 7.4506 | 0.9642 | 5.9859 | 5.9719 |
| 5.1107 | 3.4026 | 1065 | 4.3620 | 7.4497 | 0.9831 | 5.9838 | 5.9676 |
| 5.9375 | 3.4185 | 1070 | 4.3621 | 7.4267 | 0.9827 | 5.9516 | 5.9419 |
| 5.0654 | 3.4345 | 1075 | 4.3630 | 7.4204 | 0.9749 | 5.9540 | 5.9443 |
| 5.0002 | 3.4505 | 1080 | 4.3650 | 7.4044 | 0.9337 | 5.9603 | 5.9516 |
| 4.9958 | 3.4665 | 1085 | 4.3672 | 7.4358 | 0.9545 | 5.9941 | 5.9836 |
| 4.9601 | 3.4824 | 1090 | 4.3681 | 7.4650 | 0.9612 | 6.0129 | 6.0008 |
| 5.0984 | 3.4984 | 1095 | 4.3687 | 7.3706 | 0.9345 | 5.9709 | 5.9548 |
| 5.0266 | 3.5144 | 1100 | 4.3684 | 7.3913 | 0.9382 | 5.9680 | 5.9598 |
| 4.8365 | 3.5304 | 1105 | 4.3686 | 7.3994 | 0.9509 | 5.9680 | 5.9572 |
| 4.8761 | 3.5463 | 1110 | 4.3675 | 7.4688 | 0.9509 | 5.9876 | 5.9789 |
| 4.9711 | 3.5623 | 1115 | 4.3662 | 7.4356 | 0.9500 | 5.9815 | 5.9745 |
| 5.027 | 3.5783 | 1120 | 4.3660 | 7.4089 | 0.9309 | 5.9851 | 5.9710 |
| 4.8545 | 3.5942 | 1125 | 4.3662 | 7.4501 | 0.9371 | 6.0094 | 5.9983 |
| 4.8711 | 3.6102 | 1130 | 4.3666 | 7.4912 | 0.9437 | 6.0361 | 6.0276 |
| 4.9593 | 3.6262 | 1135 | 4.3661 | 7.5101 | 0.9449 | 6.0638 | 6.0620 |
| 5.0499 | 3.6422 | 1140 | 4.3654 | 7.5779 | 0.9641 | 6.0859 | 6.0758 |
| 5.1807 | 3.6581 | 1145 | 4.3647 | 7.6076 | 0.9619 | 6.0981 | 6.0904 |
| 4.9862 | 3.6741 | 1150 | 4.3630 | 7.6010 | 0.9717 | 6.0660 | 6.0576 |
| 4.8606 | 3.6901 | 1155 | 4.3617 | 7.5719 | 0.9763 | 6.0416 | 6.0323 |
| 5.1017 | 3.7061 | 1160 | 4.3609 | 7.5500 | 0.9730 | 6.0361 | 6.0249 |
| 5.145 | 3.7220 | 1165 | 4.3596 | 7.5079 | 0.9484 | 5.9961 | 5.9835 |
| 5.0378 | 3.7380 | 1170 | 4.3590 | 7.4708 | 0.9639 | 5.9879 | 5.9769 |
| 5.1457 | 3.7540 | 1175 | 4.3584 | 7.4506 | 0.9670 | 5.9667 | 5.9589 |
| 4.8238 | 3.7700 | 1180 | 4.3572 | 7.5186 | 0.9703 | 5.9893 | 5.9760 |
| 5.0649 | 3.7859 | 1185 | 4.3552 | 7.5171 | 0.9449 | 5.9562 | 5.9513 |
| 5.2019 | 3.8019 | 1190 | 4.3540 | 7.5922 | 0.9901 | 6.0043 | 5.9924 |
| 4.9544 | 3.8179 | 1195 | 4.3534 | 7.6158 | 0.9749 | 5.9965 | 5.9857 |
| 5.0737 | 3.8339 | 1200 | 4.3523 | 7.6583 | 0.9888 | 6.0326 | 6.0217 |
| 5.0164 | 3.8498 | 1205 | 4.3516 | 7.6724 | 0.9962 | 6.0160 | 6.0052 |
| 5.0842 | 3.8658 | 1210 | 4.3509 | 7.5698 | 0.9960 | 5.9851 | 5.9752 |
| 4.8723 | 3.8818 | 1215 | 4.3501 | 7.5539 | 0.9918 | 5.9883 | 5.9771 |
| 4.9591 | 3.8978 | 1220 | 4.3497 | 7.4666 | 0.9714 | 5.9291 | 5.9206 |
| 4.9407 | 3.9137 | 1225 | 4.3491 | 7.4850 | 0.9716 | 5.9482 | 5.9378 |
| 4.9529 | 3.9297 | 1230 | 4.3488 | 7.4749 | 0.9759 | 5.9510 | 5.9453 |
| 4.7896 | 3.9457 | 1235 | 4.3486 | 7.4730 | 0.9514 | 5.9567 | 5.9470 |
| 4.9939 | 3.9617 | 1240 | 4.3479 | 7.4929 | 0.9907 | 5.9980 | 5.9886 |
| 4.9954 | 3.9776 | 1245 | 4.3468 | 7.5651 | 1.0185 | 6.0080 | 6.0022 |
| 5.0677 | 3.9936 | 1250 | 4.3460 | 7.5604 | 1.0443 | 6.0228 | 6.0100 |
| 4.8667 | 4.0096 | 1255 | 4.3460 | 7.5730 | 1.0380 | 6.0162 | 6.0022 |
| 4.9784 | 4.0256 | 1260 | 4.3454 | 7.5657 | 1.0282 | 5.9908 | 5.9860 |
| 4.8794 | 4.0415 | 1265 | 4.3447 | 7.5704 | 1.0411 | 5.9863 | 5.9772 |
| 4.9753 | 4.0575 | 1270 | 4.3439 | 7.5492 | 0.9916 | 5.9362 | 5.9300 |
| 4.8115 | 4.0735 | 1275 | 4.3434 | 7.5454 | 0.9888 | 5.9454 | 5.9433 |
| 4.9679 | 4.0895 | 1280 | 4.3421 | 7.5914 | 1.0105 | 5.9620 | 5.9625 |
| 4.9535 | 4.1054 | 1285 | 4.3412 | 7.6417 | 1.0420 | 5.9848 | 5.9852 |
| 5.0465 | 4.1214 | 1290 | 4.3410 | 7.6500 | 1.0408 | 5.9770 | 5.9797 |
| 4.9678 | 4.1374 | 1295 | 4.3407 | 7.6556 | 1.0283 | 5.9699 | 5.9734 |
| 4.8975 | 4.1534 | 1300 | 4.3408 | 7.6201 | 1.0222 | 5.9561 | 5.9589 |
| 5.073 | 4.1693 | 1305 | 4.3405 | 7.6351 | 1.0144 | 5.9511 | 5.9503 |
| 5.0291 | 4.1853 | 1310 | 4.3398 | 7.6253 | 1.0094 | 5.9875 | 5.9866 |
| 4.7808 | 4.2013 | 1315 | 4.3404 | 7.6265 | 1.0080 | 5.9924 | 5.9872 |
| 5.0118 | 4.2173 | 1320 | 4.3406 | 7.6036 | 1.0131 | 5.9951 | 5.9944 |
| 4.9147 | 4.2332 | 1325 | 4.3408 | 7.6478 | 1.0060 | 5.9993 | 5.9963 |
| 5.2196 | 4.2492 | 1330 | 4.3409 | 7.6920 | 1.0183 | 6.0359 | 6.0368 |
| 4.7923 | 4.2652 | 1335 | 4.3402 | 7.7291 | 1.0206 | 6.0770 | 6.0775 |
| 5.2416 | 4.2812 | 1340 | 4.3389 | 7.7189 | 1.0336 | 6.0734 | 6.0690 |
| 4.9129 | 4.2971 | 1345 | 4.3371 | 7.7187 | 1.0641 | 6.0790 | 6.0741 |
| 4.8426 | 4.3131 | 1350 | 4.3360 | 7.7236 | 1.0619 | 6.0650 | 6.0600 |
| 4.9097 | 4.3291 | 1355 | 4.3350 | 7.7200 | 1.0503 | 6.0460 | 6.0423 |
| 4.812 | 4.3450 | 1360 | 4.3344 | 7.7144 | 1.0565 | 6.0526 | 6.0517 |
| 5.01 | 4.3610 | 1365 | 4.3338 | 7.7116 | 1.0388 | 6.0731 | 6.0710 |
| 4.8906 | 4.3770 | 1370 | 4.3326 | 7.7511 | 1.0446 | 6.0964 | 6.0925 |
| 4.9873 | 4.3930 | 1375 | 4.3326 | 7.7392 | 1.0728 | 6.0972 | 6.0944 |
| 4.8922 | 4.4089 | 1380 | 4.3326 | 7.6930 | 1.0453 | 6.0528 | 6.0544 |
| 5.0074 | 4.4249 | 1385 | 4.3323 | 7.6723 | 1.0509 | 6.0698 | 6.0709 |
| 4.9939 | 4.4409 | 1390 | 4.3318 | 7.6901 | 1.0595 | 6.0942 | 6.0909 |
| 5.023 | 4.4569 | 1395 | 4.3318 | 7.7283 | 1.0458 | 6.1093 | 6.1050 |
| 4.8076 | 4.4728 | 1400 | 4.3317 | 7.6976 | 1.0719 | 6.0941 | 6.0859 |
| 5.1418 | 4.4888 | 1405 | 4.3321 | 7.7327 | 1.0791 | 6.0838 | 6.0748 |
| 4.8614 | 4.5048 | 1410 | 4.3321 | 7.5965 | 1.0139 | 6.0089 | 6.0025 |
| 4.8516 | 4.5208 | 1415 | 4.3322 | 7.6741 | 1.0467 | 6.0436 | 6.0375 |
| 5.1611 | 4.5367 | 1420 | 4.3331 | 7.6540 | 1.0297 | 6.0385 | 6.0298 |
| 4.8864 | 4.5527 | 1425 | 4.3332 | 7.6279 | 1.0335 | 6.0205 | 6.0106 |
| 5.0181 | 4.5687 | 1430 | 4.3328 | 7.6939 | 1.0702 | 6.0759 | 6.0756 |
| 5.1197 | 4.5847 | 1435 | 4.3319 | 7.7408 | 1.0965 | 6.1116 | 6.1082 |
| 5.03 | 4.6006 | 1440 | 4.3315 | 7.7497 | 1.0758 | 6.1203 | 6.1149 |
| 4.8272 | 4.6166 | 1445 | 4.3318 | 7.7247 | 1.0643 | 6.1007 | 6.0937 |
| 4.8669 | 4.6326 | 1450 | 4.3319 | 7.7556 | 1.0908 | 6.1252 | 6.1205 |
| 4.9243 | 4.6486 | 1455 | 4.3316 | 7.7401 | 1.0920 | 6.1078 | 6.1052 |
| 4.9354 | 4.6645 | 1460 | 4.3311 | 7.7599 | 1.0710 | 6.1272 | 6.1201 |
| 4.9087 | 4.6805 | 1465 | 4.3312 | 7.7357 | 1.0748 | 6.1100 | 6.1019 |
| 5.0466 | 4.6965 | 1470 | 4.3313 | 7.7241 | 1.0440 | 6.0772 | 6.0712 |
| 4.987 | 4.7125 | 1475 | 4.3311 | 7.6894 | 1.0217 | 6.0663 | 6.0653 |
| 5.0424 | 4.7284 | 1480 | 4.3312 | 7.7028 | 1.0257 | 6.0607 | 6.0603 |
| 4.9104 | 4.7444 | 1485 | 4.3314 | 7.7049 | 1.0382 | 6.0579 | 6.0566 |
| 5.4987 | 4.7604 | 1490 | 4.3316 | 7.7122 | 1.0360 | 6.0474 | 6.0392 |
| 4.9413 | 4.7764 | 1495 | 4.3316 | 7.7498 | 1.0150 | 6.0863 | 6.0798 |
| 4.9124 | 4.7923 | 1500 | 4.3321 | 7.7538 | 1.0291 | 6.0791 | 6.0753 |
| 4.8853 | 4.8083 | 1505 | 4.3328 | 7.7532 | 1.0170 | 6.0938 | 6.0888 |
| 4.9113 | 4.8243 | 1510 | 4.3325 | 7.7311 | 1.0010 | 6.0658 | 6.0632 |
| 5.1313 | 4.8403 | 1515 | 4.3322 | 7.7341 | 0.9924 | 6.0552 | 6.0545 |
| 4.9934 | 4.8562 | 1520 | 4.3313 | 7.7445 | 1.0359 | 6.0700 | 6.0630 |
| 4.9148 | 4.8722 | 1525 | 4.3303 | 7.7310 | 1.0531 | 6.0620 | 6.0601 |
| 4.9555 | 4.8882 | 1530 | 4.3295 | 7.7469 | 1.0705 | 6.0964 | 6.0910 |
| 4.7897 | 4.9042 | 1535 | 4.3290 | 7.7583 | 1.0816 | 6.0907 | 6.0861 |
| 5.2209 | 4.9201 | 1540 | 4.3291 | 7.6943 | 1.0308 | 6.0188 | 6.0216 |
| 5.1329 | 4.9361 | 1545 | 4.3293 | 7.7375 | 1.0413 | 6.0539 | 6.0509 |
| 4.9242 | 4.9521 | 1550 | 4.3295 | 7.7516 | 1.0480 | 6.0517 | 6.0544 |
| 4.8293 | 4.9681 | 1555 | 4.3290 | 7.7303 | 1.0457 | 6.0352 | 6.0299 |
| 4.9005 | 4.9840 | 1560 | 4.3285 | 7.7264 | 1.0352 | 6.0264 | 6.0187 |
| 4.9146 | 5.0 | 1565 | 4.3283 | 7.7195 | 1.0331 | 6.0371 | 6.0316 |
| 5.0743 | 5.0160 | 1570 | 4.3279 | 7.7183 | 1.0703 | 6.0462 | 6.0352 |
| 4.8564 | 5.0319 | 1575 | 4.3276 | 7.7294 | 1.0774 | 6.0553 | 6.0465 |
| 4.927 | 5.0479 | 1580 | 4.3274 | 7.7478 | 1.0915 | 6.0644 | 6.0530 |
| 4.7846 | 5.0639 | 1585 | 4.3270 | 7.7938 | 1.1112 | 6.1027 | 6.0903 |
| 5.0363 | 5.0799 | 1590 | 4.3270 | 7.8152 | 1.1145 | 6.1205 | 6.1103 |
| 4.8218 | 5.0958 | 1595 | 4.3269 | 7.7887 | 1.1122 | 6.1028 | 6.0912 |
| 4.9988 | 5.1118 | 1600 | 4.3264 | 7.7989 | 1.0866 | 6.1238 | 6.1106 |
| 5.0564 | 5.1278 | 1605 | 4.3260 | 7.7916 | 1.0738 | 6.1174 | 6.1035 |
| 4.9796 | 5.1438 | 1610 | 4.3261 | 7.7909 | 1.0695 | 6.1230 | 6.1167 |
| 4.9055 | 5.1597 | 1615 | 4.3263 | 7.7884 | 1.0694 | 6.1194 | 6.1151 |
| 5.0426 | 5.1757 | 1620 | 4.3266 | 7.8154 | 1.0780 | 6.1596 | 6.1528 |
| 4.8726 | 5.1917 | 1625 | 4.3268 | 7.8710 | 1.0926 | 6.2040 | 6.1981 |
| 4.9214 | 5.2077 | 1630 | 4.3268 | 7.8670 | 1.0880 | 6.2140 | 6.2059 |
| 4.7925 | 5.2236 | 1635 | 4.3270 | 7.8478 | 1.0920 | 6.1588 | 6.1562 |
| 4.8974 | 5.2396 | 1640 | 4.3270 | 7.8261 | 1.1046 | 6.1584 | 6.1575 |
| 4.942 | 5.2556 | 1645 | 4.3269 | 7.7903 | 1.0984 | 6.1120 | 6.1089 |
| 4.8275 | 5.2716 | 1650 | 4.3267 | 7.8096 | 1.0926 | 6.1156 | 6.1118 |
| 5.2443 | 5.2875 | 1655 | 4.3266 | 7.8066 | 1.0915 | 6.1225 | 6.1208 |
| 4.9792 | 5.3035 | 1660 | 4.3265 | 7.8421 | 1.0884 | 6.1384 | 6.1326 |
| 5.0446 | 5.3195 | 1665 | 4.3264 | 7.8426 | 1.0842 | 6.1409 | 6.1384 |
| 4.9313 | 5.3355 | 1670 | 4.3265 | 7.8642 | 1.0807 | 6.1659 | 6.1570 |
| 4.7981 | 5.3514 | 1675 | 4.3262 | 7.8241 | 1.0759 | 6.1698 | 6.1630 |
| 4.8056 | 5.3674 | 1680 | 4.3258 | 7.8492 | 1.0877 | 6.1975 | 6.1891 |
| 4.9503 | 5.3834 | 1685 | 4.3257 | 7.8797 | 1.0971 | 6.2128 | 6.2035 |
| 4.9289 | 5.3994 | 1690 | 4.3256 | 7.8831 | 1.1095 | 6.2021 | 6.1947 |
| 4.9398 | 5.4153 | 1695 | 4.3256 | 7.8577 | 1.1025 | 6.1788 | 6.1690 |
| 4.8135 | 5.4313 | 1700 | 4.3255 | 7.8440 | 1.0957 | 6.1659 | 6.1589 |
| 4.9993 | 5.4473 | 1705 | 4.3255 | 7.7802 | 1.0757 | 6.1289 | 6.1246 |
| 4.9389 | 5.4633 | 1710 | 4.3253 | 7.7995 | 1.0852 | 6.1398 | 6.1318 |
| 5.1666 | 5.4792 | 1715 | 4.3248 | 7.7675 | 1.0663 | 6.1302 | 6.1217 |
| 4.9146 | 5.4952 | 1720 | 4.3246 | 7.7542 | 1.0510 | 6.1118 | 6.1048 |
| 4.8464 | 5.5112 | 1725 | 4.3245 | 7.7685 | 1.0458 | 6.1273 | 6.1173 |
| 4.9564 | 5.5272 | 1730 | 4.3246 | 7.7701 | 1.0553 | 6.1174 | 6.1098 |
| 4.9375 | 5.5431 | 1735 | 4.3247 | 7.8109 | 1.0601 | 6.1367 | 6.1313 |
| 4.8133 | 5.5591 | 1740 | 4.3249 | 7.8305 | 1.0728 | 6.1654 | 6.1590 |
| 4.912 | 5.5751 | 1745 | 4.3251 | 7.8129 | 1.0629 | 6.1426 | 6.1365 |
| 4.8319 | 5.5911 | 1750 | 4.3251 | 7.8199 | 1.0698 | 6.1581 | 6.1561 |
| 4.9121 | 5.6070 | 1755 | 4.3251 | 7.8372 | 1.0657 | 6.1785 | 6.1737 |
| 4.9906 | 5.6230 | 1760 | 4.3249 | 7.8071 | 1.0540 | 6.1744 | 6.1683 |
| 4.954 | 5.6390 | 1765 | 4.3248 | 7.7878 | 1.0540 | 6.1538 | 6.1520 |
| 5.4461 | 5.6550 | 1770 | 4.3246 | 7.7997 | 1.0543 | 6.1591 | 6.1566 |
| 5.0082 | 5.6709 | 1775 | 4.3242 | 7.8102 | 1.0602 | 6.1712 | 6.1660 |
| 4.7876 | 5.6869 | 1780 | 4.3240 | 7.7923 | 1.0579 | 6.1640 | 6.1587 |
| 4.9822 | 5.7029 | 1785 | 4.3238 | 7.8041 | 1.0695 | 6.1665 | 6.1598 |
| 4.9743 | 5.7188 | 1790 | 4.3238 | 7.8227 | 1.0672 | 6.1718 | 6.1658 |
| 4.7794 | 5.7348 | 1795 | 4.3238 | 7.8236 | 1.0587 | 6.1678 | 6.1618 |
| 4.887 | 5.7508 | 1800 | 4.3238 | 7.8118 | 1.0493 | 6.1490 | 6.1430 |
| 4.9724 | 5.7668 | 1805 | 4.3237 | 7.8100 | 1.0490 | 6.1387 | 6.1352 |
| 4.9202 | 5.7827 | 1810 | 4.3234 | 7.8060 | 1.0534 | 6.1295 | 6.1289 |
| 4.9347 | 5.7987 | 1815 | 4.3232 | 7.8086 | 1.0454 | 6.1371 | 6.1370 |
| 4.8149 | 5.8147 | 1820 | 4.3230 | 7.8131 | 1.0503 | 6.1413 | 6.1411 |
| 4.9697 | 5.8307 | 1825 | 4.3230 | 7.8132 | 1.0503 | 6.1416 | 6.1414 |
| 4.9892 | 5.8466 | 1830 | 4.3229 | 7.8132 | 1.0503 | 6.1416 | 6.1414 |
| 4.8792 | 5.8626 | 1835 | 4.3229 | 7.8264 | 1.0503 | 6.1416 | 6.1413 |
| 4.929 | 5.8786 | 1840 | 4.3228 | 7.8286 | 1.0578 | 6.1372 | 6.1368 |
| 4.9375 | 5.8946 | 1845 | 4.3228 | 7.8286 | 1.0578 | 6.1383 | 6.1387 |
| 4.8564 | 5.9105 | 1850 | 4.3228 | 7.8423 | 1.0575 | 6.1429 | 6.1443 |
| 5.0988 | 5.9265 | 1855 | 4.3227 | 7.8423 | 1.0575 | 6.1429 | 6.1443 |
| 5.0853 | 5.9425 | 1860 | 4.3227 | 7.8217 | 1.0502 | 6.1385 | 6.1409 |
| 5.1103 | 5.9585 | 1865 | 4.3227 | 7.8216 | 1.0545 | 6.1432 | 6.1446 |
| 4.8901 | 5.9744 | 1870 | 4.3227 | 7.8216 | 1.0545 | 6.1432 | 6.1446 |
| 4.8509 | 5.9904 | 1875 | 4.3226 | 7.8216 | 1.0545 | 6.1432 | 6.1446 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0 |
Ritwik1998/masked-lm-tpu | Ritwik1998 | "2023-12-16T08:55:41Z" | 1 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-12-16T08:49:25Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_keras_callback
model-index:
- name: Ritwik1998/masked-lm-tpu
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Ritwik1998/masked-lm-tpu
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 9.9915
- Train Accuracy: 0.0002
- Validation Loss: 9.9088
- Validation Accuracy: 0.0015
- Epoch: 8
## Model description
More information needed
## Intended uses & limitations
More information needed
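As a rough illustration only (not part of the original card), the TensorFlow checkpoint can be exercised through the standard fill-mask pipeline; given the near-zero accuracy reported above, the predictions are not expected to be meaningful.

```python
from transformers import pipeline

# framework="tf" selects the TensorFlow weights published in this repository
fill_mask = pipeline("fill-mask", model="Ritwik1998/masked-lm-tpu", framework="tf")
print(fill_mask("The capital of France is <mask>."))
```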
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 0.0001, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 22325, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1175, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.001}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 10.2835 | 0.0000 | 10.2748 | 0.0 | 0 |
| 10.2791 | 0.0000 | 10.2665 | 0.0000 | 1 |
| 10.2586 | 0.0 | 10.2434 | 0.0000 | 2 |
| 10.2455 | 0.0000 | 10.2100 | 0.0000 | 3 |
| 10.2063 | 0.0 | 10.1620 | 0.0000 | 4 |
| 10.1658 | 0.0 | 10.1147 | 0.0 | 5 |
| 10.1174 | 0.0 | 10.0558 | 0.0 | 6 |
| 10.0613 | 0.0000 | 9.9800 | 0.0002 | 7 |
| 9.9915 | 0.0002 | 9.9088 | 0.0015 | 8 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.12.0
- Tokenizers 0.15.0
|
zelk12/MT5-Gen5-GP-gemma-2-Av4dMTg2-9B | zelk12 | "2024-12-29T13:58:03Z" | 7 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:lemon07r/Gemma-2-Ataraxy-v4d-9B",
"base_model:merge:lemon07r/Gemma-2-Ataraxy-v4d-9B",
"base_model:zelk12/MT-Gen2-gemma-2-9B",
"base_model:merge:zelk12/MT-Gen2-gemma-2-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-29T13:51:43Z" | ---
base_model:
- lemon07r/Gemma-2-Ataraxy-v4d-9B
- zelk12/MT-Gen2-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [lemon07r/Gemma-2-Ataraxy-v4d-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4d-9B)
* [zelk12/MT-Gen2-gemma-2-9B](https://huggingface.co/zelk12/MT-Gen2-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: lemon07r/Gemma-2-Ataraxy-v4d-9B
- model: zelk12/MT-Gen2-gemma-2-9B
merge_method: slerp
base_model: lemon07r/Gemma-2-Ataraxy-v4d-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
lucashadi/el-estacada | lucashadi | "2024-02-09T19:38:04Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:microsoft/MiniLM-L12-H384-uncased",
"base_model:finetune:microsoft/MiniLM-L12-H384-uncased",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-02-09T17:41:12Z" | ---
license: mit
base_model: microsoft/MiniLM-L12-H384-uncased
tags:
- generated_from_trainer
model-index:
- name: result
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
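A minimal usage sketch (not from the original card); the question and context below are invented for illustration:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="lucashadi/el-estacada")
result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of microsoft/MiniLM-L12-H384-uncased.",
)
print(result["answer"], result["score"])
```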
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
prodm93/llama_7b_corr | prodm93 | "2023-04-20T19:40:15Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-04-20T19:21:36Z" | ---
license: other
duplicated_from: decapoda-research/llama-7b-hf
---
LLaMA-7B converted to work with Transformers/HuggingFace. This is under a special license; please see the LICENSE file for details.
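Since the weights are in the Transformers format, they can in principle be loaded with the explicit `LlamaForCausalLM` and `LlamaTokenizer` classes. The sketch below is illustrative only and not part of the original card; older conversions of these weights are known to have tokenizer metadata quirks, so verify compatibility with your `transformers` version.

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

repo_id = "prodm93/llama_7b_corr"
tokenizer = LlamaTokenizer.from_pretrained(repo_id)
model = LlamaForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```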
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLama Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLama Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that lower value is better indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA_ARM | SicariusSicariiStuff | "2024-10-11T18:18:18Z" | 80 | 3 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-11T17:23:44Z" | ---
license: apache-2.0
---
|
shasha168/ppo-super-LunarLander-v2 | shasha168 | "2023-10-09T21:21:40Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-10-09T21:21:21Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.73 +/- 18.70
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is an assumption; check the repo's "Files and versions" tab for the actual archive name
checkpoint = load_from_hub(repo_id="shasha168/ppo-super-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
IciVictor/Studio_Ghibli_Flux | IciVictor | "2025-03-27T09:33:35Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-03-27T09:33:22Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "UNICODE\0\0g\0h\0i\0b\0l\0i\0s\0t\0y\0l\0e\0,\0 \0L\0i\0n\0k\0 \0f\0r\0o\0m\0 \0l\0e\0g\0e\0n\0d\0 \0o\0f\0 \0z\0e\0l\0d\0a\0 \0s\0i\0t\0t\0i\0n\0g\0 \0b\0y\0 \0c\0a\0m\0p\0f\0i\0r\0e\0 \0"
output:
url: images/EFY4KSEHR9GTWZ910YZS6K5Y90.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ghiblistyle
---
# ghiblistyle
<Gallery />
## Trigger words
You should use `ghiblistyle` to trigger the image generation.
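A hedged loading sketch (not part of the original card): recent `diffusers` releases provide a `FluxPipeline` with LoRA support, so the weights can be tried roughly as follows. The prompt is an example only, and FLUX.1-dev has substantial memory requirements.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("IciVictor/Studio_Ghibli_Flux")  # LoRA weights from this repository
pipe.to("cuda")

image = pipe("ghiblistyle, a quiet village by the sea at sunset", num_inference_steps=28).images[0]
image.save("ghibli_village.png")
```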
## Download model
Weights for this model are available in Safetensors format.
[Download](/IciVictor/Studio_Ghibli_Flux/tree/main) them in the Files & versions tab.
|
Legalaz/llabo_07_13_22_22 | Legalaz | "2025-02-21T03:24:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-21T03:22:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KnutJaegersberg/Deita-32b | KnutJaegersberg | "2024-04-09T07:31:52Z" | 53 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-07T12:54:52Z" | ---
license: other
license_name: qwen
license_link: LICENSE
model-index:
- name: Deita-32b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-32b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.67
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-32b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-32b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 58.11
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-32b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.82
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-32b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.33
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-32b
name: Open LLM Leaderboard
---
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### User:
How do you fine tune a large language model?
### Assistant:
```
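For reference, a hedged generation sketch (not part of the original card) that applies the prompt template above with plain `transformers`; a 32B model loaded this way needs multiple GPUs or CPU offloading.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KnutJaegersberg/Deita-32b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Prompt built from the template shown above
prompt = (
    "### System:\n"
    "You are an AI assistant. User will give you a task. Your goal is to complete the task as "
    "faithfully as you can. While performing the task think step-by-step and justify your steps.\n"
    "### User:\n"
    "How do you fine tune a large language model?\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```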
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Deita-32b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.16|
|AI2 Reasoning Challenge (25-Shot)|63.05|
|HellaSwag (10-Shot) |84.67|
|MMLU (5-Shot) |73.95|
|TruthfulQA (0-shot) |58.11|
|Winogrande (5-shot) |80.82|
|GSM8k (5-shot) |72.33|
|
GleghornLab/asm35_red | GleghornLab | "2024-07-31T19:59:57Z" | 107 | 0 | transformers | [
"transformers",
"safetensors",
"esm",
"fill-mask",
"license:gpl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-06-24T01:56:13Z" | ---
library_name: transformers
license: gpl
---
Please see our paper and Github for more details
https://www.biorxiv.org/content/10.1101/2024.07.30.605924v1
https://github.com/Gleghorn-Lab/AnnotationVocabulary |
Joviex/agm | Joviex | "2023-10-08T20:12:35Z" | 4 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"region:us"
] | text-to-image | "2023-10-08T01:25:57Z" |
---
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of alexander morano, morano
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
lfhe/task-2-deepseek-ai-DeepSeek-R1-Distill-Qwen-7B | lfhe | "2025-01-28T16:04:55Z" | 214 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"region:us"
] | null | "2025-01-28T15:59:07Z" | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
RichardErkhov/Undi95_-_UtopiaXL-13B-gguf | RichardErkhov | "2024-07-26T05:49:00Z" | 11 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2024-07-25T19:11:39Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
UtopiaXL-13B - GGUF
- Model creator: https://huggingface.co/Undi95/
- Original model: https://huggingface.co/Undi95/UtopiaXL-13B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [UtopiaXL-13B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q2_K.gguf) | Q2_K | 4.52GB |
| [UtopiaXL-13B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [UtopiaXL-13B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [UtopiaXL-13B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [UtopiaXL-13B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [UtopiaXL-13B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q3_K.gguf) | Q3_K | 5.9GB |
| [UtopiaXL-13B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [UtopiaXL-13B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [UtopiaXL-13B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [UtopiaXL-13B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q4_0.gguf) | Q4_0 | 6.86GB |
| [UtopiaXL-13B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [UtopiaXL-13B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [UtopiaXL-13B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q4_K.gguf) | Q4_K | 7.33GB |
| [UtopiaXL-13B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [UtopiaXL-13B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q4_1.gguf) | Q4_1 | 7.61GB |
| [UtopiaXL-13B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q5_0.gguf) | Q5_0 | 8.36GB |
| [UtopiaXL-13B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [UtopiaXL-13B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q5_K.gguf) | Q5_K | 8.6GB |
| [UtopiaXL-13B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [UtopiaXL-13B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q5_1.gguf) | Q5_1 | 9.1GB |
| [UtopiaXL-13B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q6_K.gguf) | Q6_K | 9.95GB |
| [UtopiaXL-13B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UtopiaXL-13B-gguf/blob/main/UtopiaXL-13B.Q8_0.gguf) | Q8_0 | 12.88GB |
Original model description:
---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
<!-- description start -->
## Description
This repo contains fp16 files of UtopiaXL-13B, a merge I have done with the new [layer shuffle](https://github.com/cg123/mergekit/blob/main/mergekit/scripts/layershuffle.py) method from mergekit (thank you Charles for adding my request to your project!)
This is more of a proof of concept, showing the following:
- Llama2 is very flexible
- Llama2 doesn't care what each layer was specifically finetuned on, as long as the layers stay in the same order
- A clean merge (no TIES, no SLERP, etc.) using only layer shuffling is possible without breaking anything
- Deleting special tokens, or using a model with special tokens, doesn't break the model
- Alpaca wins, always. So use it.
The name "XL" comes from the absurd number of models pushed into it.
<!-- description end -->
<!-- description start -->
## Models and loras used
- [Undi95/Utopia-13B](https://huggingface.co/Undi95/Utopia-13B)
- [KoboldAI/LLAMA2-13B-Holodeck-1](https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1)
- [Undi95/PsyMedRP-v1-13B](https://huggingface.co/Undi95/PsyMedRP-v1-13B)
- [PygmalionAI/pygmalion-2-13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
- [Heralax/Cat-0.5](https://huggingface.co/Heralax/Cat-0.5)
- [KoboldAI/LLaMA2-13B-TiefighterLR](https://huggingface.co/KoboldAI/LLaMA2-13B-TiefighterLR)
- [Heralax/Augmental-13b-two-epochs](https://huggingface.co/Heralax/Augmental-13b-two-epochs)
- [Undi95/Storytelling-v2.1-13B-lora](https://huggingface.co/Undi95/Storytelling-v2.1-13B-lora)
- [Undi95/LimaRP-UtopiaXL-13B-v3-lora](https://huggingface.co/Undi95/LimaRP-UtopiaXL-13B-v3-lora)
<!-- description end -->
## The sauce
```
!mergekit-layershuffle ./UtopiaXL \
--model Undi95/Utopia-13B --weight 0.4 \
--model KoboldAI/LLAMA2-13B-Holodeck-1 --weight 0.1 \
--model Undi95/PsyMedRP-v1-13B --weight 0.1 \
--model PygmalionAI/pygmalion-2-13b --weight 0.25 \
--model Heralax/Cat-0.5 --weight 0.1 \
--model KoboldAI/LLaMA2-13B-TiefighterLR --weight 0.1 \
--model Heralax/Augmental-13b-two-epochs --weight 0.1 \
--write-yaml UtopiaXL.yaml
=========================
merge_method: passthrough
slices:
- sources:
- layer_range:
- 0
- 1
model: KoboldAI/LLAMA2-13B-Holodeck-1
- sources:
- layer_range:
- 1
- 4
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 4
- 5
model: PygmalionAI/pygmalion-2-13b
- sources:
- layer_range:
- 5
- 6
model: Undi95/PsyMedRP-v1-13B
- sources:
- layer_range:
- 6
- 7
model: KoboldAI/LLaMA2-13B-TiefighterLR
- sources:
- layer_range:
- 7
- 8
model: PygmalionAI/pygmalion-2-13b
- sources:
- layer_range:
- 8
- 9
model: Undi95/PsyMedRP-v1-13B
- sources:
- layer_range:
- 9
- 10
model: PygmalionAI/pygmalion-2-13b
- sources:
- layer_range:
- 10
- 13
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 13
- 14
model: Heralax/Cat-0.5
- sources:
- layer_range:
- 14
- 17
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 17
- 18
model: Heralax/Augmental-13b-two-epochs
- sources:
- layer_range:
- 18
- 19
model: KoboldAI/LLaMA2-13B-TiefighterLR
- sources:
- layer_range:
- 19
- 22
model: PygmalionAI/pygmalion-2-13b
- sources:
- layer_range:
- 22
- 23
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 23
- 25
model: PygmalionAI/pygmalion-2-13b
- sources:
- layer_range:
- 25
- 27
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 27
- 28
model: Heralax/Cat-0.5
- sources:
- layer_range:
- 28
- 30
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 30
- 31
model: Heralax/Augmental-13b-two-epochs
- sources:
- layer_range:
- 31
- 32
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 32
- 33
model: Heralax/Cat-0.5
- sources:
- layer_range:
- 33
- 34
model: Heralax/Augmental-13b-two-epochs
- sources:
- layer_range:
- 34
- 35
model: Undi95/PsyMedRP-v1-13B
- sources:
- layer_range:
- 35
- 36
model: Heralax/Augmental-13b-two-epochs
- sources:
- layer_range:
- 36
- 37
model: Undi95/Utopia-13B
- sources:
- layer_range:
- 37
- 38
model: KoboldAI/LLAMA2-13B-Holodeck-1
- sources:
- layer_range:
- 38
- 39
model: PygmalionAI/pygmalion-2-13b
- sources:
- layer_range:
- 39
- 40
model: KoboldAI/LLAMA2-13B-Holodeck-1
=========================
=> Applying Undi95/Storytelling-v2.1-13B-lora x 0.1
=> Trained on LimaRP for +2h
=> Applying Undi95/LimaRP-UtopiaXL-13B-v3-lora x 0.35
```
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
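As an illustration only, here is a minimal sketch of running one of the quantized files from the table above with the Alpaca format through the llama-cpp-python bindings (an assumption — any GGUF-capable runtime works); the instruction text and sampling settings are made up for the example.
```python
from llama_cpp import Llama  # assumption: llama-cpp-python is installed

# Assumption: UtopiaXL-13B.Q4_K_M.gguf from the table above has been downloaded locally.
llm = Llama(model_path="UtopiaXL-13B.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n"
    "### Instruction:\n"
    "Write a short scene set in a rainy harbor town.\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.8, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```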
A big thanks to [Charles](https://huggingface.co/chargoddard) for adding the layer shuffle method to his tool [mergekit](https://github.com/cg123/mergekit/tree/main) and [Henky/KoboldAI](https://koboldai.org/) for the machine he let me use.
If you want to support me, you can [here](https://ko-fi.com/undiai).
|
RichardErkhov/lemon-mint_-_gemma-7b-openhermes-v0.80-8bits | RichardErkhov | "2024-05-19T08:44:27Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-19T08:38:32Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma-7b-openhermes-v0.80 - bnb 8bits
- Model creator: https://huggingface.co/lemon-mint/
- Original model: https://huggingface.co/lemon-mint/gemma-7b-openhermes-v0.80/
Original model description:
---
library_name: transformers
language:
- en
license: gemma
tags:
- gemma
- pytorch
- instruct
- finetune
base_model: google/gemma-1.1-7b-it
pipeline_tag: text-generation
datasets:
- teknium/OpenHermes-2.5
---
# Gemma 7B OpenHermes v0.80
- Eval Loss: `0.4544`
- Train Loss: `0.3129`
- lr: `5e-5`
- optimizer: adamw
- lr_scheduler_type: cosine
## Model Details
This is an instruction-following model finetuned from the Gemma 1.1 7B model. It was finetuned on the OpenHermes-2.5 dataset to improve its ability to engage in open-ended conversation and respond helpfully to user instructions and queries. The model can engage in dialogue, answer questions, and assist with a variety of tasks.
### Model Description
- **Developed by:** `lemon-mint`
- **Model type:** Gemma
- **Language(s) (NLP):** English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [google/gemma-1.1-7b-it](https://huggingface.co/google/gemma-1.1-7b-it)
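Since no usage code is included, here is a minimal sketch with 🤗 Transformers chat templating, targeting the original full-precision repo named above (loading this 8-bit repo instead would additionally require bitsandbytes); the generation settings are illustrative assumptions.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemon-mint/gemma-7b-openhermes-v0.80"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Gemma instruct checkpoints ship a chat template, so the turn markers are built for us.
messages = [{"role": "user", "content": "Explain the difference between fine-tuning and prompt engineering."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```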
# Limitations and Ethical Considerations
As Gemma 7B OpenHermes has been trained on extensive web data, biases present in the training data may be reflected in the model. It may also generate sentences containing errors or incorrect information. Therefore, rather than trusting the model's output blindly, treat it with caution and verify it.
|
silviasapora/gemma-7b-silvia-basic-5e-5-05-vshp39 | silviasapora | "2025-02-28T11:45:41Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:argilla/dpo-mix-7k",
"arxiv:2403.07691",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-28T09:05:37Z" | ---
base_model: google/gemma-7b
datasets:
- argilla/dpo-mix-7k
library_name: transformers
model_name: google/gemma-7b
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for google/gemma-7b
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-silvia-basic-5e-5-05-vshp39", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/an7gy5ip)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.13.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Best000/70022994-696b-4d7a-b255-575b0be8eec3 | Best000 | "2025-01-31T06:23:47Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"license:llama3",
"region:us"
] | null | "2025-01-31T06:21:59Z" | ---
library_name: peft
license: llama3
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 70022994-696b-4d7a-b255-575b0be8eec3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 594acf1a1ccb4752_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/594acf1a1ccb4752_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/70022994-696b-4d7a-b255-575b0be8eec3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/594acf1a1ccb4752_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aabd8aec-07d3-4064-82eb-acdd95e34794
wandb_project: Birthday-SN56-32-Gradients-On-Demand
wandb_run: your_name
wandb_runid: aabd8aec-07d3-4064-82eb-acdd95e34794
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 70022994-696b-4d7a-b255-575b0be8eec3
This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0023 | 1 | 0.9014 |
| 0.8668 | 0.0304 | 13 | 0.7448 |
| 0.7501 | 0.0607 | 26 | 0.5346 |
| 0.5737 | 0.0911 | 39 | 0.4999 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso09/c89bed54-d7d2-4c9f-acc2-be905e8b34bd | lesso09 | "2025-03-18T12:23:46Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-random-GemmaForCausalLM",
"base_model:adapter:fxmarty/tiny-random-GemmaForCausalLM",
"license:mit",
"region:us"
] | null | "2025-03-18T11:39:29Z" | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-random-GemmaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c89bed54-d7d2-4c9f-acc2-be905e8b34bd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-random-GemmaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- be178d6477c55bbd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/be178d6477c55bbd_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso09/c89bed54-d7d2-4c9f-acc2-be905e8b34bd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000209
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 2500
micro_batch_size: 4
mlflow_experiment_name: /tmp/be178d6477c55bbd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 90
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b637c3e2-7e36-46fa-94c0-1563c0136a02
wandb_project: 09a
wandb_run: your_name
wandb_runid: b637c3e2-7e36-46fa-94c0-1563c0136a02
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c89bed54-d7d2-4c9f-acc2-be905e8b34bd
This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 12.3550
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000209
- train_batch_size: 4
- eval_batch_size: 4
- seed: 90
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 12.4505 |
| 12.3693 | 0.0730 | 500 | 12.3631 |
| 12.3624 | 0.1460 | 1000 | 12.3567 |
| 12.3603 | 0.2189 | 1500 | 12.3556 |
| 12.3595 | 0.2919 | 2000 | 12.3550 |
| 12.3601 | 0.3649 | 2500 | 12.3550 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
emaeon/Complete-shuffle-8-lora | emaeon | "2024-01-11T04:35:31Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-10-23T07:38:34Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
strangerzonehf/Casual-Pencil-Pro | strangerzonehf | "2025-03-13T02:45:13Z" | 0 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2025-03-13T02:15:19Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: 'Casual Pencil, A black and white sketch of a woman sitting at a round table. The woman is wearing a hooded sweatshirt with a hood on her head and a black scarf around her neck. She is sitting on a stool with her legs crossed. There is a cup with a straw in it on the table in front of her. The table has a black base with four legs. The background is plain white.'
output:
url: images/1.png
- text: 'Casual Pencil, A black and white sketch of a man sitting at a round table. The man is wearing a short sleeve shirt and white pants. He is sitting on a stool with his legs crossed. The table is made of wood and has a black metal base. There is a glass on top of the table with a white cup on it. The background is a solid white color.'
output:
url: images/222.png
- text: 'Casual Pencil, A black and white sketch of a man and woman sitting at a table. The man is wearing a baseball cap, a long sleeve shirt and pants. He is holding a bowl in his hands. The woman is holding an object in her hand. She is looking at the man with a smile on her face. The table is in front of them with a few objects on it.'
output:
url: images/333.png
- text: 'Casual Pencil, A black and white drawing of a woman sitting at a round table. The woman is wearing a long skirt with a black belt around her waist. She is sitting on a stool with her left hand on her hip. Her right hand is resting on her head. Her mouth is slightly open and her eyes are closed. Her hair is tied up in a bun. The table has a glass of water on it. The background is a plain white.'
output:
url: images/444.png
- text: 'Casual Pencil – A black and white sketch of a young couple sitting across from each other at a small round café table. The man is wearing a hoodie with the sleeves rolled up, holding a fork in one hand and a sandwich in the other. The woman is wearing a long-sleeved blouse with a high collar and is sipping from a tall glass with a straw. Their expressions are relaxed as they gaze at each other. The background is left blank.'
output:
url: images/555.png
- text: 'Casual Pencil – A black and white sketch of a man sitting at a dimly lit bar table. He is wearing a leather jacket over a plain t-shirt, with his hands wrapped around a glass tumbler. His expression is contemplative as he looks down at the drink. A pack of cigarettes and a lighter sit on the table. The background is shadowy with a hint of a bar counter.'
output:
url: images/666.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Casual Pencil
license: creativeml-openrail-m
---

# Casual Pencil
<Gallery />
# Model description for Casual-Pencil-Pro
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 25 & 3400 |
| Epoch | 25 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 33
## Best Dimensions & Inference
| **Dimensions** | **Aspect Ratio** | **Recommendation** |
|-----------------|------------------|---------------------------|
| 1280 x 832 | 3:2 | Best |
| 1024 x 1024 | 1:1 | Default |
### Inference Range
- **Recommended Inference Steps:** 30–35
## Setting Up
```python
import torch
from diffusers import DiffusionPipeline
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
lora_repo = "strangerzonehf/Casual-Pencil-Pro"
trigger_word = "Casual Pencil"
pipe.load_lora_weights(lora_repo)
device = torch.device("cuda")
pipe.to(device)
```
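With the pipeline from the block above loaded, a generation call might look like the following sketch; the prompt text and seed are illustrative assumptions, while the dimensions and step count follow the recommendation tables above.
```python
prompt = (
    f"{trigger_word}, A black and white sketch of a man reading a newspaper at a small "
    "round table. The background is plain white."
)

image = pipe(
    prompt,
    width=1280,               # "Best" dimensions from the table above
    height=832,
    num_inference_steps=32,   # within the recommended 30-35 range
    generator=torch.Generator(device="cuda").manual_seed(0),
).images[0]
image.save("casual_pencil_sketch.png")
```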
## Trigger words
You should use `Casual Pencil` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/strangerzonehf/Casual-Pencil-Pro/tree/main) them in the Files & versions tab.
|
surya-narayanan/physics | surya-narayanan | "2024-06-24T02:06:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-11T04:02:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Dayicnizgeldi/guncelgiris2025 | Dayicnizgeldi | "2025-02-20T17:04:16Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-20T17:04:09Z" | <a href="http://shortlinkapp.com/lKrPt">🔗 Casibom ⭐️ 2025 Güncel Giriş Adresi ⭐️ Özel Bonus Fırsatları</a>
<a href="http://shortlinkapp.com/lKrPt" title="Casibom Güncel Giriş Adresi">
<img src="https://i.ibb.co/gbBPst9F/photo-2025-02-11-01-55-32.jpg" alt="Casibom Güncel Giriş Adresi" style="max-width: 100%; border: 2px solid #ddd; border-radius: 10px;">
</a>
💰 Casibom ile 2025’te büyük kazanç fırsatlarını yakalayın!
🔥 Yüksek oranlar, devasa bonuslar ve kesintisiz oyun keyfi sizi bekliyor.
⚡️ Hemen giriş yapın, avantajları kaçırmayın!
<meta name="description" content="Casibom 2025 güncel giriş adresi ile en yüksek oranlar, casino oyunları ve bonus fırsatlarını yakalayın! Hızlı ve güvenilir erişim için tıklayın.">
|
Triangle104/Qwen2.5-3B-Instruct-Q5_K_M-GGUF | Triangle104 | "2024-09-19T16:38:53Z" | 10 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-09-19T16:38:42Z" | ---
base_model: Qwen/Qwen2.5-3B-Instruct
language:
- en
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-3B-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-3b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-3b-instruct-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-3b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-3b-instruct-q5_k_m.gguf -c 2048
```
|
blood34/2f103096-ed8a-4586-a772-2d477743727d | blood34 | "2025-02-04T22:40:01Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-04T22:29:26Z" | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2f103096-ed8a-4586-a772-2d477743727d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0fed324f73b2b18a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0fed324f73b2b18a_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: blood34/2f103096-ed8a-4586-a772-2d477743727d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/0fed324f73b2b18a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7a3157e1-e82d-4429-a3b1-82125dfe1990
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7a3157e1-e82d-4429-a3b1-82125dfe1990
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2f103096-ed8a-4586-a772-2d477743727d
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9008 | 0.0661 | 200 | 1.6237 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
glif-loradex-trainer/an303042_RisographPrint_v1 | glif-loradex-trainer | "2024-11-17T14:14:37Z" | 100 | 1 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | "2024-11-17T14:14:08Z" | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1731852711402__000003000_0.jpg
text: wounded centaur, mythical creature rsgrf , risograph print
- output:
url: samples/1731852736200__000003000_1.jpg
text: ruins of athens, snake rsgrf , risograph print
- output:
url: samples/1731852761091__000003000_2.jpg
text: silver vampire sword rsgrf , risograph print
- output:
url: samples/1731852785896__000003000_3.jpg
text: rsgrf , risograph print, poster of a dog and a fire hydrant
- output:
url: samples/1731852810709__000003000_4.jpg
text: rsgrf , risograph print, poster with the word 'glif' in colorful pastels
- output:
url: samples/1731852835625__000003000_5.jpg
text: rsgrf , risograph print, woman standing by a phone booth on a busy street
corner
base_model: black-forest-labs/FLUX.1-dev
trigger: rsgrf , risograph print
instance_prompt: rsgrf , risograph print
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# RisographPrint_v1
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `an303042`.
<Gallery />
## Trigger words
You should use `rsgrf , risograph print` to trigger the image generation.
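This card doesn't include inference code, so below is a minimal sketch along the lines of other FLUX LoRA cards; the base-pipeline settings and prompt are assumptions, and `weight_name` may need to be passed explicitly depending on how the LoRA file is named in the repo.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("glif-loradex-trainer/an303042_RisographPrint_v1")

# The trigger phrase from above goes directly into the prompt.
prompt = "rsgrf , risograph print, poster of a lighthouse at dusk in two-tone pastels"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("risograph_poster.png")
```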
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/an303042_RisographPrint_v1/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
Jonjew/AlysonHannigan | Jonjew | "2025-03-06T03:17:46Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | "2025-03-06T03:17:09Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: 4LYS0N
output:
url: images/ComfyUI_00016_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: 4LYS0N
license: unknown
---
# Alyson Hannigan - Actress
<Gallery />
## Model description
FROM https://civitai.com/models/1180460/alyson-hannigan-actress?modelVersionId=1328416
Trigger 4LYS0N
Strength 1.2
Alyson Hannigan Denisof (@alysonhannigan)
Alyson Hannigan, born on March 24, 1974, in Washington, D.C., is an American actress and television presenter best known for her iconic roles in Buffy the Vampire Slayer (as Willow Rosenberg) and How I Met Your Mother (as Lily Aldrin). She also gained fame in the American Pie film series as Michelle Flaherty. Alyson began her acting career in commercials as a child and transitioned to television and film roles in her teens, with her first major film appearance in My Stepmother Is an Alien (1988).
Standing 5'4" (164 cm) tall and weighing approximately 132 lbs (60 kg), Alyson has hazel eyes, brown hair, and an approachable charm that enhances her performances. She is of Irish and Jewish descent. Alyson graduated from North Hollywood High School and later attended California State University, Northridge.
In addition to acting, she has worked as a television host, presenting the show Penn & Teller: Fool Us. She married her Buffy the Vampire Slayer co-star Alexis Denisof in 2003, and the couple has two daughters, Satyana and Keeva. Alyson remains a beloved figure in the entertainment industry for her comedic timing and endearing portrayals.
Recommended strength is 1.0 to 1.2 (1.2 tends to give better results); a usage sketch follows the trigger-word section below.
## Trigger words
You should use `4LYS0N` to trigger the image generation.
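No usage code ships with this card, so the sketch below shows one way to apply the recommended strength of 1.2 by weighting the adapter in diffusers; the repo's file naming, the adapter name, and the generation settings are assumptions.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Jonjew/AlysonHannigan", adapter_name="alyson")

# Weight the adapter at the recommended strength of 1.2.
pipe.set_adapters(["alyson"], adapter_weights=[1.2])

image = pipe("4LYS0N, portrait photo, soft window light", num_inference_steps=28).images[0]
image.save("portrait.png")
```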
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/AlysonHannigan/tree/main) them in the Files & versions tab.
|
John6666/titania-mix-realistic-pony-gbv90-sdxl | John6666 | "2024-08-01T04:38:56Z" | 153 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"cosplay",
"boobs",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-08-01T04:31:05Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- cosplay
- boobs
- pony
---
Original model is [here](https://civitai.com/models/349587/titaniamix-realistic-pony-sd15?modelVersionId=686689).
|
baxromovcomston/results | baxromovcomston | "2024-12-21T11:48:32Z" | 48 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-generation",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-21T11:48:13Z" | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.20.3
|
hungphongtrn/en_vi_envit5-translation_half_doc_news_train | hungphongtrn | "2024-04-05T04:23:53Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/envit5-translation",
"base_model:finetune:VietAI/envit5-translation",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-05T00:59:14Z" | ---
license: openrail
base_model: VietAI/envit5-translation
tags:
- generated_from_trainer
model-index:
- name: en_vi_envit5-translation_half_doc_news_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en_vi_envit5-translation_half_doc_news_train
This model is a fine-tuned version of [VietAI/envit5-translation](https://huggingface.co/VietAI/envit5-translation) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.37.2
- Pytorch 1.12.1+cu116
- Datasets 2.18.0
- Tokenizers 0.15.1
|
LoneStriker/SG-Raccoon-Yi-55B-2.65bpw-h6-exl2 | LoneStriker | "2023-12-01T00:02:03Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-30T23:50:20Z" | ---
language:
- en
pipeline_tag: conversational
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/644ba0c76ebb3ebf7264dbe9/PWn9I-0XH7kSP_YXcyxIg.png" width="400"/>
</p>
---
# SG Raccoon Yi 55B
The first 55B auto-regressive causal LM, created by combining two finetuned [Yi 34B](https://huggingface.co/01-ai/Yi-34B) models into one.
# Prompting Format
```
single-turn: <|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>
multi-turn: <|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>Hi!<|endoftext|>Human: How are you?\n\nAssistant: <|endoftext|>target2<|endoftext|>
```
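A small sketch of assembling these turn markers programmatically is shown below; the helper name and example turns are illustrative, not part of the original card.
```python
def build_prompt(turns):
    """Assemble the single-/multi-turn format shown above.

    `turns` is a list of (human_text, assistant_text_or_None) pairs; a None assistant
    entry leaves the last turn open for the model to complete.
    """
    prompt = "<|startoftext|>"
    for human, assistant in turns:
        prompt += f"Human: {human}\n\nAssistant: <|endoftext|>"
        if assistant is not None:
            prompt += f"{assistant}<|endoftext|>"
    return prompt

# Multi-turn example matching the template above:
print(build_prompt([("Hello!", "Hi!"), ("How are you?", None)]))
```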
# Merge process
The models used in the merge are [dolphin-2_2-yi-34b](https://huggingface.co/ehartford/dolphin-2_2-yi-34b) and [OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama).
The layer ranges used are as follows:
```yaml
- range 0, 16
OrionStar-Yi-34B-Chat
- range 8, 24
dolphin-2_2-yi-34b
- range 17, 32
OrionStar-Yi-34B-Chat
- range 25, 40
dolphin-2_2-yi-34b
- range 33, 48
OrionStar-Yi-34B-Chat
- range 41, 56
dolphin-2_2-yi-34b
- range 49, 64
OrionStar-Yi-34B-Chat
- range 57, 72
dolphin-2_2-yi-34b
- range 65, 80
OrionStar-Yi-34B-Chat
```
# Tips
Being a Yi model, try disabling the BOS token and/or running a lower temperature with MinP (and no other samplers) if output doesn't seem right. Yi tends to run "hot" by default.
Sometimes the model "spells out" the stop token as </s> like Capybara, so you may need to add </s> as an additional stopping condition.
# Benchmarks
Coming soon.
# Acknowledgements
- Special thanks to [MSS](https://milanosamplesale.com/) for sponsoring this project
- [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit).
- Great thanks to [@Undi95](https://huggingface.co/Undi95) for helping figuring out model merge options
- Also credits to the [01-ai](https://huggingface.co/01-ai) team for their amazing models
- This merged model is inspired by [Goliath 120B](https://huggingface.co/alpindale/goliath-120b)
|
giodeleo/dummy-model | giodeleo | "2024-02-24T10:33:16Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-02-24T10:32:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
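In the meantime, a minimal, unofficial sketch based on the repository's CamemBERT fill-mask tags would be (the example sentence is arbitrary):
```python
from transformers import pipeline
# The repo is tagged as a CamemBERT fill-mask model, so the generic pipeline applies.
fill_mask = pipeline("fill-mask", model="giodeleo/dummy-model")
print(fill_mask("Le camembert est <mask> !"))
```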
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Hanzalwi/bloom-1b-finetuned-aings-validation-data-try | Hanzalwi | "2023-11-25T10:26:57Z" | 18 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"bloom",
"arxiv:1910.09700",
"base_model:bigscience/bloom-1b1",
"base_model:adapter:bigscience/bloom-1b1",
"region:us"
] | null | "2023-11-24T18:07:45Z" | ---
library_name: peft
base_model: bigscience/bloom-1b1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the code sketch after the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
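Expressed in code, this corresponds roughly to the following quantized base-model load plus adapter attach; it is a sketch rather than the exact training or inference script.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
# 8-bit load matching the config above (load_in_8bit=True, llm_int8_threshold=6.0).
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)
base = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b1",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b1")
# Attach the fine-tuned PEFT adapter from this repository.
model = PeftModel.from_pretrained(base, "Hanzalwi/bloom-1b-finetuned-aings-validation-data-try")
```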
### Framework versions
- PEFT 0.6.3.dev0
|
Hadihandrian22/vonny | Hadihandrian22 | "2023-07-27T14:33:26Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-07-27T14:33:26Z" | ---
license: creativeml-openrail-m
---
|
unfoldingpast/V-Express_dup | unfoldingpast | "2025-02-16T12:58:40Z" | 0 | 0 | diffusers | [
"diffusers",
"onnx",
"text-to-image",
"stable-diffusion",
"audio-to-video",
"en",
"arxiv:2406.02511",
"license:apache-2.0",
"region:us"
] | text-to-image | "2025-02-16T12:29:47Z" | ---
tags:
- text-to-image
- stable-diffusion
- audio-to-video
license: apache-2.0
language:
- en
library_name: diffusers
---
# V-Express Model Card
<div align="center">
[**Project Page**](https://tenvence.github.io/p/v-express/) **|** [**Paper**](https://arxiv.org/abs/2406.02511) **|** [**Code**](https://github.com/tencent-ailab/V-Express)
</div>
---
## Introduction
## Models
### Audio Encoder
- [model_ckpts/wav2vec2-base-960h](https://huggingface.co/tk93/V-Express/tree/main/model_ckpts/wav2vec2-base-960h). (It is also available from the original model card [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h))
### Face Analysis
- [model_ckpts/insightface_models/models/buffalo_l](https://huggingface.co/tk93/V-Express/tree/main/model_ckpts/insightface_models/models/buffalo_l). (It is also available from the original repository [insightface/buffalo_l](https://github.com/deepinsight/insightface/releases/download/v0.7/buffalo_l.zip))
### V-Express
- [model_ckpts/sd-vae-ft-mse](https://huggingface.co/tk93/V-Express/tree/main/model_ckpts/sd-vae-ft-mse). VAE encoder. (original model card [stabilityai/sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse))
- [model_ckpts/stable-diffusion-v1-5](https://huggingface.co/tk93/V-Express/tree/main/model_ckpts/stable-diffusion-v1-5). Only the model configuration file for unet is needed here. (original model card [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5))
- [model_ckpts/v-express](https://huggingface.co/tk93/V-Express/tree/main/model_ckpts/v-express). The video generation model conditional on audio and V-kps we call V-Express.
- You should download all the `.bin` models and put them in the `model_ckpts/v-express` directory; this includes `audio_projection.bin`, `denoising_unet.bin`, `motion_module.bin`, `reference_net.bin`, and `v_kps_guider.bin` (see the download sketch below).
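A typical way to fetch these checkpoints into the expected layout is via `huggingface_hub`; the sketch below assumes this repository mirrors the original `tk93/V-Express` file structure.
```python
from huggingface_hub import snapshot_download
# Download every checkpoint folder referenced above into ./model_ckpts
# (wav2vec2, insightface buffalo_l, sd-vae-ft-mse, stable-diffusion-v1-5 config, v-express .bin files).
snapshot_download(
    repo_id="tk93/V-Express",
    local_dir=".",
    allow_patterns=["model_ckpts/*"],
)
```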
|
enoriega/rule_learning_1mm_many_negatives_spanpred_margin_avg | enoriega | "2022-07-27T14:45:37Z" | 19 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:enoriega/odinsynth_dataset",
"endpoints_compatible",
"region:us"
] | null | "2022-07-26T04:40:02Z" | ---
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_1mm_many_negatives_spanpred_margin_avg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_1mm_many_negatives_spanpred_margin_avg
This model is a fine-tuned version of [enoriega/rule_softmatching](https://huggingface.co/enoriega/rule_softmatching) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2421
- Margin Accuracy: 0.8897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.3867 | 0.16 | 20 | 0.4023 | 0.8187 |
| 0.3506 | 0.32 | 40 | 0.3381 | 0.8523 |
| 0.3195 | 0.48 | 60 | 0.3096 | 0.8613 |
| 0.3052 | 0.64 | 80 | 0.2957 | 0.8640 |
| 0.2859 | 0.8 | 100 | 0.2922 | 0.8679 |
| 0.297 | 0.96 | 120 | 0.2871 | 0.8688 |
| 0.2717 | 1.12 | 140 | 0.2761 | 0.8732 |
| 0.2671 | 1.28 | 160 | 0.2751 | 0.8743 |
| 0.2677 | 1.44 | 180 | 0.2678 | 0.8757 |
| 0.2693 | 1.6 | 200 | 0.2627 | 0.8771 |
| 0.2675 | 1.76 | 220 | 0.2573 | 0.8813 |
| 0.2732 | 1.92 | 240 | 0.2546 | 0.8858 |
| 0.246 | 2.08 | 260 | 0.2478 | 0.8869 |
| 0.2355 | 2.24 | 280 | 0.2463 | 0.8871 |
| 0.2528 | 2.4 | 300 | 0.2449 | 0.8886 |
| 0.2512 | 2.56 | 320 | 0.2443 | 0.8892 |
| 0.2527 | 2.72 | 340 | 0.2441 | 0.8893 |
| 0.2346 | 2.88 | 360 | 0.2424 | 0.8895 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
vinay4141/my-pet-dog | vinay4141 | "2024-02-13T18:02:50Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-02-13T17:58:27Z" | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by vinay4141 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 21KT1A0541
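A minimal way to try the concept with `diffusers` is sketched below; the instance prompt is an assumption, since the exact token used during DreamBooth training is not documented here.
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "vinay4141/my-pet-dog", torch_dtype=torch.float16
).to("cuda")
# The instance token below is assumed; adjust it to whatever was used at training time.
image = pipe("a photo of my-pet-dog sitting on a beach", num_inference_steps=30).images[0]
image.save("my_pet_dog.png")
```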
Sample pictures of this concept:
|
cocoirun/Yi-Ko-6B-instruct-v1.5 | cocoirun | "2024-01-11T01:47:43Z" | 63 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-10T01:59:04Z" | ---
license: cc-by-sa-4.0
---
<h1>Instruct model v1.5</h1>
<b><Training data construction></b>
After analyzing the Open-Orca-ko data to extract its tasks, we built roughly 40,000 training examples of our own (history, science, math, machine reading comprehension, review analysis) from open-source NLP data matched to those tasks;
in addition, we filtered and refined part of the Open-Orca-Ko data and added KoBEST data alongside it.
Additional training data was built from AIHub common-knowledge and machine-reading-comprehension data (morphology-related, MRC-related, and summarization tasks).
History and general-knowledge quizzes from various blogs were converted into training-data format by hand.
AI2AI Challenge data was translated with Papago, and mistranslated passages were corrected manually.
English-Korean / Korean-English translation data was also used as training data.
SFT was carried out on a total of 110,000 training examples.
<br>
We are currently translating and refining part of the Open-Orca dataset to train a new version of the model and improve its performance.
<br>
+ Added high-school history questions and TruthfulQA-related questions.
+ Added various IT knowledge data.
+ Built machine-reading-comprehension training data by obtaining answers through ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are not publicly released.
<br>
<b><Training></b>
Training was carried out with LoRA on 2x A100 40G GPUs. |
yamatazen/Aurora-SCE-12B-Q4_K_M-GGUF | yamatazen | "2025-03-01T08:26:27Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:yamatazen/Aurora-SCE-12B",
"base_model:quantized:yamatazen/Aurora-SCE-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-01T08:25:54Z" | ---
base_model: yamatazen/Aurora-SCE-12B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# yamatazen/Aurora-SCE-12B-Q4_K_M-GGUF
This model was converted to GGUF format from [`yamatazen/Aurora-SCE-12B`](https://huggingface.co/yamatazen/Aurora-SCE-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/yamatazen/Aurora-SCE-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo yamatazen/Aurora-SCE-12B-Q4_K_M-GGUF --hf-file aurora-sce-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo yamatazen/Aurora-SCE-12B-Q4_K_M-GGUF --hf-file aurora-sce-12b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo yamatazen/Aurora-SCE-12B-Q4_K_M-GGUF --hf-file aurora-sce-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo yamatazen/Aurora-SCE-12B-Q4_K_M-GGUF --hf-file aurora-sce-12b-q4_k_m.gguf -c 2048
```
|
dyamagishi/output | dyamagishi | "2024-06-20T09:01:10Z" | 3 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:cagliostrolab/animagine-xl-3.1",
"base_model:adapter:cagliostrolab/animagine-xl-3.1",
"license:openrail++",
"region:us"
] | text-to-image | "2024-06-14T04:51:58Z" | ---
base_model: cagliostrolab/animagine-xl-3.1
library_name: diffusers
license: openrail++
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-dyamagishi/output
These are ControlNet weights trained on cagliostrolab/animagine-xl-3.1 with a new type of conditioning.
You can find some example images below.
prompt: outdoors, scenery, cloud, multiple_girls, sky, day, tree, grass, architecture, 2girls, blue_sky, building, standing, skirt, long_hair, mountain, east_asian_architecture, from_behind, castle, facing_away, black_skirt, school_uniform, pagoda, waterfall, white_shirt, white_hair, shirt, cloudy_sky, bag

## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
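In the absence of an official snippet, loading these weights with `diffusers` would look roughly like the sketch below; the conditioning image format for the new type of conditioning is not documented, so the control image path is a placeholder.
```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image
controlnet = ControlNetModel.from_pretrained("dyamagishi/output", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
# Placeholder conditioning image; replace with the conditioning type this ControlNet expects.
control_image = load_image("./conditioning.png")
prompt = "outdoors, scenery, 2girls, blue_sky, east_asian_architecture"
image = pipe(prompt, image=control_image, num_inference_steps=28).images[0]
image.save("output.png")
```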
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k3_task5_organization | MayBashendy | "2025-01-19T18:06:45Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-19T09:15:17Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k3_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k3_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6282
- Qwk: 0.6350
- Mse: 0.6282
- Rmse: 0.7926
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.1176 | 2 | 3.9697 | 0.0013 | 3.9697 | 1.9924 |
| No log | 0.2353 | 4 | 2.2646 | 0.0215 | 2.2646 | 1.5049 |
| No log | 0.3529 | 6 | 2.3084 | -0.0234 | 2.3084 | 1.5193 |
| No log | 0.4706 | 8 | 1.7808 | 0.0498 | 1.7808 | 1.3345 |
| No log | 0.5882 | 10 | 1.1342 | 0.1218 | 1.1342 | 1.0650 |
| No log | 0.7059 | 12 | 1.1609 | 0.0886 | 1.1609 | 1.0774 |
| No log | 0.8235 | 14 | 1.2450 | 0.0242 | 1.2450 | 1.1158 |
| No log | 0.9412 | 16 | 1.2449 | 0.0065 | 1.2449 | 1.1157 |
| No log | 1.0588 | 18 | 1.1698 | 0.0941 | 1.1698 | 1.0816 |
| No log | 1.1765 | 20 | 1.0975 | 0.2049 | 1.0975 | 1.0476 |
| No log | 1.2941 | 22 | 1.1091 | 0.2004 | 1.1091 | 1.0531 |
| No log | 1.4118 | 24 | 1.1335 | 0.1618 | 1.1335 | 1.0646 |
| No log | 1.5294 | 26 | 1.0693 | 0.2343 | 1.0693 | 1.0341 |
| No log | 1.6471 | 28 | 1.0327 | 0.2023 | 1.0327 | 1.0162 |
| No log | 1.7647 | 30 | 0.9358 | 0.2935 | 0.9358 | 0.9674 |
| No log | 1.8824 | 32 | 0.9041 | 0.3162 | 0.9041 | 0.9509 |
| No log | 2.0 | 34 | 0.9153 | 0.3156 | 0.9153 | 0.9567 |
| No log | 2.1176 | 36 | 1.0419 | 0.2593 | 1.0419 | 1.0207 |
| No log | 2.2353 | 38 | 1.2359 | 0.0820 | 1.2359 | 1.1117 |
| No log | 2.3529 | 40 | 1.4581 | 0.0627 | 1.4581 | 1.2075 |
| No log | 2.4706 | 42 | 1.6898 | 0.1109 | 1.6898 | 1.2999 |
| No log | 2.5882 | 44 | 1.6017 | 0.1110 | 1.6017 | 1.2656 |
| No log | 2.7059 | 46 | 1.2895 | 0.1138 | 1.2895 | 1.1356 |
| No log | 2.8235 | 48 | 0.9936 | 0.3026 | 0.9936 | 0.9968 |
| No log | 2.9412 | 50 | 0.8775 | 0.3562 | 0.8775 | 0.9368 |
| No log | 3.0588 | 52 | 0.9324 | 0.3001 | 0.9324 | 0.9656 |
| No log | 3.1765 | 54 | 1.0120 | 0.2249 | 1.0120 | 1.0060 |
| No log | 3.2941 | 56 | 1.0148 | 0.3385 | 1.0148 | 1.0074 |
| No log | 3.4118 | 58 | 0.9727 | 0.3063 | 0.9727 | 0.9863 |
| No log | 3.5294 | 60 | 0.9158 | 0.2897 | 0.9158 | 0.9570 |
| No log | 3.6471 | 62 | 0.9292 | 0.3229 | 0.9292 | 0.9639 |
| No log | 3.7647 | 64 | 0.9284 | 0.3229 | 0.9284 | 0.9636 |
| No log | 3.8824 | 66 | 0.9328 | 0.3499 | 0.9328 | 0.9658 |
| No log | 4.0 | 68 | 1.0212 | 0.2635 | 1.0212 | 1.0105 |
| No log | 4.1176 | 70 | 1.0720 | 0.2969 | 1.0720 | 1.0354 |
| No log | 4.2353 | 72 | 1.0179 | 0.2827 | 1.0179 | 1.0089 |
| No log | 4.3529 | 74 | 0.9296 | 0.2981 | 0.9296 | 0.9642 |
| No log | 4.4706 | 76 | 0.8652 | 0.4148 | 0.8652 | 0.9302 |
| No log | 4.5882 | 78 | 0.8682 | 0.3815 | 0.8682 | 0.9318 |
| No log | 4.7059 | 80 | 0.8796 | 0.3304 | 0.8796 | 0.9379 |
| No log | 4.8235 | 82 | 0.8739 | 0.2770 | 0.8739 | 0.9348 |
| No log | 4.9412 | 84 | 0.8458 | 0.3837 | 0.8458 | 0.9197 |
| No log | 5.0588 | 86 | 0.8459 | 0.4 | 0.8459 | 0.9197 |
| No log | 5.1765 | 88 | 0.9192 | 0.4024 | 0.9192 | 0.9587 |
| No log | 5.2941 | 90 | 0.9950 | 0.3608 | 0.9950 | 0.9975 |
| No log | 5.4118 | 92 | 1.0091 | 0.3885 | 1.0091 | 1.0045 |
| No log | 5.5294 | 94 | 1.0118 | 0.3627 | 1.0118 | 1.0059 |
| No log | 5.6471 | 96 | 0.9435 | 0.3830 | 0.9435 | 0.9714 |
| No log | 5.7647 | 98 | 0.8433 | 0.4857 | 0.8433 | 0.9183 |
| No log | 5.8824 | 100 | 0.8067 | 0.4138 | 0.8067 | 0.8982 |
| No log | 6.0 | 102 | 0.8797 | 0.3358 | 0.8797 | 0.9379 |
| No log | 6.1176 | 104 | 0.9180 | 0.3740 | 0.9180 | 0.9581 |
| No log | 6.2353 | 106 | 0.9370 | 0.3836 | 0.9370 | 0.9680 |
| No log | 6.3529 | 108 | 0.9614 | 0.4249 | 0.9614 | 0.9805 |
| No log | 6.4706 | 110 | 0.9517 | 0.4557 | 0.9517 | 0.9756 |
| No log | 6.5882 | 112 | 0.9785 | 0.4162 | 0.9785 | 0.9892 |
| No log | 6.7059 | 114 | 0.9699 | 0.4573 | 0.9699 | 0.9848 |
| No log | 6.8235 | 116 | 0.8179 | 0.4893 | 0.8179 | 0.9044 |
| No log | 6.9412 | 118 | 0.7453 | 0.5575 | 0.7453 | 0.8633 |
| No log | 7.0588 | 120 | 0.7743 | 0.4269 | 0.7743 | 0.8800 |
| No log | 7.1765 | 122 | 0.7256 | 0.4223 | 0.7256 | 0.8518 |
| No log | 7.2941 | 124 | 0.6922 | 0.4893 | 0.6922 | 0.8320 |
| No log | 7.4118 | 126 | 0.6867 | 0.5010 | 0.6867 | 0.8287 |
| No log | 7.5294 | 128 | 0.6869 | 0.5260 | 0.6869 | 0.8288 |
| No log | 7.6471 | 130 | 0.6811 | 0.5516 | 0.6811 | 0.8253 |
| No log | 7.7647 | 132 | 0.6779 | 0.5373 | 0.6779 | 0.8233 |
| No log | 7.8824 | 134 | 0.6841 | 0.5467 | 0.6841 | 0.8271 |
| No log | 8.0 | 136 | 0.6963 | 0.5894 | 0.6963 | 0.8345 |
| No log | 8.1176 | 138 | 0.6850 | 0.5434 | 0.6850 | 0.8277 |
| No log | 8.2353 | 140 | 0.6683 | 0.5450 | 0.6683 | 0.8175 |
| No log | 8.3529 | 142 | 0.6609 | 0.6417 | 0.6609 | 0.8130 |
| No log | 8.4706 | 144 | 0.6626 | 0.6409 | 0.6626 | 0.8140 |
| No log | 8.5882 | 146 | 0.6498 | 0.6409 | 0.6498 | 0.8061 |
| No log | 8.7059 | 148 | 0.7214 | 0.4924 | 0.7214 | 0.8493 |
| No log | 8.8235 | 150 | 0.9014 | 0.4654 | 0.9014 | 0.9494 |
| No log | 8.9412 | 152 | 0.8355 | 0.4340 | 0.8355 | 0.9141 |
| No log | 9.0588 | 154 | 0.8259 | 0.4340 | 0.8259 | 0.9088 |
| No log | 9.1765 | 156 | 0.7165 | 0.5046 | 0.7165 | 0.8465 |
| No log | 9.2941 | 158 | 0.6353 | 0.6407 | 0.6353 | 0.7970 |
| No log | 9.4118 | 160 | 0.6342 | 0.6262 | 0.6342 | 0.7963 |
| No log | 9.5294 | 162 | 0.6608 | 0.5785 | 0.6608 | 0.8129 |
| No log | 9.6471 | 164 | 0.6847 | 0.5342 | 0.6847 | 0.8275 |
| No log | 9.7647 | 166 | 0.6554 | 0.6113 | 0.6554 | 0.8095 |
| No log | 9.8824 | 168 | 0.6222 | 0.6460 | 0.6222 | 0.7888 |
| No log | 10.0 | 170 | 0.6474 | 0.6945 | 0.6474 | 0.8046 |
| No log | 10.1176 | 172 | 0.6556 | 0.6832 | 0.6556 | 0.8097 |
| No log | 10.2353 | 174 | 0.6394 | 0.7009 | 0.6394 | 0.7996 |
| No log | 10.3529 | 176 | 0.6608 | 0.6187 | 0.6608 | 0.8129 |
| No log | 10.4706 | 178 | 0.6802 | 0.5740 | 0.6802 | 0.8247 |
| No log | 10.5882 | 180 | 0.6627 | 0.5657 | 0.6627 | 0.8141 |
| No log | 10.7059 | 182 | 0.6645 | 0.6824 | 0.6645 | 0.8151 |
| No log | 10.8235 | 184 | 0.6776 | 0.5588 | 0.6776 | 0.8231 |
| No log | 10.9412 | 186 | 0.6713 | 0.5588 | 0.6713 | 0.8193 |
| No log | 11.0588 | 188 | 0.6527 | 0.6556 | 0.6527 | 0.8079 |
| No log | 11.1765 | 190 | 0.6518 | 0.5614 | 0.6518 | 0.8074 |
| No log | 11.2941 | 192 | 0.6396 | 0.5939 | 0.6396 | 0.7998 |
| No log | 11.4118 | 194 | 0.6254 | 0.6584 | 0.6254 | 0.7909 |
| No log | 11.5294 | 196 | 0.6101 | 0.6796 | 0.6101 | 0.7811 |
| No log | 11.6471 | 198 | 0.6080 | 0.6911 | 0.6080 | 0.7797 |
| No log | 11.7647 | 200 | 0.6058 | 0.6796 | 0.6058 | 0.7784 |
| No log | 11.8824 | 202 | 0.6061 | 0.6911 | 0.6061 | 0.7785 |
| No log | 12.0 | 204 | 0.6040 | 0.7017 | 0.6040 | 0.7771 |
| No log | 12.1176 | 206 | 0.6072 | 0.6903 | 0.6072 | 0.7792 |
| No log | 12.2353 | 208 | 0.6071 | 0.6805 | 0.6071 | 0.7792 |
| No log | 12.3529 | 210 | 0.6259 | 0.6750 | 0.6259 | 0.7912 |
| No log | 12.4706 | 212 | 0.6027 | 0.7083 | 0.6027 | 0.7763 |
| No log | 12.5882 | 214 | 0.6031 | 0.6025 | 0.6031 | 0.7766 |
| No log | 12.7059 | 216 | 0.6028 | 0.6616 | 0.6028 | 0.7764 |
| No log | 12.8235 | 218 | 0.6098 | 0.6897 | 0.6098 | 0.7809 |
| No log | 12.9412 | 220 | 0.6169 | 0.7238 | 0.6169 | 0.7855 |
| No log | 13.0588 | 222 | 0.6377 | 0.6252 | 0.6377 | 0.7985 |
| No log | 13.1765 | 224 | 0.6527 | 0.5894 | 0.6527 | 0.8079 |
| No log | 13.2941 | 226 | 0.7011 | 0.5368 | 0.7011 | 0.8373 |
| No log | 13.4118 | 228 | 0.6783 | 0.5810 | 0.6783 | 0.8236 |
| No log | 13.5294 | 230 | 0.6783 | 0.6315 | 0.6783 | 0.8236 |
| No log | 13.6471 | 232 | 0.6539 | 0.6655 | 0.6539 | 0.8086 |
| No log | 13.7647 | 234 | 0.6100 | 0.7116 | 0.6100 | 0.7810 |
| No log | 13.8824 | 236 | 0.5989 | 0.6736 | 0.5989 | 0.7739 |
| No log | 14.0 | 238 | 0.5757 | 0.7179 | 0.5757 | 0.7587 |
| No log | 14.1176 | 240 | 0.5598 | 0.7089 | 0.5598 | 0.7482 |
| No log | 14.2353 | 242 | 0.5709 | 0.6905 | 0.5709 | 0.7556 |
| No log | 14.3529 | 244 | 0.5899 | 0.6720 | 0.5899 | 0.7680 |
| No log | 14.4706 | 246 | 0.5797 | 0.6598 | 0.5797 | 0.7614 |
| No log | 14.5882 | 248 | 0.6415 | 0.5410 | 0.6415 | 0.8009 |
| No log | 14.7059 | 250 | 0.7028 | 0.5160 | 0.7028 | 0.8383 |
| No log | 14.8235 | 252 | 0.6430 | 0.5317 | 0.6430 | 0.8019 |
| No log | 14.9412 | 254 | 0.5896 | 0.6353 | 0.5896 | 0.7679 |
| No log | 15.0588 | 256 | 0.6091 | 0.6287 | 0.6091 | 0.7805 |
| No log | 15.1765 | 258 | 0.5966 | 0.6676 | 0.5966 | 0.7724 |
| No log | 15.2941 | 260 | 0.5642 | 0.7245 | 0.5642 | 0.7511 |
| No log | 15.4118 | 262 | 0.5764 | 0.7066 | 0.5764 | 0.7592 |
| No log | 15.5294 | 264 | 0.5809 | 0.7291 | 0.5809 | 0.7621 |
| No log | 15.6471 | 266 | 0.5908 | 0.6667 | 0.5908 | 0.7687 |
| No log | 15.7647 | 268 | 0.6081 | 0.6676 | 0.6081 | 0.7798 |
| No log | 15.8824 | 270 | 0.6420 | 0.6720 | 0.6420 | 0.8012 |
| No log | 16.0 | 272 | 0.6523 | 0.7059 | 0.6523 | 0.8077 |
| No log | 16.1176 | 274 | 0.6596 | 0.6243 | 0.6596 | 0.8122 |
| No log | 16.2353 | 276 | 0.6560 | 0.6815 | 0.6560 | 0.8099 |
| No log | 16.3529 | 278 | 0.6431 | 0.6815 | 0.6431 | 0.8019 |
| No log | 16.4706 | 280 | 0.6244 | 0.6441 | 0.6244 | 0.7902 |
| No log | 16.5882 | 282 | 0.6084 | 0.6733 | 0.6084 | 0.7800 |
| No log | 16.7059 | 284 | 0.6042 | 0.6556 | 0.6042 | 0.7773 |
| No log | 16.8235 | 286 | 0.6066 | 0.6676 | 0.6066 | 0.7788 |
| No log | 16.9412 | 288 | 0.5989 | 0.7016 | 0.5989 | 0.7739 |
| No log | 17.0588 | 290 | 0.5974 | 0.7409 | 0.5974 | 0.7729 |
| No log | 17.1765 | 292 | 0.6050 | 0.6667 | 0.6050 | 0.7778 |
| No log | 17.2941 | 294 | 0.6021 | 0.7389 | 0.6021 | 0.7759 |
| No log | 17.4118 | 296 | 0.6012 | 0.6728 | 0.6012 | 0.7754 |
| No log | 17.5294 | 298 | 0.5829 | 0.7291 | 0.5829 | 0.7634 |
| No log | 17.6471 | 300 | 0.5989 | 0.6676 | 0.5989 | 0.7739 |
| No log | 17.7647 | 302 | 0.6133 | 0.6287 | 0.6133 | 0.7831 |
| No log | 17.8824 | 304 | 0.5963 | 0.6676 | 0.5963 | 0.7722 |
| No log | 18.0 | 306 | 0.5748 | 0.7298 | 0.5748 | 0.7582 |
| No log | 18.1176 | 308 | 0.5777 | 0.6970 | 0.5777 | 0.7601 |
| No log | 18.2353 | 310 | 0.5759 | 0.6866 | 0.5759 | 0.7589 |
| No log | 18.3529 | 312 | 0.5787 | 0.7083 | 0.5787 | 0.7607 |
| No log | 18.4706 | 314 | 0.5901 | 0.6337 | 0.5901 | 0.7682 |
| No log | 18.5882 | 316 | 0.5804 | 0.6858 | 0.5804 | 0.7618 |
| No log | 18.7059 | 318 | 0.5724 | 0.7459 | 0.5724 | 0.7565 |
| No log | 18.8235 | 320 | 0.5917 | 0.7216 | 0.5917 | 0.7692 |
| No log | 18.9412 | 322 | 0.5941 | 0.7101 | 0.5941 | 0.7708 |
| No log | 19.0588 | 324 | 0.5847 | 0.7446 | 0.5847 | 0.7646 |
| No log | 19.1765 | 326 | 0.5893 | 0.6888 | 0.5893 | 0.7676 |
| No log | 19.2941 | 328 | 0.6110 | 0.6881 | 0.6110 | 0.7817 |
| No log | 19.4118 | 330 | 0.6176 | 0.6881 | 0.6176 | 0.7859 |
| No log | 19.5294 | 332 | 0.5915 | 0.7292 | 0.5915 | 0.7691 |
| No log | 19.6471 | 334 | 0.5925 | 0.7452 | 0.5925 | 0.7697 |
| No log | 19.7647 | 336 | 0.6126 | 0.7349 | 0.6126 | 0.7827 |
| No log | 19.8824 | 338 | 0.6725 | 0.5601 | 0.6725 | 0.8201 |
| No log | 20.0 | 340 | 0.7007 | 0.5601 | 0.7007 | 0.8371 |
| No log | 20.1176 | 342 | 0.6530 | 0.5588 | 0.6530 | 0.8081 |
| No log | 20.2353 | 344 | 0.6196 | 0.7066 | 0.6196 | 0.7871 |
| No log | 20.3529 | 346 | 0.6213 | 0.6053 | 0.6213 | 0.7882 |
| No log | 20.4706 | 348 | 0.6029 | 0.6954 | 0.6029 | 0.7764 |
| No log | 20.5882 | 350 | 0.5777 | 0.7508 | 0.5777 | 0.7601 |
| No log | 20.7059 | 352 | 0.5924 | 0.7579 | 0.5924 | 0.7697 |
| No log | 20.8235 | 354 | 0.6216 | 0.5684 | 0.6216 | 0.7884 |
| No log | 20.9412 | 356 | 0.6135 | 0.6312 | 0.6135 | 0.7832 |
| No log | 21.0588 | 358 | 0.5625 | 0.7520 | 0.5625 | 0.7500 |
| No log | 21.1765 | 360 | 0.5487 | 0.7508 | 0.5487 | 0.7408 |
| No log | 21.2941 | 362 | 0.5553 | 0.7508 | 0.5553 | 0.7452 |
| No log | 21.4118 | 364 | 0.5671 | 0.7514 | 0.5671 | 0.7531 |
| No log | 21.5294 | 366 | 0.5681 | 0.7573 | 0.5681 | 0.7537 |
| No log | 21.6471 | 368 | 0.5772 | 0.7573 | 0.5772 | 0.7597 |
| No log | 21.7647 | 370 | 0.5887 | 0.7179 | 0.5887 | 0.7673 |
| No log | 21.8824 | 372 | 0.6108 | 0.6946 | 0.6108 | 0.7815 |
| No log | 22.0 | 374 | 0.6114 | 0.6762 | 0.6114 | 0.7819 |
| No log | 22.1176 | 376 | 0.5952 | 0.7001 | 0.5952 | 0.7715 |
| No log | 22.2353 | 378 | 0.5858 | 0.7001 | 0.5858 | 0.7653 |
| No log | 22.3529 | 380 | 0.5867 | 0.7514 | 0.5867 | 0.7660 |
| No log | 22.4706 | 382 | 0.5938 | 0.7514 | 0.5938 | 0.7706 |
| No log | 22.5882 | 384 | 0.5915 | 0.7619 | 0.5915 | 0.7691 |
| No log | 22.7059 | 386 | 0.5950 | 0.7179 | 0.5950 | 0.7714 |
| No log | 22.8235 | 388 | 0.5996 | 0.7508 | 0.5996 | 0.7743 |
| No log | 22.9412 | 390 | 0.6091 | 0.6886 | 0.6091 | 0.7805 |
| No log | 23.0588 | 392 | 0.6075 | 0.7238 | 0.6075 | 0.7794 |
| No log | 23.1765 | 394 | 0.6065 | 0.7292 | 0.6065 | 0.7788 |
| No log | 23.2941 | 396 | 0.5977 | 0.7183 | 0.5977 | 0.7731 |
| No log | 23.4118 | 398 | 0.5840 | 0.7403 | 0.5840 | 0.7642 |
| No log | 23.5294 | 400 | 0.5747 | 0.7291 | 0.5747 | 0.7581 |
| No log | 23.6471 | 402 | 0.5776 | 0.7291 | 0.5776 | 0.7600 |
| No log | 23.7647 | 404 | 0.5756 | 0.6962 | 0.5756 | 0.7587 |
| No log | 23.8824 | 406 | 0.5589 | 0.6962 | 0.5589 | 0.7476 |
| No log | 24.0 | 408 | 0.5466 | 0.7066 | 0.5466 | 0.7393 |
| No log | 24.1176 | 410 | 0.5559 | 0.7396 | 0.5559 | 0.7456 |
| No log | 24.2353 | 412 | 0.5845 | 0.6742 | 0.5845 | 0.7645 |
| No log | 24.3529 | 414 | 0.6048 | 0.6409 | 0.6048 | 0.7777 |
| No log | 24.4706 | 416 | 0.5816 | 0.7025 | 0.5816 | 0.7626 |
| No log | 24.5882 | 418 | 0.5783 | 0.7074 | 0.5783 | 0.7604 |
| No log | 24.7059 | 420 | 0.5839 | 0.7017 | 0.5839 | 0.7641 |
| No log | 24.8235 | 422 | 0.6147 | 0.6292 | 0.6147 | 0.7840 |
| No log | 24.9412 | 424 | 0.6316 | 0.6409 | 0.6316 | 0.7947 |
| No log | 25.0588 | 426 | 0.6277 | 0.6958 | 0.6277 | 0.7923 |
| No log | 25.1765 | 428 | 0.6476 | 0.5844 | 0.6476 | 0.8047 |
| No log | 25.2941 | 430 | 0.6874 | 0.5054 | 0.6874 | 0.8291 |
| No log | 25.4118 | 432 | 0.7072 | 0.5516 | 0.7072 | 0.8410 |
| No log | 25.5294 | 434 | 0.7103 | 0.5472 | 0.7103 | 0.8428 |
| No log | 25.6471 | 436 | 0.7145 | 0.5746 | 0.7145 | 0.8453 |
| No log | 25.7647 | 438 | 0.7021 | 0.5847 | 0.7021 | 0.8379 |
| No log | 25.8824 | 440 | 0.6862 | 0.5688 | 0.6862 | 0.8284 |
| No log | 26.0 | 442 | 0.6873 | 0.5654 | 0.6873 | 0.8290 |
| No log | 26.1176 | 444 | 0.7017 | 0.5033 | 0.7017 | 0.8377 |
| No log | 26.2353 | 446 | 0.6976 | 0.5033 | 0.6976 | 0.8352 |
| No log | 26.3529 | 448 | 0.6720 | 0.5728 | 0.6720 | 0.8198 |
| No log | 26.4706 | 450 | 0.6465 | 0.5858 | 0.6465 | 0.8040 |
| No log | 26.5882 | 452 | 0.6337 | 0.5774 | 0.6337 | 0.7960 |
| No log | 26.7059 | 454 | 0.6093 | 0.6588 | 0.6093 | 0.7806 |
| No log | 26.8235 | 456 | 0.5986 | 0.6954 | 0.5986 | 0.7737 |
| No log | 26.9412 | 458 | 0.5990 | 0.7051 | 0.5990 | 0.7740 |
| No log | 27.0588 | 460 | 0.5940 | 0.6719 | 0.5940 | 0.7707 |
| No log | 27.1765 | 462 | 0.5621 | 0.7059 | 0.5621 | 0.7497 |
| No log | 27.2941 | 464 | 0.5496 | 0.7403 | 0.5496 | 0.7414 |
| No log | 27.4118 | 466 | 0.5791 | 0.6139 | 0.5791 | 0.7610 |
| No log | 27.5294 | 468 | 0.6119 | 0.6172 | 0.6119 | 0.7822 |
| No log | 27.6471 | 470 | 0.6103 | 0.5962 | 0.6103 | 0.7812 |
| No log | 27.7647 | 472 | 0.5748 | 0.6447 | 0.5748 | 0.7581 |
| No log | 27.8824 | 474 | 0.5403 | 0.7291 | 0.5403 | 0.7351 |
| No log | 28.0 | 476 | 0.5470 | 0.7284 | 0.5470 | 0.7396 |
| No log | 28.1176 | 478 | 0.5614 | 0.6623 | 0.5614 | 0.7493 |
| No log | 28.2353 | 480 | 0.5602 | 0.7291 | 0.5602 | 0.7484 |
| No log | 28.3529 | 482 | 0.5685 | 0.7514 | 0.5685 | 0.7540 |
| No log | 28.4706 | 484 | 0.5759 | 0.7403 | 0.5759 | 0.7589 |
| No log | 28.5882 | 486 | 0.5710 | 0.7291 | 0.5710 | 0.7557 |
| No log | 28.7059 | 488 | 0.5695 | 0.6519 | 0.5695 | 0.7547 |
| No log | 28.8235 | 490 | 0.5691 | 0.6623 | 0.5691 | 0.7544 |
| No log | 28.9412 | 492 | 0.5800 | 0.6724 | 0.5800 | 0.7615 |
| No log | 29.0588 | 494 | 0.5885 | 0.7059 | 0.5885 | 0.7671 |
| No log | 29.1765 | 496 | 0.5781 | 0.6623 | 0.5781 | 0.7603 |
| No log | 29.2941 | 498 | 0.5782 | 0.6728 | 0.5782 | 0.7604 |
| 0.3465 | 29.4118 | 500 | 0.5875 | 0.6623 | 0.5875 | 0.7665 |
| 0.3465 | 29.5294 | 502 | 0.5793 | 0.6954 | 0.5793 | 0.7611 |
| 0.3465 | 29.6471 | 504 | 0.5752 | 0.7284 | 0.5752 | 0.7584 |
| 0.3465 | 29.7647 | 506 | 0.5748 | 0.6623 | 0.5748 | 0.7582 |
| 0.3465 | 29.8824 | 508 | 0.5617 | 0.6623 | 0.5617 | 0.7495 |
| 0.3465 | 30.0 | 510 | 0.5382 | 0.7179 | 0.5382 | 0.7336 |
| 0.3465 | 30.1176 | 512 | 0.5240 | 0.7403 | 0.5240 | 0.7239 |
| 0.3465 | 30.2353 | 514 | 0.5237 | 0.7403 | 0.5237 | 0.7237 |
| 0.3465 | 30.3529 | 516 | 0.5294 | 0.7514 | 0.5294 | 0.7276 |
| 0.3465 | 30.4706 | 518 | 0.5356 | 0.7625 | 0.5356 | 0.7318 |
| 0.3465 | 30.5882 | 520 | 0.5352 | 0.7514 | 0.5352 | 0.7316 |
| 0.3465 | 30.7059 | 522 | 0.5361 | 0.7396 | 0.5361 | 0.7322 |
| 0.3465 | 30.8235 | 524 | 0.5452 | 0.6833 | 0.5452 | 0.7384 |
| 0.3465 | 30.9412 | 526 | 0.5470 | 0.6950 | 0.5470 | 0.7396 |
| 0.3465 | 31.0588 | 528 | 0.5549 | 0.7067 | 0.5549 | 0.7449 |
| 0.3465 | 31.1765 | 530 | 0.5689 | 0.6485 | 0.5689 | 0.7543 |
| 0.3465 | 31.2941 | 532 | 0.5566 | 0.6897 | 0.5566 | 0.7460 |
| 0.3465 | 31.4118 | 534 | 0.5447 | 0.6950 | 0.5447 | 0.7380 |
| 0.3465 | 31.5294 | 536 | 0.5469 | 0.7183 | 0.5469 | 0.7395 |
| 0.3465 | 31.6471 | 538 | 0.5613 | 0.7126 | 0.5613 | 0.7492 |
| 0.3465 | 31.7647 | 540 | 0.5637 | 0.7126 | 0.5637 | 0.7508 |
| 0.3465 | 31.8824 | 542 | 0.5615 | 0.7396 | 0.5615 | 0.7493 |
| 0.3465 | 32.0 | 544 | 0.5884 | 0.6946 | 0.5884 | 0.7671 |
| 0.3465 | 32.1176 | 546 | 0.6062 | 0.5948 | 0.6062 | 0.7786 |
| 0.3465 | 32.2353 | 548 | 0.6102 | 0.6092 | 0.6102 | 0.7812 |
| 0.3465 | 32.3529 | 550 | 0.6185 | 0.6407 | 0.6185 | 0.7864 |
| 0.3465 | 32.4706 | 552 | 0.6236 | 0.6407 | 0.6236 | 0.7897 |
| 0.3465 | 32.5882 | 554 | 0.6282 | 0.6350 | 0.6282 | 0.7926 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
samzirbo/gendered_balanced | samzirbo | "2024-05-07T19:34:20Z" | 23 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:samzirbo/mT5.en-es.pretrained",
"base_model:finetune:samzirbo/mT5.en-es.pretrained",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-05-07T15:49:19Z" | ---
base_model: samzirbo/mT5.en-es.pretrained
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: gendered_balanced
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gendered_balanced
This model is a fine-tuned version of [samzirbo/mT5.en-es.pretrained](https://huggingface.co/samzirbo/mT5.en-es.pretrained) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1770
- Bleu: 43.8724
- Meteor: 0.6916
- Chrf++: 62.661
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- training_steps: 50000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Chrf++ |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|
| 4.3102 | 0.44 | 2500 | 2.0045 | 27.8405 | 0.5556 | 49.2062 |
| 2.408 | 0.88 | 5000 | 1.7104 | 33.3638 | 0.6039 | 54.2966 |
| 2.1298 | 1.31 | 7500 | 1.5654 | 36.5253 | 0.6312 | 56.719 |
| 1.9966 | 1.75 | 10000 | 1.4823 | 37.7206 | 0.6409 | 57.8933 |
| 1.8882 | 2.19 | 12500 | 1.4168 | 39.2333 | 0.6518 | 59.0052 |
| 1.8055 | 2.63 | 15000 | 1.3736 | 39.9916 | 0.6585 | 59.5057 |
| 1.756 | 3.06 | 17500 | 1.3454 | 40.4973 | 0.6622 | 59.9928 |
| 1.6829 | 3.5 | 20000 | 1.3109 | 41.6502 | 0.6732 | 60.8574 |
| 1.6575 | 3.94 | 22500 | 1.2729 | 41.7069 | 0.6729 | 60.9659 |
| 1.5944 | 4.38 | 25000 | 1.2549 | 42.2762 | 0.6783 | 61.5535 |
| 1.5758 | 4.81 | 27500 | 1.2349 | 42.6516 | 0.6803 | 61.6749 |
| 1.5407 | 5.25 | 30000 | 1.2231 | 42.6637 | 0.6817 | 61.7779 |
| 1.5146 | 5.69 | 32500 | 1.2082 | 42.8507 | 0.6856 | 61.9916 |
| 1.4953 | 6.13 | 35000 | 1.1988 | 43.387 | 0.6873 | 62.2332 |
| 1.4687 | 6.56 | 37500 | 1.1874 | 43.4174 | 0.6876 | 62.3151 |
| 1.4664 | 7.0 | 40000 | 1.1825 | 43.7501 | 0.6913 | 62.5472 |
| 1.4415 | 7.44 | 42500 | 1.1803 | 43.7744 | 0.6909 | 62.6208 |
| 1.4408 | 7.88 | 45000 | 1.1771 | 43.8914 | 0.692 | 62.6599 |
| 1.4346 | 8.31 | 47500 | 1.1772 | 43.9709 | 0.6919 | 62.7096 |
| 1.4311 | 8.75 | 50000 | 1.1770 | 43.8724 | 0.6916 | 62.661 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
smallsuper/xlm-roberta-base-finetuned-panx-de | smallsuper | "2025-01-26T04:52:04Z" | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-03-20T19:52:12Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
base_model: xlm-roberta-base
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- type: f1
value: 0.8633929588697733
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1366
- F1: 0.8634
## Model description
More information needed
## Intended uses & limitations
More information needed
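Pending a fuller description, the checkpoint can be exercised as a standard token-classification model; the sketch below is unofficial and the example sentence is arbitrary.
```python
from transformers import pipeline
ner = pipeline(
    "token-classification",
    model="smallsuper/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```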
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2587 | 1.0 | 525 | 0.1542 | 0.8213 |
| 0.1302 | 2.0 | 1050 | 0.1373 | 0.8546 |
| 0.0827 | 3.0 | 1575 | 0.1366 | 0.8634 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.10.3
|
danliu1226/PLM-interact-650M-humanV11 | danliu1226 | "2024-11-26T23:39:36Z" | 30 | 0 | null | [
"pytorch",
"safetensors",
"protein-protein interactions",
"paired proteins encoding",
"protein language model",
"license:mit",
"region:us"
] | null | "2024-11-06T13:02:51Z" | ---
license: mit
tags:
- protein-protein interactions
- paired proteins encoding
- protein language model
---
# PLM-interact model
PLM-interact: extending protein language models to predict protein-protein interactions
The preprint is available at [PLM-interact](https://www.biorxiv.org/content/10.1101/2024.11.05.622169v1) and the code see [github link](https://github.com/liudan111/PLM-interact)
This model is trained on human PPIs from STRING V12. For the PPI preprocessing details, see Methods
in the preprint.
## Model description
PLM-interact goes beyond a single protein, jointly encoding protein pairs to learn their relationships,
analogous to the next-sentence prediction task from natural language processing. This approach provides
a significant improvement in performance: Trained on human-human PPIs, PLM-interact predicts mouse, fly,
worm, E. coli and yeast PPIs, with 16-28% improvements in AUPR compared with state-of-the-art PPI models.
Additionally, it can detect changes that disrupt or cause PPIs and be applied to virus-host PPI prediction.

### An example to predict interaction probability between proteins
```python
import torch
import torch.nn as nn
from transformers import AutoModel,AutoModelForMaskedLM,AutoTokenizer
import os
import torch.nn.functional as F
class PLMinteract(nn.Module):
    def __init__(self, model_name, num_labels, embedding_size):
        super(PLMinteract, self).__init__()
        self.esm_mask = AutoModelForMaskedLM.from_pretrained(model_name)
        self.embedding_size = embedding_size
        self.classifier = nn.Linear(embedding_size, 1)  # embedding_size
        self.num_labels = num_labels
    def forward_test(self, features):
        embedding_output = self.esm_mask.base_model(**features, return_dict=True)
        embedding = embedding_output.last_hidden_state[:, 0, :]  # cls token
        embedding = F.relu(embedding)
        logits = self.classifier(embedding)
        logits = logits.view(-1)
        probability = torch.sigmoid(logits)
        return probability
# folder_huggingface_download: the local folder containing the model downloaded from Hugging Face (e.g. "danliu1226/PLM-interact-650M-humanV11")
# model_name: the ESM2 model that PLM-interact was trained from
# embedding_size: the embedding size of the ESM2 model
folder_huggingface_download='download_huggingface_folder/'
model_name= 'facebook/esm2_t33_650M_UR50D'
embedding_size =1280
protein1 ="EGCVSNLMVCNLAYSGKLEELKESILADKSLATRTDQDSRTALHWACSAGHTEIVEFLLQLGVPVNDKDDAGWSPLHIAASAGRDEIVKALLGKGAQVNAVNQNGCTPLHYAASKNRHEIAVMLLEGGANPDAKDHYEATAMHRAAAKGNLKMIHILLYYKASTNIQDTEGNTPLHLACDEERVEEAKLLVSQGASIYIENKEEKTPLQVAKGGLGLILKRMVEG"
protein2= "MGQSQSGGHGPGGGKKDDKDKKKKYEPPVPTRVGKKKKKTKGPDAASKLPLVTPHTQCRLKLLKLERIKDYLLMEEEFIRNQEQMKPLEEKQEEERSKVDDLRGTPMSVGTLEEIIDDNHAIVSTSVGSEHYVSILSFVDKDLLEPGCSVLLNHKVHAVIGVLMDDTDPLVTVMKVEKAPQETYADIGGLDNQIQEIKESVELPLTHPEYYEEMGIKPPKGVILYGPPGTGKTLLAKAVANQTSATFLRVVGSELIQKYLGDGPKLVRELFRVAEEHAPSIVFIDEIDAIGTKRYDSNSGGEREIQRTMLELLNQLDGFDSRGDVKVIMATNRIETLDPALIRPGRIDRKIEFPLPDEKTKKRIFQIHTSRMTLADDVTLDDLIMAKDDLSGADIKAICTEAGLMALRERRMKVTNEDFKKSKENVLYKKQEGTPEGLYL"
DEVICE = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained(model_name)
PLMinter= PLMinteract(model_name, 1, embedding_size)
load_model = torch.load(f"{folder_huggingface_download}pytorch_model.bin")
PLMinter.load_state_dict(load_model)
texts=[protein1, protein2]
tokenized = tokenizer(*texts, padding=True, truncation='longest_first', return_tensors="pt", max_length=1603)
tokenized = tokenized.to(DEVICE)
PLMinter.eval()
PLMinter.to(DEVICE)
with torch.no_grad():
    probability = PLMinter.forward_test(tokenized)
print(probability.item())
```
## Training dataset
This model checkpoint is trained on the benchmarking human PPIs from https://d-script.readthedocs.io/en/stable/data.html
|
AIDC-AI/Ovis1.5-Gemma2-9B | AIDC-AI | "2025-02-26T10:49:41Z" | 91 | 19 | transformers | [
"transformers",
"safetensors",
"ovis",
"text-generation",
"MLLM",
"image-text-to-text",
"conversational",
"custom_code",
"dataset:AIDC-AI/Ovis-dataset",
"arxiv:2405.20797",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | image-text-to-text | "2024-07-30T02:04:12Z" | ---
license: apache-2.0
datasets:
- AIDC-AI/Ovis-dataset
library_name: transformers
tags:
- MLLM
pipeline_tag: image-text-to-text
---
<span style="color: #ED7D31; font-size: 24px;">It is recommended to use the latest version: [Ovis2](https://huggingface.co/collections/AIDC-AI/ovis2-67ab36c7e497429034874464).</span>
## Introduction
Ovis is a novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. For a comprehensive introduction, please refer to [Ovis paper](https://arxiv.org/abs/2405.20797) and [Ovis GitHub](https://github.com/AIDC-AI/Ovis).
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/658a8a837959448ef5500ce5/TIlymOb86R6_Mez3bpmcB.png" width="100%" />
</div>
## Model
As always, Ovis1.5 remains fully open-source: we release the training datasets, training & inference code, and model weights for **reproducible transparency** and community collaboration.
| Ovis MLLMs | ViT | LLM | Training Datasets | Code | Model Weights |
|:-------------------------|:-----------:|:------------------:|:-------------------------------------------------------------------:|:-------------------------------------------:|:----------------------------------------------------------------:|
| Ovis1.5-Llama3-8B | Siglip-400M | Llama3-8B-Instruct | [Huggingface](https://huggingface.co/datasets/AIDC-AI/Ovis-dataset) | [Github](https://github.com/AIDC-AI/Ovis) | [Huggingface](https://huggingface.co/AIDC-AI/Ovis1.5-Llama3-8B) |
| Ovis1.5-Gemma2-9B | Siglip-400M | Gemma2-9B-It | [Huggingface](https://huggingface.co/datasets/AIDC-AI/Ovis-dataset) | [Github](https://github.com/AIDC-AI/Ovis) | [Huggingface](https://huggingface.co/AIDC-AI/Ovis1.5-Gemma2-9B) |
## Performance
We evaluate Ovis1.5 across various multimodal benchmarks using [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) and compare its performance to leading MLLMs with similar parameter scales.
| | MiniCPM-Llama3-V2.5 | GLM-4V-9B | Ovis1.5-Llama3-8B | Ovis1.5-Gemma2-9B |
|:------------------|--------------------:|----------:|------------------:|------------------:|
| Open Weights | ✅ | ✅ | ✅ | ✅ |
| Open Datasets | ❌ | ❌ | ✅ | ✅ |
| MMTBench-VAL | 57.6 | 48.8 | 60.7 | **62.7** |
| MMBench-EN-V1.1 | 74 | 68.7 | **78.2** | 78.0 |
| MMBench-CN-V1.1 | 70.1 | 67.1 | **75.2** | 75.1 |
| MMStar | 51.8 | 54.8 | 57.2 | **58.7** |
| MMMU-Val | 45.8 | 46.9 | 48.6 | **49.8** |
| MathVista-Mini | 54.3 | 51.1 | 62.4 | **65.7** |
| HallusionBenchAvg | 42.4 | 45 | 44.5 | **48.0** |
| AI2D | 78.4 | 71.2 | 82.5 | **84.7** |
| OCRBench | 725 | **776** | 743 | 756 |
| MMVet | 52.8 | **58** | 52.2 | 56.5 |
| RealWorldQA | 63.5 | 66 | 64.6 | **66.9** |
| CharXiv Reasoning | 24.9 | - | 28.2 | **28.4** |
| CharXiv Descriptive | 59.3 | - | 60.2 | **62.6** |
## Usage
Below is a code snippet to run Ovis with multimodal inputs. For additional usage instructions, including inference wrapper and Gradio UI, please refer to [Ovis GitHub](https://github.com/AIDC-AI/Ovis?tab=readme-ov-file#inference).
```bash
pip install torch==2.1.2 transformers==4.43.2 pillow==10.3.0
```
```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM
# load model
model = AutoModelForCausalLM.from_pretrained("AIDC-AI/Ovis1.5-Gemma2-9B",
torch_dtype=torch.bfloat16,
multimodal_max_length=8192,
trust_remote_code=True).cuda()
text_tokenizer = model.get_text_tokenizer()
visual_tokenizer = model.get_visual_tokenizer()
conversation_formatter = model.get_conversation_formatter()
# enter image path and prompt
image_path = input("Enter image path: ")
image = Image.open(image_path)
text = input("Enter prompt: ")
query = f'<image>\n{text}'
prompt, input_ids = conversation_formatter.format_query(query)
input_ids = torch.unsqueeze(input_ids, dim=0).to(device=model.device)
attention_mask = torch.ne(input_ids, text_tokenizer.pad_token_id).to(device=model.device)
pixel_values = [visual_tokenizer.preprocess_image(image).to(
dtype=visual_tokenizer.dtype, device=visual_tokenizer.device)]
# generate output
with torch.inference_mode():
gen_kwargs = dict(
max_new_tokens=1024,
do_sample=False,
top_p=None,
top_k=None,
temperature=None,
repetition_penalty=None,
eos_token_id=model.generation_config.eos_token_id,
pad_token_id=text_tokenizer.pad_token_id,
use_cache=True
)
output_ids = model.generate(input_ids, pixel_values=pixel_values, attention_mask=attention_mask, **gen_kwargs)[0]
output = text_tokenizer.decode(output_ids, skip_special_tokens=True)
print(f'Output: {output}')
```
## Citation
If you find Ovis useful, please cite the paper
```
@article{lu2024ovis,
title={Ovis: Structural Embedding Alignment for Multimodal Large Language Model},
author={Shiyin Lu and Yang Li and Qing-Guo Chen and Zhao Xu and Weihua Luo and Kaifu Zhang and Han-Jia Ye},
year={2024},
journal={arXiv:2405.20797}
}
```
## License
The project is licensed under the Apache 2.0 License and is restricted to uses that comply with the license agreements of Gemma2 and Siglip. |
ShenaoZ/0.0005_withdpo_4iters_bs256_dpolastdataup_5551lr_iter_4 | ShenaoZ | "2024-05-09T08:12:52Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_3",
"base_model:finetune:ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_3",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-09T07:26:35Z" | ---
license: mit
base_model: ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_3
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.0005_withdpo_4iters_bs256_dpolastdataup_5551lr_iter_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0005_withdpo_4iters_bs256_dpolastdataup_5551lr_iter_4
This model is a fine-tuned version of [ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_3](https://huggingface.co/ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_3) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
EnsorcelledEther/Kumiko | EnsorcelledEther | "2022-10-13T19:21:52Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2022-10-13T19:05:15Z" | ---
license: mit
---
### Kumiko on Stable Diffusion
This is the `Kumiko` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
I guess because they are png you can't see them? Idk, I'll fix it later. It's Kumiko! It looks like Kumiko.
Here is the new concept you will be able to use as a `style`:










|
jojoUla/bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-8-6 | jojoUla | "2023-04-24T09:50:58Z" | 115 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-04-16T10:06:20Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-8-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-8-6
This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-refute-no-label-40](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-refute-no-label-40) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8817 | 1.0 | 1 | 6.4648 |
| 7.1116 | 2.0 | 2 | 8.1799 |
| 5.2011 | 3.0 | 3 | 2.5263 |
| 3.0932 | 4.0 | 4 | 1.0840 |
| 2.5374 | 5.0 | 5 | 5.7930 |
| 2.035 | 6.0 | 6 | 0.0070 |
| 1.5178 | 7.0 | 7 | 0.9903 |
| 1.6442 | 8.0 | 8 | 0.4513 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
mlabonne/Zebrafish-7B | mlabonne | "2024-04-01T12:57:38Z" | 266 | 14 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"arxiv:2403.19522",
"base_model:liminerity/M7-7b",
"base_model:finetune:liminerity/M7-7b",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-31T16:16:06Z" | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
base_model:
- liminerity/M7-7b
- rwitz/experiment26-truthy-iter-0
---
# Zebrafish-7B
Zebrafish-7B is my first model using the new merge method called [Model Stock](https://arxiv.org/abs/2403.19522).
Zebrafish-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [rwitz/experiment26-truthy-iter-0](https://huggingface.co/rwitz/experiment26-truthy-iter-0)
Special thanks to Charles Goddard for the quick implementation!
## 🏆 Evaluation
### Nous
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) [📄](https://gist.github.com/mlabonne/1d33c86824b3a11d2308e36db1ba41c1) | 62.74 | 45.37 | 77.01 | 78.39 | 50.2 |
| [**mlabonne/Zebrafish-7B**](https://huggingface.co/mlabonne/Zebrafish-7B) [📄](https://gist.github.com/mlabonne/719d5e106eefbcffb951b65616dcbec4) | **62.41** | **44.92** | **77.18** | **78.25** | **49.28** |
| [mlabonne/Beyonder-4x7B-v3](https://huggingface.co/mlabonne/Beyonder-4x7B-v3) [📄](https://gist.github.com/mlabonne/3740020807e559f7057c32e85ce42d92) | 61.91 | 45.85 | 76.67 | 74.98 | 50.12 |
| [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) [📄](https://gist.github.com/mlabonne/ad0c665bbe581c8420136c3b52b3c15c) | 60.25 | 46.06 | 76.77 | 70.32 | 47.86 |
| [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) [📄](https://gist.github.com/mlabonne/05d358e17dffdf9eee7c2322380c9da6) | 54.81 | 38.5 | 71.64 | 66.82 | 42.29 |
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: liminerity/M7-7b
- model: rwitz/experiment26-truthy-iter-0
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Zebrafish-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
zbxxx/mistral-7b-v3 | zbxxx | "2024-05-27T07:49:38Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-27T07:49:38Z" | ---
license: apache-2.0
---
|
bestaiheadshotgenerator/bestaiheadshotgenerator | bestaiheadshotgenerator | "2025-03-12T10:43:16Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-12T10:18:53Z" | # Best AI Headshot Generators
As a solopreneur, my face is my brand. I’ve spent years building my audience by showing up consistently, earning their trust, and keeping things real behind the scenes. It’s how I stand out in the crowded online world. High-quality images are a big part of that—they grab attention, make my content relatable, and help me connect with my people. Whether I’m posting a new article, launching a service, or pitching to brands as a strategist, I need photos that match the vibe and look professional. But perfectionism can grind everything to a halt—I don’t have time for photoshoots, and snapping selfies in ten outfits isn’t cutting it. That’s when I discovered the [best AI headshot generator](https://www.headshotpro.com/?via=joseph-gough)—HeadshotPro—and dove into the world of AI tools to solve my photo woes. With experts predicting 90% of online content will be AI-generated by 2026 and image generators growing at a wild 441% year-on-year, these tools are a game-changer for creators like me. Here’s my honest take on the top options, with HeadshotPro leading the pack.
## Why I Needed an AI Headshot Generator
Picture this: I’m racing to finish a blog post, but I need a sharp headshot to go with it. Or I’m pitching a collaboration, and my LinkedIn profile pic looks like it’s from 2015. Scheduling a photoshoot? Not happening—I’m too busy writing, strategizing, and running my business. Plus, photographers aren’t cheap, and I need variety: casual shots for Instagram, polished ones for my website, maybe something quirky for a newsletter. An **AI headshot generator** sounded like the perfect shortcut—fast, customizable, and budget-friendly. It’s all about **AI photo generation** that saves the day when perfectionism threatens to derail my momentum.
## The Lineup: Top AI Headshot Generators I Tried
I rolled up my sleeves and tested a handful of these tools. Here’s the rundown:
1. **Canva AI Headshot Generator**
- *What’s It Like?* Super simple, free with credits, and good for basic tweaks—think lighting or background swaps.
- *Best For:* Quick social media refreshes or a resume pic in a pinch.
- *Price:* Free, with optional credits if you want more.
- *My Take:* It’s fine for casual stuff, but the results feel more “eh” than “wow.” Limited customization left me wanting more.
2. **Dreamwave AI**
- *What’s It Like?* Upload 5+ photos, get 100+ headshots. Big names like Google and Apple trust it, so it’s legit.
- *Best For:* Pros who need realistic, high-quality shots.
- *Price:* Starts at $29.
- *My Take:* Impressive quality, but it’s a bit of a time commitment to upload enough photos. Solid, but not my fave.
3. **Fotor AI Headshot Generator**
- *What’s It Like?* Free, fast, and fun—upload one selfie and get up to 100 shots with styles like “business” or “holiday.”
- *Best For:* Playing around or casual headshots.
- *Price:* Free.
- *My Take:* It’s a blast for free, but some shots looked like a distant cousin, not me. Good for experimenting, not pro-level needs.
4. **HeadshotPro**
- *What’s It Like?* Upload 15 selfies, pick your vibe (backgrounds, outfits), and boom—up to 200 headshots powered by Flux AI.
- *Best For:* Anyone needing top-tier, professional headshots fast—solopreneurs, teams, you name it.
- *Price:* Starts at $29, with packages up to $59.
- *My Take:* This one stole the show. More on why in a sec.
## Why HeadshotPro Stands Out
After messing around with these tools, HeadshotPro stood out like a neon sign in a fog. Here’s why it’s my top pick:
- **Quality That Wows:** Flux AI makes these headshots so realistic, I double-checked to see if I’d secretly hired a photographer. The lighting, the details—it’s studio-level stuff.
- **Variety Galore:** I went with the $39 Professional package and got 100 headshots. Different poses, outfits, backgrounds—I had options for every platform and mood.
- **Speedy Delivery:** Two hours after uploading my selfies, I had a link to my shots. There’s even a 1-hour option if you’re in a rush.
- **Customization Heaven:** Pick your backdrop (office, outdoors, whatever) and outfit styles. I told it my height, weight, and vibe, and it nailed it.
- **Remix Magic:** Found a fave shot? Hit “Remix Photo” and get more variations of it—free. It’s like having a personal stylist tweaking your look.
- **Trust Factor:** Over 17 million headshots created, a 4.7/5 rating from nearly 2,000 reviews—this isn’t some fly-by-night app.
My experience? I uploaded 15 selfies—some with glasses, some without—and let HeadshotPro do its thing. Two hours later, I’m scrolling through a gallery of me looking like a polished pro. One shot had me with a mic, feeling like a rockstar; another was perfect for LinkedIn. The realism blew me away—my freckles, my smirk, all there. I remixed my favorite and got even more winners. For $39, it was a steal compared to a $200 photoshoot.
## My Favorite HeadshotPro Moment
Here’s a peek at my fave: me in a sleek blazer against a clean office backdrop. It’s the kind of shot I’d slap on my website and feel proud of. The AI nailed my vibe—professional but approachable. I didn’t have to fuss with lighting or iron a shirt. It just *worked*.
## When AI Headshots Shine (and When They Don’t)
These tools are gold for:
- LinkedIn glow-ups.
- Website headshots that scream “I’ve got my act together.”
- Team photos that match without herding everyone into a studio.
- Social media posts or marketing vibes on the fly.
But they’re not perfect for everything. Skip them if:
- You need a specific location (AI can’t plop you on a beach… yet).
- You’re holding a product for an ad.
- You want raw, candid vibes a photographer can capture live.
## How I Picked the Best
I judged these tools on:
- **Realism:** Did I look like me, or a plastic doll?
- **Customization:** Could I tweak it to fit my brand?
- **Ease:** Was it simple, or did I need a manual?
- **Price:** Worth the cash?
- **Speed:** How fast could I get it done?
HeadshotPro aced every test. It balances quality and convenience without breaking the bank.
## Tips for Finding Your Perfect AI Headshot Generator
- **Know Your Goal:** Casual Instagram pic or corporate headshot? Pick accordingly.
- **Test Freebies First:** Dip your toes in with free versions before spending.
- **Check the Output:** Make sure it’s high-res and pro-ready.
- **Read Reviews:** See what others say about realism and reliability.
- **Match Your Budget:** Sometimes a few extra bucks gets you way better results.
## My Verdict: HeadshotPro Wins
AI headshot generators are a lifesaver for busy creators like me. They’re hassle-free, affordable, and let me experiment until I find the perfect shot. After trying a bunch, HeadshotPro is my go-to. The **AI photo generation** is so good, I’m still shocked it’s not a real photoshoot. Whether you’re a solopreneur like me or running a team, it’s the easiest way to level up your online presence.
Ready to ditch the selfie struggle? Head to [HeadshotPro](https://www.headshotpro.com/?via=joseph-gough), upload some pics, and watch the magic happen. Trust me, your LinkedIn will thank you.
## My Top 5 AI Headshot Generators
1. **HeadshotPro** - The champ for quality, variety, and speed.
2. **Dreamwave AI** - Solid for realistic pro shots.
3. **Canva AI** - Quick and free for basic needs.
4. **Fotor AI** - Fun and free, great for casual play.
5. **Aragon.AI** - Hyper-realistic with fast results.
HeadshotPro’s my winner—try it and see why! |
shahukareem/moodhu-sea | shahukareem | "2023-01-19T19:22:19Z" | 5 | 0 | diffusers | [
"diffusers",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"landscape",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-01-19T19:20:46Z" | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- landscape
widget:
- text: A photo of moodhu sea, during sunset
---
# DreamBooth model for the moodhu concept trained by shahukareem on the shahukareem/moodhu dataset.
This is a Stable Diffusion model fine-tuned on the moodhu concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of moodhu sea**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `sea` images for the landscape theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('shahukareem/moodhu-sea')
# A prompt is required; use the instance prompt this model was trained on
image = pipeline('a photo of moodhu sea, during sunset').images[0]
image
```
|
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_sign_acc_ef_signal_it_273 | furrutiav | "2024-03-11T19:06:23Z" | 91 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-03-11T19:01:57Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
llmvetter/a2c-PandaReachDense-v3 | llmvetter | "2024-06-17T10:54:55Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-17T10:50:41Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.16 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the trained agent from the Hub (the checkpoint filename is an assumption; check the repository's files):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Checkpoint filename is an assumption; check this repo's files for the exact name
checkpoint = load_from_hub("llmvetter/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
DreadPoor/Alita99-8B-LINEAR | DreadPoor | "2025-01-19T16:00:05Z" | 12 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:DreadPoor/Heart_Stolen-8B-Model_Stock",
"base_model:merge:DreadPoor/Heart_Stolen-8B-Model_Stock",
"base_model:grimjim/Llama-3-Instruct-abliteration-LoRA-8B",
"base_model:merge:grimjim/Llama-3-Instruct-abliteration-LoRA-8B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-25T18:30:49Z" | ---
base_model:
- DreadPoor/LemonP_ALT-8B-Model_Stock
- DreadPoor/Heart_Stolen-8B-Model_Stock
- DreadPoor/Aspire_1.3-8B_model-stock
- grimjim/Llama-3-Instruct-abliteration-LoRA-8B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
model-index:
- name: Alita99-8B-LINEAR
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 71.9
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=DreadPoor%2FAlita99-8B-LINEAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 35.01
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=DreadPoor%2FAlita99-8B-LINEAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 16.47
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=DreadPoor%2FAlita99-8B-LINEAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.84
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=DreadPoor%2FAlita99-8B-LINEAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 12.93
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=DreadPoor%2FAlita99-8B-LINEAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 31.21
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=DreadPoor%2FAlita99-8B-LINEAR
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [DreadPoor/LemonP_ALT-8B-Model_Stock](https://huggingface.co/DreadPoor/LemonP_ALT-8B-Model_Stock)
* [DreadPoor/Heart_Stolen-8B-Model_Stock](https://huggingface.co/DreadPoor/Heart_Stolen-8B-Model_Stock)
* [DreadPoor/Aspire_1.3-8B_model-stock](https://huggingface.co/DreadPoor/Aspire_1.3-8B_model-stock) + [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: DreadPoor/Aspire_1.3-8B_model-stock+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
parameters:
weight: 1.0
- model: DreadPoor/LemonP_ALT-8B-Model_Stock
parameters:
weight: 1.0
- model: DreadPoor/Heart_Stolen-8B-Model_Stock
parameters:
weight: 1.0
merge_method: linear
normalize: false
int8_mask: true
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/DreadPoor__Alita99-8B-LINEAR-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=DreadPoor%2FAlita99-8B-LINEAR&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 29.39|
|IFEval (0-Shot) | 71.90|
|BBH (3-Shot) | 35.01|
|MATH Lvl 5 (4-Shot)| 16.47|
|GPQA (0-shot) | 8.84|
|MuSR (0-shot) | 12.93|
|MMLU-PRO (5-shot) | 31.21|
|
vania-848/gemma-product-description | vania-848 | "2025-03-18T09:56:51Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | "2025-03-18T07:52:59Z" | ---
base_model: google/gemma-3-4b-pt
library_name: transformers
model_name: gemma-product-description
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-product-description
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vania-848/gemma-product-description", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0.dev0
- Pytorch: 2.1.0
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
paicup09/poca-SoccerTwos | paicup09 | "2023-02-20T20:13:59Z" | 13 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2023-02-20T20:13:40Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
 2. Write your model_id: paicup09/poca-SoccerTwos
 3. Select your *.nn or *.onnx file
 4. Click on Watch the agent play 👀
|
anzeo/prompt_fine_tuned_CB_sloberta | anzeo | "2024-05-22T11:33:05Z" | 3 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:EMBEDDIA/sloberta",
"base_model:adapter:EMBEDDIA/sloberta",
"license:cc-by-sa-4.0",
"region:us"
] | null | "2024-05-22T11:04:11Z" | ---
license: cc-by-sa-4.0
library_name: peft
tags:
- generated_from_trainer
base_model: EMBEDDIA/sloberta
metrics:
- accuracy
- f1
model-index:
- name: prompt_fine_tuned_CB_sloberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prompt_fine_tuned_CB_sloberta
This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7179
- Accuracy: 0.3182
- F1: 0.1591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.8289 | 3.5714 | 50 | 1.9695 | 0.3182 | 0.1536 |
| 0.7265 | 7.1429 | 100 | 1.4997 | 0.3636 | 0.2273 |
| 0.6323 | 10.7143 | 150 | 1.4937 | 0.3636 | 0.2891 |
| 0.5933 | 14.2857 | 200 | 1.7106 | 0.2727 | 0.2290 |
| 0.5496 | 17.8571 | 250 | 1.3607 | 0.3636 | 0.2821 |
| 0.4356 | 21.4286 | 300 | 1.4913 | 0.3182 | 0.1536 |
| 0.3874 | 25.0 | 350 | 1.6290 | 0.2727 | 0.1527 |
| 0.3719 | 28.5714 | 400 | 1.7179 | 0.3182 | 0.1591 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.1.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Aguf/Aba_GLM | Aguf | "2024-10-18T08:34:14Z" | 7 | 1 | null | [
"safetensors",
"chatglm",
"custom_code",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | "2024-10-17T15:08:08Z" | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
haripritam/SmolLM2-360M-VIT | haripritam | "2024-11-19T17:10:03Z" | 138 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-19T17:09:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thibaud/hermes-3-Llama-3.1-8b-v0 | thibaud | "2024-08-19T15:50:39Z" | 7 | 0 | null | [
"safetensors",
"llama",
"Llama-3",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"roleplaying",
"chat",
"en",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3",
"region:us"
] | null | "2024-08-19T15:40:57Z" | ---
language:
- en
license: llama3
tags:
- Llama-3
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
base_model: meta-llama/Meta-Llama-3.1-8B
widget:
- example_title: Hermes 3
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: What is the meaning of life?
model-index:
- name: Hermes-3-Llama-3.1-70B
results: []
---
# Hermes 3 - Llama-3.1 8B

## Model Description
Hermes 3 is the latest version of our flagship Hermes series of LLMs by Nous Research.
For more details on new capabilities, training results, and more, see the [**Hermes 3 Technical Report**](https://nousresearch.com/wp-content/uploads/2024/08/Hermes-3-Technical-Report.pdf).
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board.
The ethos of the Hermes series of models is focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user.
The Hermes 3 series builds and expands on the Hermes 2 set of capabilities, including more powerful and reliable function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills.
# Benchmarks
Hermes 3 is competitive, if not superior, to Llama-3.1 Instruct models at general capabilities, with varying strengths and weaknesses attributable between the two.
Full benchmark comparisons below:

# Prompt Format
Hermes 3 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are Hermes 3, a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 3, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 3."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
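For example, a minimal sketch reusing `tokenizer`, `model`, and `messages` from the snippet above (`max_new_tokens` is an arbitrary choice):
```python
# add_generation_prompt=True appends the assistant header so the model answers next
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```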
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
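How you extract that block is up to your inference code; a minimal regex-based sketch (an illustrative approach, not the official helper from the repo linked above):
```python
import json
import re

def parse_tool_call(response_text):
    """Return the {"arguments": ..., "name": ...} dict from a <tool_call> block, or None."""
    match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", response_text, re.DOTALL)
    return json.loads(match.group(1)) if match else None
```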
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, it should follow the format of that JSON to create its response. All you have to do is give a typical user prompt, and it will respond in JSON.
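As an illustration, a schema string can be produced from a pydantic model roughly like this (a sketch assuming pydantic v2; the `Character` model is a made-up example, not part of the official codebase):
```python
import json
from pydantic import BaseModel

class Character(BaseModel):  # hypothetical example schema
    name: str
    age: int
    skills: list[str]

# Paste this JSON schema into the <schema> block of the system prompt above
schema = json.dumps(Character.model_json_schema(), indent=2)
print(schema)
```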
# Inference
Here is example code using HuggingFace Transformers to inference the model
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-3-Llama-3.1-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-3-Llama-3.1-8B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
```
You can also run this model with vLLM, by running the following in your terminal after `pip install vllm`
`vllm serve NousResearch/Hermes-3-Llama-3.1-8B`
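The vLLM server exposes an OpenAI-compatible API; a minimal sketch of querying it (assuming the default host and port, and that the `openai` Python client is installed):
```python
from openai import OpenAI

# vLLM serves an OpenAI-compatible endpoint at /v1; the API key can be any placeholder
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.chat.completions.create(
    model="NousResearch/Hermes-3-Llama-3.1-8B",
    messages=[{"role": "user", "content": "Hello, who are you?"}],
)
print(completion.choices[0].message.content)
```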
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

## Quantized Versions:
GGUF Quants: https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B-GGUF
# How to cite:
```bibtex
@misc{Hermes-3-Llama-3.1-8B,
  url={https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B},
title={Hermes-3-Llama-3.1-8B},
author={"Teknium", "theemozilla", "Chen Guang", "interstellarninja", "karan4d", "huemin_art"}
}
``` |
souzat19/Llama3.1_fn14133.29122024 | souzat19 | "2024-12-31T03:30:52Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-30T02:59:17Z" | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** souzat19
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gemasphi/f-real-setfit-ss-paraphrase-multilingual-mpnet-base-v2 | gemasphi | "2022-12-18T22:04:39Z" | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-12-18T22:04:18Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2727 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2727,
"warmup_steps": 273,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
OwOpeepeepoopoo/herewegoagain31 | OwOpeepeepoopoo | "2024-05-04T17:23:57Z" | 89 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-04T17:22:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
llavallava/qwen2.5-3b-instruct-trl-sft-lora-gazefollow_xml_out_1280_int_maxpixels1003520_2epochs | llavallava | "2025-03-09T01:32:59Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-03-08T03:01:19Z" | ---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: qwen2.5-3b-instruct-trl-sft-lora-gazefollow_xml_out_1280_int_maxpixels1003520_2epochs
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-3b-instruct-trl-sft-lora-gazefollow_xml_out_1280_int_maxpixels1003520_2epochs
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="llavallava/qwen2.5-3b-instruct-trl-sft-lora-gazefollow_xml_out_1280_int_maxpixels1003520_2epochs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
himanshubeniwal/mt5-base-finetuned-kk-to-en-filthy-American | himanshubeniwal | "2024-04-27T19:31:28Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-27T19:24:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lcw99/llama-3-10b-it-ko-2024-0527 | lcw99 | "2024-06-01T16:48:53Z" | 2,255 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-27T03:16:36Z" | ---
language:
- ko
license: apache-2.0
library_name: transformers
---
# Model Card for Model ID
## Model Details
### Model Description
Instruction tuning of meta-llama/Meta-Llama-3-8B-Instruct with added Korean layers.
#### Chat template
Prompts are built with `tokenizer.apply_chat_template(chat, tokenize=False)`.
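A minimal usage sketch (the chat contents and `add_generation_prompt=True` are illustrative assumptions, not taken from the original card):
```python
# Minimal sketch: building a prompt with the model's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lcw99/llama-3-10b-it-ko-2024-0527")

chat = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Introduce yourself in Korean."},
]

# add_generation_prompt=True appends the assistant turn marker so generation can start.
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(prompt)
```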
|
grelam/ppo-LunarLander-v2 | grelam | "2023-11-28T13:50:48Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-28T13:50:21Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.39 +/- 17.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
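A hedged completion of the stub above (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption; adjust it to the actual file in this repository):
```python
# Sketch: download the checkpoint from the Hub, load it as a PPO agent, and roll it out.
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed from the common "<algo>-<EnvId>.zip" convention.
checkpoint = load_from_hub(repo_id="grelam/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```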
|
BoyaWu10/bunny-pretrain-phi-3-siglip | BoyaWu10 | "2024-05-08T12:19:47Z" | 43 | 0 | transformers | [
"transformers",
"bunny-phi3",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-05-08T12:19:17Z" | ---
inference: false
license: apache-2.0
---
# Model Card
Bunny is a family of lightweight multimodal models.
Bunny-pretrain-phi-3-siglip provides the pretrained weights for [Bunny-v1.0-4B](https://huggingface.co/BAAI/Bunny-v1_0-4B), which leverages Phi-3-Mini-4K-Instruct as the language model backbone and SigLIP as the vision encoder.
It is pretrained on LAION-2M.
More details about this model can be found in [GitHub](https://github.com/BAAI-DCAI/Bunny).
# License
This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses.
The content of this project itself is licensed under the Apache license 2.0.
|
lillybak/QA-Physics_QLoRA-mistral-7b-50epochs-v1 | lillybak | "2024-02-08T23:40:03Z" | 60 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-02-08T23:34:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mesolitica/llama-2b-hf-32768-fpf | mesolitica | "2023-10-02T13:42:12Z" | 176 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ms",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-09-23T08:52:50Z" | ---
language:
- ms
---
# Full Parameter Finetuning 2B 32768 context length Llama2 on Malaysian text
The 2B model is derived from the first 5 layers of the 13B model.
README at https://github.com/mesolitica/malaya/tree/5.1/session/llama2#2b-32768-context-length-flash-attention-2
WandB, https://wandb.ai/mesolitica/fpf-Llama-2-2b-32k-hf |
Dortp58/deepseekr1-qwen-1.5B_grpo_imo | Dortp58 | "2025-03-24T11:50:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-03-24T11:47:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mantis-VL/mantis-8b-idefics2_8192 | Mantis-VL | "2024-10-20T08:56:41Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"idefics2",
"image-text-to-text",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"base_model:finetune:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-05-20T05:42:22Z" | ---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: mantis-8b-idefics2_8192
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mantis-8b-idefics2_8192
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
dlby/testModel35 | dlby | "2024-05-24T07:16:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-24T07:16:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vmpsergio/9ddf4f7e-55a9-4ba7-be76-e7e0d3e48c1c | vmpsergio | "2025-01-10T23:29:45Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-10T23:22:55Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9ddf4f7e-55a9-4ba7-be76-e7e0d3e48c1c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cd63c00361dee9fe_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cd63c00361dee9fe_train_data.json
type:
field_input: system
field_instruction: question
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: vmpsergio/9ddf4f7e-55a9-4ba7-be76-e7e0d3e48c1c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/cd63c00361dee9fe_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8f367aa5-d8ed-4ba2-adbb-8ec4c3374bd9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8f367aa5-d8ed-4ba2-adbb-8ec4c3374bd9
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9ddf4f7e-55a9-4ba7-be76-e7e0d3e48c1c
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 0.0 | 0.0019 | 8 | nan |
| 0.0 | 0.0038 | 16 | nan |
| 0.0 | 0.0056 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Bedru/fine_tuned_whisper_amharic_hausa | Bedru | "2025-02-24T10:49:13Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-02-24T09:52:58Z" | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: fine_tuned_whisper_amharic_hausa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_whisper_amharic_hausa
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3823
- Wer: 93.6131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 3.1969 | 0.1244 | 25 | 3.1549 | 231.7939 |
| 2.3785 | 0.2488 | 50 | 2.1720 | 108.4643 |
| 1.7437 | 0.3731 | 75 | 1.6113 | 98.6384 |
| 1.4651 | 0.4975 | 100 | 1.3823 | 93.6131 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
minhtuan7akp/snowflake-m-v2.0-vietnamese-finetune_v2 | minhtuan7akp | "2025-03-20T01:02:54Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"gte",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:21892",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-m-v2.0",
"base_model:finetune:Snowflake/snowflake-arctic-embed-m-v2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-03-20T01:00:55Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:21892
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-m-v2.0
widget:
- source_sentence: Tổng thống Roosevelt có quan điểm gì về việc phục hồi lại Đông
Dương cho Pháp?
sentences:
- "3. Hãy vẽ sơ đồ cơ cấu ngành công nghiệp chế biến nông, lâm,\r\nthuỷ sản. Nêu\
\ một ví dụ thể hiện rõ mối quan hệ giữa vùng nguyên\r\nliệu với các cơ sở công\
\ nghiệp chế biến.\r\nVấn đề phát triển công nghiệp\r\nsản xuất hàng tiêu dùng\r\
\n\r\nl. Công nghiệp dệt, may\r\n\r\na) Công nghiệp dệt\r\n\r\n— Dệt là một nghề\
\ truyền thống có từ lâu đời ở nước ta. Tuy nhiên, việc\r\nhình thành ngành công\
\ nghiệp dệt có thể được coi là từ khi nhà máy dệt\r\nNam Định ra đời.\r\nNgành\
\ dệt của nước ta được phát triển dựa trên cơ sở các thế mạnh về\r\nnguồn lao\
\ động và thị trường tiêu thụ. Nguồn nguyên liệu có thể khai\r\nthác từ nông nghiệp\
\ (bông, đay, lanh, tơ tằm...) hoặc từ công nghiệp hoá\r\nhọc (sợi hoá học).\r\
\nHãy nêu các vùng trồng bông lớn của nước ta và vai trò của chúng đối với việc\r\
\nphát triển công nghiệp dệt.\r\n\r\nSợi (nghìn tấn) Vải lụa (triệu m?)\r\n\r\n\
1990 1995 2000 2005 Năm\r\n\r\n[1 sợi ———®——— Vải lụa\r\nHình 37.1. Tình hình\
\ sản xuất sợi và vải lụa\r\n\r\nHãy nhận xét về tốc độ tăng trưởng một số sản\
\ phẩm của công nghiệp dệt qua\r\nbiểu đồ trên.\r\n\r\n155\r\n— Công nghiệp dệt\
\ ở nước ta đã trải qua nhiều thăng trầm. Trong những\r\n\r\nnăm gần đây, những\
\ khó khăn về thị trường và đổi mới trang thiết bị,\r\nnguyên liệu đã ảnh hưởng\
\ nhiều đến việc phát triển của ngành công\r\nnghiệp này."
- "khí, quân trang, quân dụng cho người Pháp đang chạy trốn. Sau \r\nnày, Sennô\
\ đã viết về mệnh lệnh này như sau: \"Chính phủ Mỹ quan\r\ntâm tới việc thay người\
\ Pháp buộc phải rời khỏi Đông Dương để\r\nsau chiến tranh, để sau vấn để tách\
\ thuộc địa của họ ra sẽ de\r\ndàng hom\"1.\r\nNhưng kế hoạch của Rudơven không\
\ thực hiện được vì sự chống \r\nđổi cùa Anh, nước có nhiều thuộc địa nhất lúc\
\ đó. Anh lo ngại Mỹ \r\nsê lợi dụng chế độ \"ủy thác quốc tế\" để cai quản luôn\
\ cà các thuộc \r\nđịa với sự bảo hộ cùa Anh (trên thực tế, sau này, có một số\
\ thuộc \r\nđịa do quốc tế thực hiện chế độ ủy thác, đã trở thành khu vực chịu\
\ \r\nảnh hưởng của Mỹ). Thủ tướng Anh lúc đó là Sớcsin đã kiên quyết \r\nđòi\
\ cho Pháp phải được trở lại Đông Dương trước khi Nhật bị đánh \r\nbại. Sau năm\
\ 1945, do các mối quan hệ giữa Mỹ và Anh, Pháp, và \r\ndo quyền lợi của mình,\
\ Mỹ đã dần dần ủng hộ việc Pháp quay trở \r\nlại Đông Dương.\r\nTừ mùa thu năm\
\ 1945, thoạt đầu, chính quyền Tơruman quyết \r\nđịnh không đưa vấn đề Đông Dương\
\ vào danh sách những vấn đề \r\nphải giải quyết ở Viễn Đông. Sau đó, các quan\
\ chức Mỹ ở Đông Dương\r\n1. G.Potơ, Việt Nam - lịch sứ qua các tư liệu, Niu American\
\ Librêri, Luân Đôn, \r\n1981. tr.6. Dần theo: Viện Qup.n hệ Quốc tế, Bộ Ngoại\
\ giao, Chù tịch\r\nHồ Chí Minh với công tác ngoại giao, Nxb. Sự thật, Hà Nội,\
\ 1990, tr.47.\r\n2. Theo L.A.Patti, Why Vietnam, California, Press Lot Angiolet,\
\ 1980, p.97.\r\n154"
- "mất đầu, lúng túng trước những biến động mau lẹ của tình hình.\r\nv ề phía Pháp,\
\ phái De Gaulle muốn tái kiểm soát Đông Dương \r\nnên đã lập ra một đạo quân\
\ viễn chinh để tham gia vào các cuộc \r\nhành quân của quân Đồng minh tại chiến\
\ trường Đông Dương. \r\nTrước đó, ngày 24/3/1945, De Gaulle đã ra tuyên bố về\
\ tương lai \r\ncủa Đông Dương hậu chiến. Ngày 7/6/1945, De Gaulle chính thức\
\ \r\ngiao trách nhiệm cho tướng Philippe Leclerc thành lập và chỉ huy \r\nđạo\
\ quân viễn chinh Pháp gồm hai sư đoàn sẵn sàng sang Đông \r\nD ương’. Ngày 15/8/1945,\
\ De Gaulle lại bổ nhiệm Đô đốc Géorge \r\nThiery d'Argenlieu làm Cao ủy Đông\
\ Dương, kiêm chức Tổng Tư \r\nlệnh hải lục không quân Pháp ở Viễn Đông, có trách\
\ nhiệm soạn \r\nthảo phương án đưa quân tái chiếm Việt Nam, Lào, Cao Miên2. \r\
\nĐồng thời, Chính phủ Pháp cũng ra lệnh cho các đom vị quân đội \r\nđang có mặt\
\ ờ Trung Quốc và Ản Độ quay trở lại Đông Dương \r\nsớm nhất có thể.\r\nĐối với\
\ Mỹ, Tổng thống Roosevelt từ đầu đã có ý định \"ngăn\r\ncản không cho nước Pháp\
\ phục hồi lại Đông Dương\" và \"mong\r\nmuốn một sự ủy trị cùa Liên hợp quốc\
\ cho đến khi những người\r\ndân có khả năng tự cai quán lấy mình\"3.\r\nNhưng\
\ sau khi Roosevelt qua đời, người kế nhiệm là Harry s. \r\nTruman lại thi hành\
\ một dường lối mới nhàm thay đổi lập trường \r\nvề vấn đề Đông Dương4. Trước\
\ sự lớn mạnh của Liên Xô, sức \r\nmạnh của phong trào dân chủ, sự phát triển\
\ của phong trào giành \r\nđộc lập dân tộc ở các nước thuộc địa và phụ thuộc cũng\
\ như của"
- source_sentence: Quá trình thâm nhập và phát triển của Nho giáo ở Việt Nam diễn
ra như thế nào trong giai đoạn chống Bắc thuộc và sau đó?
sentences:
- "LỊCH SỪ VIỆT NAM - TẬP 11\r\ngiầu mạnh hcm nhờ chiến tranh. Những nước bại trận\
\ như Đức, Ý, \r\nNhật thì kiệt quệ. Song dù thắng hay bại, sự kết thúc chiến\
\ tranh đặt \r\ncho mỗi nước những yêu cầu cấp bách cần giải quyết, tạo nên \r\
\nnhững đặc trưng kinh tế - xã hội ở nhóm nước này.\r\nSau chiến tranh thế giới,\
\ những nưóc công nghiệp chủ nghĩa \r\nchâu Âu và Nhật Bản đều bị chiến tranh\
\ tàn phá nặng nề. Nhiệm vụ \r\nquan trọng của họ ỉà hàn gắn vết thương chiến\
\ tranh, khôi phục \r\nkinh tế, ổn định đời sống xã hội. Đối với Mỹ, nhiệm vụ\
\ chủ yếu là \r\nphải chuyển hướng vận hành kinh tế từ một nền kinh tế phục vụ\
\ \r\nquân sự thời chiến sang nền kinh tế thời bình.\r\nNhừng nét cơ bản của tình\
\ hình thế giới nêu trên đã tác động \r\nđến hầu hết các khu vực trên thế giới,\
\ đặc biệt là khu vực Châu Á \r\nvà Đông Nam Á, tạo điều kiện thuận lợi cho cuộc\
\ đấu tranh giải \r\nphóng của các dân tộc Đông Dương. Từ đầu những năm 1950,\
\ tình \r\nhình cách mạng ba nước Đông Dương chuyển biến nhanh chóng. \r\nVới\
\ cuộc đi thăm Trung Quốc, Liên Xô của Chủ tịch Hồ Chí Minh \r\nđầu năm 1950 và\
\ việc các nước xã hội chủ nghĩa công nhận và đặt \r\nquan hệ ngoại giao với Chính\
\ phủ Việt Nam Dân chủ Cộng hòa là \r\nmột thắng lợi ngoại giao vô cùng quan trọng.\
\ Thắng lợi về ngoại \r\ngiao này đã chấm dứt thời kỳ chiến đấu đom độc, hầu như\
\ bị cách ly \r\nvới bên ngoài và từ đó tiếp nhận được sự đồng tình về chính trị\
\ và \r\nsự viện trợ về vật chất.\r\nVới sự giúp đỡ của Liên Xô, Trung Quốc và\
\ các nước xã hội"
- "(551 TCN - 479 TCN), sống ở thời Xuân Thu. về sau Mạnh Tử \r\nthời Chiến Quốc\
\ và Đổng Trọng Thư thời Tây Hán đã phát triển \r\nthêm học thuyết này làm cho\
\ nó ngày càng hoàn chinh.\r\nKinh điển Nho gia của Trung Quốc bao gồm Ngũ kinh\
\ (Thi, Thư,\r\nDịch, Le, Xuân Ihu) và Từ thư {Luận ngữ, Đại học, Trung dung,\r\
\nMạnh Tử). Từ thời Hán trở về sau, Nho giáo đã được coi là hệ tư \r\ntường chính\
\ thống của giai cấp thống trị ở Trung Hoa. Nho giáo \r\nchủ tnrơng: \"tôn quân,\
\ đại thống\" với ba rường mối \"tam cương\" cơ \r\nbản là đạo: \"vua - tôi\"\
, \" cha - con\", \" vợ - chồng\". Cùng với \"tam \r\ncương\" là \"ngũ thường\"\
\ tức năm phép ứng xử luân lý và đạo đức là: \r\nnhân, nghĩa, lễ, trí, tín, về\
\ đường lối chính trị là tu - tề - trị - bình \r\n(tu thân, tề gia, trị quốc,\
\ bình thiên hạ).\r\nNgay từ thời Tây Hán, trong chừng mực nhất định, Nho giáo\
\ đã \r\nbắt đầu thâm nhập vào xã hội Việt cổ để làm công cụ nô dịch nhân \r\n\
dân ta về tư tưởng và tinh thần. Nho giáo du nhập vào Việt Nam thời \r\nBắc thuộc,\
\ không còn là Nho giáo nguyên sơ nữa mà là Nho giáo \r\nđược Hán Nho trước nhất\
\ là Đổng Trọng Thư cải tạo cho thích hợp\r\n257\r\nLỊCH SỬ VIỆT NAM - TẬP 1\r\
\nvới chế độ phong kiến trung ương tập quyền. Nho giáo ở thời kỳ\r\nnày, ban đầu\
\ có lẽ chỉ dành cho đổi tượng là người Hán ở Giao Châu,\r\nchứ không được phổ\
\ biến rộng rãi. Dần dần do cần sự trợ giúp của\r\ncác quý tộc Lạc Việt, cũng\
\ nằm trong ý đồ đồng hóa dân tộc, Nho\r\ngiáo đã được chính quyền chủ động truyền\
\ bá ở phạm vi hẹp. Sách"
- "giới hạn trong khuôn khổ những gì có lợi cho vương quyền. Kinh Thi được giải\r\
\nthích theo lối cao quý, tao nhã; nam nữ bị ngăn cách (nam nữ thụ thụ bất thân);\r\
\ncon hát bị khinh rẻ là “xướng ca vô loại”.\r\nNhư vậy, Nho giáo theo quan niệm\
\ của Khổng - Mạnh đúng là đã không\r\ncòn nữa, thay vào đó đã là một thứ Nho\
\ giáo khác. Để phục vụ việc cai trị trong\r\n194\r\nDownloaded by Tu?n ?ào Minh\
\ ([email protected])\r\nlOMoARcPSD|49704028\r\nphạm vi quốc gia, bên\
\ cạnh cái “nhân” để lấy lòng dân, cần phải tăng liều lượng\r\nchất pháp luật\
\ của văn minh du mục. Nhiệm vụ này Hán Nho đã thực hiện một\r\ncách xuất sắc.\
\ Do vậy mà Nho giáo mới này được nhà Hán và các triều đại về\r\nsau ra sức đề\
\ cao.\r\n3.3. Quá trình thâm nhập, phát triển và những đặc điểm của Nho giáo\
\ Việt\r\nNam\r\n3.3.1. Hán Nho đã được các quan lại Trung Hoa như Tích Quang,\
\ Nhâm\r\nDiên, Sỹ Nhiếp ra sức truyền bá từ đầu Công nguyên. Tuy nhiên, vì đây\
\ là thứ\r\nvăn hóa do kẻ xâm lược áp đặt cho nên, suốt cả giai đoạn chống Bắc\
\ thuộc, Nho\r\ngiáo chưa có chỗ đứng trong xã hội Việt Nam. Đến năm 1070, với\
\ sự kiện Lí\r\nThánh Tông cho lập Văn Miếu thờ Chu Công, Khổng Tử, mới có thể\
\ xem là\r\nNho giáo được tiếp nhận thính thức. Chính vì vậy mà Nho giáo ở Việt\
\ Nam chủ\r\nyếu là Tống Nho chứ không phải Hán Nho, Đường Nho hay Minh Nho, Thanh\r\
\nNho.\r\nĐời Trần có Chu Văn An đào tạo được khá đông học trò. Các nhà nho\r\n\
những lớp đầu tiên này ra sức bài xích Phật giáo để khẳng định chỗ đứng của"
- source_sentence: Hưng Đạo Vương đã nói gì về Triệu Võ Vương?
sentences:
- "61 Chỗ này Hưng Đạo Vương nói lầm: Triệu Võ Vương đóng đô ở Phiên\r\nNgung tức\
\ là ở gần thành Quảng Châu bây giờ, mà châu Khâm, châu Liêm\r\nthì ở giáp giới\
\ nước ta. Lẽ nào đi đánh Trường Sa ở tỉnh Hồ Nam mà lại\r\nquay trở lại châu\
\ Khiêm, châu Liêm. Chắc hẳn người mình ngày trước không\r\nthuộc địa đồ và có\
\ lẽ rằng Hưng Đạo Vương lúc nói chuyện đó cũng tưởng\r\nlà kinh đô của Triệu\
\ Võ Vương ở đâu bên nước ta bây giờ, cho nên mới nói\r\nnhư thế chăng?"
- "phải một lần cống. Mà lệ cống thì phải cho nho-sĩ, thầy thuốc, thầy bói, thầy\r\
\ntoán-số và thợ-thuyền mổi hạng ba người, cùng với các sản-vật như là sừng\r\n\
tê, ngà voi, đồi-mồi, châu-báu và các vật lạ. Vua Mông-cổ lại đặt quan Đạt\x02\
lỗ-hoa-xích (tiếng Mông-cổ tức là quan Chưởng-ấn), để đi lại giám-trị các\r\n\
châu-quận nước Nam.\r\nÝ Mông-cổ muốn biết nhân-vật tài-sản nước Nam ta có những\
\ gì, và học-vấn\r\nxảo-kỹ ra sao, để liệu đường mà đánh lấy cho tiện. Và tuy\
\ rằng quyền chính\x02trị vẫn để cho vua nước Nam, nhưng đặt quan giám-trị để\
\ dần dần lập thành\r\nBảo-hộ.\r\nThánh-tông bề ngoài tuy vẫn chịu thần-phục,\
\ nhưng trong bụng cũng biết\r\nrằng Mông-cổ có ý đánh lấy nước mình, cho nên\
\ cứ tu binh dụng võ để\r\nphòng có ngày tranh chiến. Vậy tuyển đinh-tráng các\
\ lộ làm lính, phân quân\x02ngũ ra làm quân và đô: mỗi quân có 30 đô, mỗi đô có\
\ 80 người, bắt phải\r\nluyện tập luôn.\r\nNăm bính-dần (1266) nhân sứ Mông-cổ\
\ sang, Thánh-tông sai quan sang Tàu\r\nđáp lễ và xin đừng bắt nho-sĩ, thầy bói,\
\ và các thợ-thuyền sang cống. Vua\r\nMông-cổ ưng cho, nhưng bắt chịu 6 điều khác:\r\
\n1. Vua phải thân vào chầu\r\n2. Vua phải cho con hay là em sang làm tin\r\n\
3. Biên sổ dân sang nộp\r\n4. Phải chịu việc binh-dịch\r\n5. Phải nộp thuế-má\r\
\nVua An-nam cứ lần-lữa không chịu. Đến năm tân-mùi (1271) vua Mông-cổ\r\nlà Hốt-tất-liệt\
\ cải quốc-hiệu là Đại-nguyên, rồi cho sứ sang dụ Thánh-tông\r\nsang chầu nhưng\
\ Thánh-tông cáo bệnh không đi.\r\nNăm sau Nguyên-chủ cho sứ sang tìm cột đồng-trụ\
\ của Mã Viện trồng ngày"
- "Hưng Đạo Vương là một danh tướng đệ nhất nước Nam, đánh giặc Nguyên\r\ncó công\
\ to với nước, được phong làm Thái Sư, Thượng Phụ, Thượng Quốc\r\nCông, Bình Bắc\
\ Đại Nguyên Súy, Hưng Đạo Đại Vương. Vua lại sai người\r\nvề Vạn Kiếp làm sinh\
\ từ để thờ ngài ở chỗ dinh cũ của ngày đóng ngày\r\ntrước.\r\nHưng Đạo Vương\
\ làm quan đến đời vua Anh Tông thì xin về trí sĩ ở Vạn\r\nKiếp. Khi ngài sắp\
\ mất, vua Anh Tông có ngự giá đến thăm, nhận thấy ngài\r\nbệnh nặng, mới hỏi\
\ rằng: \" Thượng phụ một mai khuất núi, phỏng có quân\r\nbắc lại sang thì làm\
\ thế nào? \"\r\nHưng Đạo Vương tâu rằng: \" Nước ta thuở xưa, Triệu Võ Vương\
\ dựng\r\nnghiệp, Hán Đế đem binh đến đánh, Võ Vương sai dân đốt sạch đồng áng,\r\
\nkhông để lương thảo cho giặc chiếm được, rồi đem đại quân sang châu\r\nKhâm,\
\ châu Liêm đánh quận Tràng Sa 61 dùng đoản binh mà đánh được, đó\r\nlà một thời.\
\ Đến đời Đinh, Lê, nhiều người hiền lương giúp đỡ, bấy giờ nước\r\nNam đang cường,\
\ vua tôi đồng lòng, bụng dân phấn chấn; mà bên Tàu đang\r\nlúc suy nhược, cho\
\ nên ta đắp thành Bình Lỗ (thuộc Thái Nguyên) phá được\r\nquân nhà Tống, đó là\
\ một thời. Đến đời nhà Lý, quân Tống sang xâm, Lý Đế\r\nsai Lý Thường Kiệt đánh\
\ mặt Khâm, Liêm, dồn đến Mai Lĩnh, quân hùng,\r\ntướng dũng, đó là có thế đánh\
\ được. Kế đến bản triều, giặc Nguyên kéo đến\r\nvây bọc bốn mặt, may được vua\
\ tôi đồng lòng, anh em hòa mục, cả nước đấu\r\nsức lại mà đánh, mới bắt được\
\ tướng kia, cũng là lòng trời giúp ta mới được\r\nthế.\r\nĐại để, kẻ kia cậy\
\ có tràng trận, mà ta thì cậy có đoản binh; lấy đoản chống"
- source_sentence: Số thuốc phiện sản xuất được ở Đông Dương tăng lên bao nhiêu lần
vào năm 1941 so với năm 1940?
sentences:
- "Tổng thống chính quyền Sài Gòn năm 1971, và cho Níchxơn trong \r\ncuộc tranh\
\ cử Tổng thống Mỹ năm 1972. Với những tham vọng đó, \r\ntập đoàn Níchxơn coi\
\ cuộc hành quân này có ý nghĩa quyết định \r\nthắng lợi của Mỹ và quân đội Sài\
\ Gòn, tạo ra \"bước ngoặt thật sự\" \r\ncho cuộc chiến tranh xâm lược của chúng\
\ đang bế tắc ở Việt Nam.\r\nNgày 2-2-1971, Tổng thống Mỹ cùng cố vấn Kítxinhgiơ,\
\ Ngoại \r\ntrường Râugiơ, Bộ trường Quốc phòng Leđơ và Tham mưu trường \r\nliên\
\ quân Mơrơ nhóm họp thông qua kế hoạch mờ 3 cuộc hành \r\nquân: \"Lam Sơn 719\"\
\ đánh ra Đường 9 - Nam Lào, \"Toàn thắng 1-71\"\r\nđánh vào Đông Bắc Campuchia,\
\ \"Quang Trung 4\" đánh ra vùng ba\r\n318\r\nChương IV. Đánh thắng một bước quan\
\ trọng..,\r\nbiên giới ở Tây Nguyên, lấy Đường 9 - Nam Lào làm hướng tấn \r\n\
công chủ yếu, đồng thời dự kiến kế hoạch đối phó với dư luận \r\ntrong nước và\
\ thế giới. Ngày 3-2-1971, Níchxơn quyết định chính \r\nthức kế hoạch đánh sang\
\ Lào và cam kết với Nguyễn Văn Thiệu sẽ \r\ndùng không quân Mỹ yểm trợ cho quân\
\ đội Sài Gòn tấn công.\r\nĐe thực hiện cuộc hành quân \"Lam Sơn 719\", Mỹ đã\
\ huy động \r\nmột lực lượng rất lớn quân đội, bao gồm tất cả lực lượng dự bị\
\ \r\nchiến lược, lực lượng cơ động cùa Quân khu I, các lực lượng tiếp \r\nvận\
\ trung ương với sự chi viện cao của hỏa lực và không quân Mỹ. \r\nMột bộ phận\
\ thiết giáp và bộ binh Mỹ phối hợp tác chiến và hỗ trợ \r\nphía sau và một lực\
\ lượng bộ binh quân đội Viêng Chăn gồm GM30 \r\nvà GM33 phối hợp phía Tây Đường\
\ 9. Tổng số quân địch được huy"
- "và sản xuất 67.400 tấn nguyên liệu3. Tức là đã có 68.000ha các loại \r\ncây này\
\ được trồng thêm, trong đó lạc và thầu dầu là hai thứ tăng \r\nnhanh nhất. Lạc\
\ tăng từ 14.000ha và sản lượng 15.600 tấn năm\r\n1938 lên 49.100ha và sản lượng\
\ 32.300 tấn năm 1942; rồi 48.000ha \r\nvà 29.000 tấn năm 1943 và 46.300ha và\
\ 24.900 tấn vào năm 1944. \r\nThầu dầu tăng từ 3.000ha và 2.400 tấn nãm 1938\
\ lên lO.OOOha và\r\n3.000 tấn năm 1942, rồi 19.700ha và 4.700 tấn năm 1943 và\
\ 30.600ha \r\nvà 12.000 tấn năm 19444.\r\nSố lượng các chất dầu săn xuất tăng\
\ lên nhưng không đáp ứng \r\nđược nhu cầu, nhất là nhu cầu về dầu thắp sáng của\
\ nhân dân.\r\n- Công nghiệp dệt\r\nTrước chiến tranh, Đông Dương nhập trung bình\
\ mỗi năm 23.000 \r\ntấn bông, trong đó 16.000 tấn để các nhà máy Pháp kéo thành\
\ sợi\r\n1. BEI, 1943, F II, Trần Huy Liệu..., Tài liệu tham khảo lịch sử cách\
\ mạng\r\ncận đại Việt Nam, tập 8, Sđd, tr. 123.\r\n2. Témoignages..., Sđd, tr.\
\ 2-8.\r\n3. 4. Annuaire statistique de l' Union franqaise Outre mer 1939-1946,\r\
\ntr. F-76.\r\n538\r\nChương V. Xã hội Việt Nam dưới ách thống trị..\r\n(14.000\
\ tấn) cung cấp cho nhà máy dệt cùa Pháp khoảng 4.000 tấn, \r\nsố còn lại được\
\ phân phát cho các khung dệt thủ công. Đẩu chiến \r\ntranh, Đông Dương vẫn còn\
\ nhập được bông để kéo ra thành \r\n10.400 tấn sợi, nhưng năm 1942 chỉ còn nhập\
\ được số bông để kéo \r\nra được 1.507 tấn'. Đe bù vào số thiếu hụt cho sản xuất\
\ của nhà \r\nmáy, Pháp tăng cường việc bắt dân ta nhổ lúa để trồng bông và \r\
\ntrồng các thứ cây lấy sợi dùng cho dệt vải may mặc, hay làm bao"
- "hóa; thuế độc quyền; thué điền thổ...\r\nTheo những con số thống kê chính thức\
\ thì các loại thuế trên \r\nđều tăng lên đáng kể, khoảng từ ba đến hơn ba lần\
\ vào năm 1945 \r\n(số dự thu) so với năm 1939 (số thực thu) như sau:\r\nBảng\
\ 29: Thu nhập từ một sổ loại thuế ở Đông Dương \r\ntrong các năm 1939 và 19453\r\
\nĐom vị: nghìn đồng\r\nThuế 1939 1945\r\nThuế tiêu thụ và vận chuyển hàng hoá\
\ 20.655.000 58.265.000\r\nThuế muối, rượu, thuốc phiện, diêm, pháo,\r\nthuốc\
\ lá\r\n24.694.000 87.000.000\r\nThuế điền thổ, trước bạ 11.821.000 28.625.000\r\
\nvề thuốc phiện, do việc nhập khẩu bị ngừng, Pháp khuyến khích \r\nnhân dân thượng\
\ du trồng loại cây này nên số thuốc phiện sản xuất \r\nđược ngày một tăng: năm\
\ 1940: 7.560kg; nãm 1941: 17.344kg; năm\r\n1. Annuaire statistique de V Union\
\ f,rariỊaise Outre- mer 1939-1946, tr. K -\r\n90-93.\r\n2, 3. Annuaire statistique\
\ de runion firanẹaise Outre - mer 1939-1946, tr.\r\nK-90.\r\n552"
- source_sentence: Cái nhàn trong thơ Mạc Thiên Tứ khác với cái nhàn trong thơ của
các nho sĩ Đàng Ngoài như thế nào?
sentences:
- "Cá rồng mệt mất, kình ngao lộn tròng.\r\nOng với ve om sòm cụm liễu,\r\nBướm\
\ dập dìu lẽo đẽo chòm hoa.\r\nMây lien ba khóm một tòa,\r\nDưới trên biếc mát,\
\ gần xa xanh rè.\r\nCách bên khe tiếng ngư ra rả,\r\nGõ mạn thuyền ca họa đòi\
\ cung.\r\nDưới rùng mầy trẻ mục đồng,\r\nLưng trâu thổi địch, gió lằng theo khe.\r\
\nMạc Thiên Tứ còn coi đất Hà Tiên giàu đẹp của ông hơn cả \r\nchốn thị thành:\r\
\nTuy giang thôn nào khác Tràng an.\r\nĐâu no thỉ đó là an lạc,\r\nLựa phải chen\
\ chân chon thị thành.\r\nNgười thanh nhàn, vật thì long thạnh (thịnh),\r\nDân\
\ bang kỳ ỳ sánh kỳ tây.\r\nTư tưởng chủ đạo trong Hà Tiên thập cảnh là tư tưởng\
\ an nhàn. \r\nCái nhàn trong thơ Mạc Thiên Tứ khác với cái nhàn ưong thơ của\
\ \r\ncác nho sĩ Đàng Ngoài là nhàn ẩn sĩ, chán cảnh quan trường, còn cố \r\n\
cái nhàn của Mạc Thiên Tứ xuất phát từ cuộc sống lao động thanh \r\nbình, ấm no.\
\ ông đã thi vị hóa cuộc sống của người dân nơi đây:\r\nCành lành như đợi người\
\ lành,\r\nMua nhàn một khắc giá đành nghìn cân...\r\n546"
- "434\r\nChương VII. Làng xã - Đon vị cơ sở thời Bắc thuộc\r\nGiở lại những trang\
\ lịch sử lập làng cho thấy ban đầu làng được \r\ntạo dựng bởi một số thành viên\
\ cùng nhau khai canh lập ấp. Trong \r\nsố đó có thể là người cùng một họ; hoặc\
\ thuộc nhiều họ khác nhau. \r\nMột số làng đã lấy tên gọi cùa họ - tộc đầu tiên\
\ có công mở làng để \r\nđặt tên cho làng. Chẳng hạn Đặng Xá - làng của họ Đặng,\
\ tên Nôm \r\nlà làng Đang (xã Cương Chính); Mai Xá - làng của họ Mai (xã \r\n\
Minh Phượng, huyện Tiên Lữ, Hung Yên); Phạm Xá - làng của họ \r\nPhạm (xã Minh\
\ Tiến, huyện Phủ Cừ, Hưng Yên); Lê Xá - làng họ \r\nLê (Duy Tiên, Hà Nam); hoặc\
\ Dương Xá - làng họ Dương, tên \r\nNôm là làng Giàng... Ờ những làng như thế\
\ ban đầu chỉ gồm thành \r\nviên của dòng họ đến khai canh mở làng. Từ một họ\
\ sinh ra nhiều \r\nchi, phái, nhân số dòng họ ngày càng đông. Sau thêm một số\
\ họ \r\nkhác đến cùng nhau mở rộng làng ấp, đồng ruộng.\r\nSau một quá trinh\
\ khai hoang lập ấp, mỗi một dòng họ đều sở \r\nhữu một số diện tích ruộng đất\
\ nhất định, số diện tích đó thuộc \r\nruộng đất công của dòng họ. Có thể một\
\ số ruộng đất khai phá được \r\nđã trở thành tư hữu. Ruộng đất công đem chia\
\ cho các thành viên \r\ntrong họ làm nhà ở (thổ cư), ruộng để cày cấy lấy hoa\
\ lợi sinh sống \r\n(thổ canh). Cho đến nay, tư liệu không cho biết cụ thể cách\
\ thức \r\nphân phối ruộng đất trong các dòng họ cụ thể ra sao? Nhưng căn cứ \r\
\nv à o đ iể u tr a h ô i c ô d â n tộ c h ọ c m ộ t sô lâ n g ờ đ ổ n g b ă n\
\ g s ô n g H ổ n g"
- "LỊCH SỪ VIỆT NAM - TẬP 4\r\nChia nhau lớn bé thấp cao\r\nCá rồng mệt mất, kình\
\ ngao lộn tròng.\r\nOng với ve om sòm cụm liễu,\r\nBướm dập dìu lẽo đẽo chòm\
\ hoa.\r\nMây lien ba khóm một tòa,\r\nDưới trên biếc mát, gần xa xanh rè.\r\n\
Cách bên khe tiếng ngư ra rả,\r\nGõ mạn thuyền ca họa đòi cung.\r\nDưới rùng mầy\
\ trẻ mục đồng,\r\nLưng trâu thổi địch, gió lằng theo khe.\r\nMạc Thiên Tứ còn\
\ coi đất Hà Tiên giàu đẹp của ông hơn cả \r\nchốn thị thành:\r\nTuy giang thôn\
\ nào khác Tràng an.\r\nĐâu no thỉ đó là an lạc,\r\nLựa phải chen chân chon thị\
\ thành.\r\nNgười thanh nhàn, vật thì long thạnh (thịnh),\r\nDân bang kỳ ỳ sánh\
\ kỳ tây.\r\nTư tưởng chủ đạo trong Hà Tiên thập cảnh là tư tưởng an nhàn. \r\n\
Cái nhàn trong thơ Mạc Thiên Tứ khác với cái nhàn ưong thơ của \r\ncác nho sĩ\
\ Đàng Ngoài là nhàn ẩn sĩ, chán cảnh quan trường, còn cố \r\ncái nhàn của Mạc\
\ Thiên Tứ xuất phát từ cuộc sống lao động thanh \r\nbình, ấm no. ông đã thi vị\
\ hóa cuộc sống của người dân nơi đây:\r\nCành lành như đợi người lành,\r\nMua\
\ nhàn một khắc giá đành nghìn cân...\r\n546\r\nChối ai lợi chuốc danh mua,\r\n\
Vui nghề chài lưới tôm cua tháng ngày.\r\nKé gió mưa, người thi non nước,\r\n\
Hai phía đều chiếm được thu thanh.\r\nTùng kia mấy bức xanh tươi,\r\nĐòi thanh\
\ đòi nhã, một nơi một nhàn.\r\nMạc Thiên Tứ từ khi đem đất đai quy phụ, đặt dưới\
\ quyền quản \r\nlý của chúa Nguyễn, thì cả dòng họ ông cùng ưật tự xã hội miền\
\ Hà \r\nTiên dưới quyền khai thác của ông, đã ưở nên vững vàng và phồn \r\nthịnh.\
\ Điều đó càng tạo cho ông nhiều lạc quan ữong cuộc sống và"
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m-v2.0
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Snowflake/snowflake arctic embed m v2.0
type: Snowflake/snowflake-arctic-embed-m-v2.0
metrics:
- type: cosine_accuracy@1
value: 0.4420091324200913
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.678082191780822
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7401826484018265
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8054794520547945
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4420091324200913
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22602739726027396
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1480365296803653
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08054794520547943
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4420091324200913
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.678082191780822
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7401826484018265
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8054794520547945
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6282939655666836
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5709400594332107
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5768265728384441
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m-v2.0
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0) on the csv dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0) <!-- at revision 0d1661ceed1cb456c85726749d5be61ebb30d4f1 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: GteModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("minhtuan7akp/snowflake-m-v2.0-vietnamese-finetune_v2")
# Run inference
sentences = [
'Cái nhàn trong thơ Mạc Thiên Tứ khác với cái nhàn trong thơ của các nho sĩ Đàng Ngoài như thế nào?',
'Cá rồng mệt mất, kình ngao lộn tròng.\r\nOng với ve om sòm cụm liễu,\r\nBướm dập dìu lẽo đẽo chòm hoa.\r\nMây lien ba khóm một tòa,\r\nDưới trên biếc mát, gần xa xanh rè.\r\nCách bên khe tiếng ngư ra rả,\r\nGõ mạn thuyền ca họa đòi cung.\r\nDưới rùng mầy trẻ mục đồng,\r\nLưng trâu thổi địch, gió lằng theo khe.\r\nMạc Thiên Tứ còn coi đất Hà Tiên giàu đẹp của ông hơn cả \r\nchốn thị thành:\r\nTuy giang thôn nào khác Tràng an.\r\nĐâu no thỉ đó là an lạc,\r\nLựa phải chen chân chon thị thành.\r\nNgười thanh nhàn, vật thì long thạnh (thịnh),\r\nDân bang kỳ ỳ sánh kỳ tây.\r\nTư tưởng chủ đạo trong Hà Tiên thập cảnh là tư tưởng an nhàn. \r\nCái nhàn trong thơ Mạc Thiên Tứ khác với cái nhàn ưong thơ của \r\ncác nho sĩ Đàng Ngoài là nhàn ẩn sĩ, chán cảnh quan trường, còn cố \r\ncái nhàn của Mạc Thiên Tứ xuất phát từ cuộc sống lao động thanh \r\nbình, ấm no. ông đã thi vị hóa cuộc sống của người dân nơi đây:\r\nCành lành như đợi người lành,\r\nMua nhàn một khắc giá đành nghìn cân...\r\n546',
'LỊCH SỪ VIỆT NAM - TẬP 4\r\nChia nhau lớn bé thấp cao\r\nCá rồng mệt mất, kình ngao lộn tròng.\r\nOng với ve om sòm cụm liễu,\r\nBướm dập dìu lẽo đẽo chòm hoa.\r\nMây lien ba khóm một tòa,\r\nDưới trên biếc mát, gần xa xanh rè.\r\nCách bên khe tiếng ngư ra rả,\r\nGõ mạn thuyền ca họa đòi cung.\r\nDưới rùng mầy trẻ mục đồng,\r\nLưng trâu thổi địch, gió lằng theo khe.\r\nMạc Thiên Tứ còn coi đất Hà Tiên giàu đẹp của ông hơn cả \r\nchốn thị thành:\r\nTuy giang thôn nào khác Tràng an.\r\nĐâu no thỉ đó là an lạc,\r\nLựa phải chen chân chon thị thành.\r\nNgười thanh nhàn, vật thì long thạnh (thịnh),\r\nDân bang kỳ ỳ sánh kỳ tây.\r\nTư tưởng chủ đạo trong Hà Tiên thập cảnh là tư tưởng an nhàn. \r\nCái nhàn trong thơ Mạc Thiên Tứ khác với cái nhàn ưong thơ của \r\ncác nho sĩ Đàng Ngoài là nhàn ẩn sĩ, chán cảnh quan trường, còn cố \r\ncái nhàn của Mạc Thiên Tứ xuất phát từ cuộc sống lao động thanh \r\nbình, ấm no. ông đã thi vị hóa cuộc sống của người dân nơi đây:\r\nCành lành như đợi người lành,\r\nMua nhàn một khắc giá đành nghìn cân...\r\n546\r\nChối ai lợi chuốc danh mua,\r\nVui nghề chài lưới tôm cua tháng ngày.\r\nKé gió mưa, người thi non nước,\r\nHai phía đều chiếm được thu thanh.\r\nTùng kia mấy bức xanh tươi,\r\nĐòi thanh đòi nhã, một nơi một nhàn.\r\nMạc Thiên Tứ từ khi đem đất đai quy phụ, đặt dưới quyền quản \r\nlý của chúa Nguyễn, thì cả dòng họ ông cùng ưật tự xã hội miền Hà \r\nTiên dưới quyền khai thác của ông, đã ưở nên vững vàng và phồn \r\nthịnh. Điều đó càng tạo cho ông nhiều lạc quan ữong cuộc sống và',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
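Beyond scoring sentence pairs, the same embeddings can drive semantic search over a passage collection. The sketch below is illustrative only: the corpus passages and the query are placeholders, not part of this model's training or evaluation data.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("minhtuan7akp/snowflake-m-v2.0-vietnamese-finetune_v2")

# Placeholder corpus; in practice these would be passages from your own collection.
corpus = [
    "Văn Miếu được lập năm 1070 dưới thời Lý Thánh Tông.",
    "Giá gạo ở Thái Nguyên tăng mạnh trong năm 1950.",
]
query = "Văn Miếu được lập vào năm nào?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank the corpus passages for the query by cosine similarity.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits[0])  # a list of {"corpus_id": ..., "score": ...} dicts, best match first
```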
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `Snowflake/snowflake-arctic-embed-m-v2.0`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.442 |
| cosine_accuracy@3 | 0.6781 |
| cosine_accuracy@5 | 0.7402 |
| cosine_accuracy@10 | 0.8055 |
| cosine_precision@1 | 0.442 |
| cosine_precision@3 | 0.226 |
| cosine_precision@5 | 0.148 |
| cosine_precision@10 | 0.0805 |
| cosine_recall@1 | 0.442 |
| cosine_recall@3 | 0.6781 |
| cosine_recall@5 | 0.7402 |
| cosine_recall@10 | 0.8055 |
| **cosine_ndcg@10** | **0.6283** |
| cosine_mrr@10 | 0.5709 |
| cosine_map@100 | 0.5768 |
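The snippet below is a minimal, illustrative sketch of how metrics of this kind can be recomputed with the same evaluator; the queries, corpus, and relevance mapping are placeholders, not the actual evaluation split.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("minhtuan7akp/snowflake-m-v2.0-vietnamese-finetune_v2")

# Placeholder data: map ids to texts, and each query id to the set of relevant corpus ids.
queries = {"q1": "Văn Miếu được lập vào năm nào?"}
corpus = {
    "d1": "Năm 1070, Lí Thánh Tông cho lập Văn Miếu thờ Chu Công, Khổng Tử.",
    "d2": "Một đoạn văn không liên quan.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example-ir-eval")
results = evaluator(model)
print(results)  # includes cosine_accuracy@k, cosine_ndcg@10, cosine_map@100, ...
```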
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### csv
* Dataset: csv
* Size: 21,892 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative_1</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 26.95 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 373.94 tokens</li><li>max: 596 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 370.91 tokens</li><li>max: 543 tokens</li></ul> |
* Samples:
| anchor | positive | negative_1 |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Tính chất kiến trúc của đình làng triều Mạc được thể hiện qua những đặc điểm gì, như số gian, hình dạng, nội thất và cách bố trí không gian trong công trình?</code> | <code>Đình làng là công trình kiến trúc công cộng được dựng nên
<br>băng sự đóng góp của cải và công sức của cả cộng đồng làng xã.
<br>Ngoài chức năng là trụ sở hành chính của cả làng, ngôi đình còn là
<br>trung tâm sinh hoạt văn hóa làng xã, là nơi diễn ra các nghi lễ trọng
<br>đại trong dịp tế lễ thần Thành hoàng làng và tô chức hội hè hăng
<br>năm. Có thê nói, ngôi đình làng là nơi hội tụ sức mạnh của cả cộng
<br>đồng và là biểu trưng đặc sắc nhất của văn hóa làng xã.
<br>
<br>Trong các ngôi đình triều Mạc, Thân thành hoàng có lý lịch
<br>xuất thân khá phong phú. Tản Viên sơn thánh là vị thần có ảnh
<br>hưởng lớn ở xứ Đoài được thờ phụng ở đình Tây Đăng, Thanh Lũng
<br>và nhiều làng xã khác. Thần Cao Sơn, Quý Minh tương truyền là
<br>tướng tâm phúc của Hùng Vương được thờ ở đình làng Lỗ Hạnh.
<br>Dân làng Lỗ Hạnh còn thờ cả Phương Dung công chúa... Từ thế
<br>kỷ XYVI và các thế kỷ tiếp sau, Thần thành hoàng làng trở thành
<br>vị vua tỉnh thần ở các làng xã, tín ngưỡng thờ cúng Thân thành
<br>hoàng càng trở nên phong phú thê hiện qua lễ...</code> | <code>Nguồn tư liệu thành văn cũng như tư liệu điền dã đề cập đến
<br>kiến trúc đình làng triều Mạc không nhiều. Hiện nay chỉ mới xác
<br>định được 6 ngôi đình được xây dựng dưới triều Mạc và 8 ngôi
<br>đình khác được ghi chép trong văn bia, trong đó có 2 ngôi đình kiến
<br>trúc được xây dựng dưới triều Mạc còn khá nguyên vẹn, đó là đình
<br>Lỗ Hạnh và đình Tây Đăng.
<br>
<br>Đình Lỗ Hạnh ở làng Đông Lỗ (Hiệp Hòa, Bắc Giang) được
<br>xây dựng năm 1576. Niên đại tuyệt đối này được biết đến qua dòng
<br>lạc khoản ghi tại cốn bên phải và bên trái của đình. Dòng chữ khắc
<br>ở côn bên trái là "Sùng Khang thập nhất niên, lục nguyệt, thất
<br>nhật tạo” (làm vào ngày mùng 7 tháng 6 năm Sùng Khang thứ 11).
<br>Dòng chữ khắc ở cốn bên phải là "Tuế thứ Bính Tý mạnh xuân tân
<br>tạo" (làm mới vào tháng Giêng năm Bính Tý). Qua hai dòng lạc
<br>khoản có thể đoán định, đình Lỗ Hạnh được khởi công làm từ tháng
<br>Giêng đến tháng Sáu năm Bính Tý, niên hiệu Sùng Khang thứ 11,
<br>tức năm 1576.
<br>
<br>Đình Tây Đăng ở xã Tây Đăng (Ba Vì, Hà Nội). Dòng lạc khoả...</code> |
| <code>Nguyễn Khắc Nhu có vai trò gì trong khởi nghĩa toàn khu vực miền núi Bắc Kỳ của Việt Nam Quốc dân Đảng vào năm 1930?</code> | <code>bị nổ do bất cẩn. Do đó công việc bị phát hiện. Hai người phụ trách
<br>cơ quan chế bom là Đỗ Cương và Quản Trác trốn thoát. Nhiều binh
<br>lính và dân thường bị bắt. Công việc bạo động của Xứ Nhu không
<br>thành. Đúng lúc này Việt Nam Quốc dân Đảng vừa thành lập, cử
<br>người tới mời Xứ Nhu và Việt Nam Dân quốc gia nhập Việt Nam
<br>Quốc dân Đảng. Hầu hết các đồng chí của Xứ Nhu trở thành đảng
<br>viên của Việt Nam Quốc dân Đảng ở vùng Bắc Ninh, Bắc Giang.
<br>Do đó, Việt Nam Quốc dân Đảng mạnh lên về số lượng1. Cùng với
<br>việc phát triển đảng viên ở Bẳc Ninh, Bắc Giang, Việt Nam Quốc
<br>dân Đảng còn thiết lập nhiều cơ sở ở các tỉnh Thái Bình, Hải Dương,
<br>1. Nguyễn Khắc Nhu tức Xứ Nhu (1882-1930), người làng Song Khê, huyện
<br>Yên Dũng, tinh Bắc Giang. Với lòng yêu nuớc và ý chí chống Pháp,
<br>ông dự tính thành lập một tổ chức hoạt động công khai nhăm đào tạo
<br>tài năng cho đất nước lấy tên là "Hội Quốc dân dục tài”. Việc này
<br>không thành công, ông lại lập tổ chức bí mật nhăm bạo động lật đổ ách
<br>áp b...</code> | <code>Chương VIII. Các tổ chức cách mạng ra đời...
<br>Nguyễn Khắc Nhu
<br>Nguồn: Bảo tàng Cách mạng Việt Nam/BTCM - SF 6183 -7/Đ8-a
<br>Tổng bộ trên thực tế là do Kỳ bộ Bắc Kỳ kiêm nhiệm.
<br>Ở Trung Kỳ, Việt Nam Quốc dân Đảng hầu như không phát
<br>triển được lực lượng do sự lấn át của các tổ chức đảng đã có trước
<br>với đường lối rõ ràng hom là Việt Nam Cách mạng Thanh niên và
<br>Tân Việt Cách mạng Đảng. Quần chúng phần lớn đã tham gia vào
<br>hai tổ chức này các cơ sở đã được xây dựng vững chắc từ trước.
<br>Ngoài một chi bộ ở Thanh Hóa, Việt Nam Quốc dân Đảng chi có
<br>một ít đảng viên rải rác tại các tinh Đà Năng, Phan Rang, Phan Thiết....
<br>Trên thực tế, Việt Nam Quốc dân Đảng chi phát triển được cơ
<br>sở mạnh ở Bắc Kỳ và ít nhiều ở Nam Kỳ.
<br>509
<br>LỊCH SỬ VIỆT NAM - TẬP 8
<br>Ở Bẳc Kỳ, sau khi thành lập (25-12-1927), Việt Nam Quốc dân
<br>Đảng phát triển cơ sở khá nhanh.
<br>Lúc bấy giờ, tại hai tỉnh Bắc Giang và Bắc Ninh, Nguyễn Khắc
<br>Nhu tức Xứ Nhu dự định bạo động.
<br>Xứ Nhu là một nhà nho yêu nước nổi tiếng trong...</code> |
| <code>Giá gạo tháng 3-1950 ở Liên khu IV là bao nhiêu đồng/tạ và có chênh lệch gì so với giá gạo ở Liên khu III và Liên khu Việt Bắc?</code> | <code>ngày càng tăng nhanh, nhất là ở Việt Bắc. Giá gạo tăng mạnh
<br>nhất, giá thực phẩm cũng tăng dần theo giá gạo. Giá các mặt hàng
<br>kỹ nghệ tăng chậm hơn. Giá hàng ngoại hóa hầu như không tăng
<br>vỉ trong vùng Pháp chiếm đóng, hàng ngoại hóa tính bằng tiền
<br>Đông Dương không tăng, hom nữa nhân dân cũng ít tiêu thụ hàng
<br>ngoại hóa vì bị cấm.
<br>1. Viện Kinh tế học, Kinh tế Việt Nam từ Cách mạng Tháng Tám đến..., Sách
<br>đã dẫn, tr. 238.
<br>2. Chuơng trình và báo cáo của Bộ Kinh tế về tình hình hoạt động năm 1950.
<br>Trung tâm lưu trữ quốc gia in, phông Phủ Thủ tướng, Hồ sơ số 1914.
<br>488
<br>Chương VI. Việt Nam dân chủ cộng hòa xây dựng..
<br>Giá gạo trong những tháng đầu năm 1950 so với cuối năm 1949
<br>có thay đổi, Liên khu IV (Thanh Hóa) giá tăng lên 154%; Liên khu
<br>III (Hà Đông - Hà Nam) giá tăng lên 153%; Liên khu Việt Bắc
<br>(Thái Nguyên) giá tăng lên 800%.
<br>Giá gạo ở Thái Nguyên từ 1.625 đồng/tạ lên 13.000 đồng/tạ
<br>(tăng 800%); ờ Phú Thọ từ 2.650 đồng/tạ lên 7.500 đồng/tạ (tăng
<br>283%). Mặt khác, ...</code> | <code>mặt hàng tăng lên nhanh chống. Điển hình là giá gạo từ đầu năm
<br>1949 đến cuối năm 1952 tăng rất cao. Xem xét về giá gạo của các
<br>vùng miền sau đây đã chứng tỏ điều đó.
<br>1. Tóm tắt công tác ngoại thương 1945-1954. Phòng Lưu trữ Bộ Thương
<br>mại, phông Ngoại thương, hồ sơ số 11.
<br>314</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
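For reference, this loss maps directly onto the library API. A minimal sketch in which the explicit arguments simply mirror the parameters listed above (`trust_remote_code=True` is assumed to be needed for the custom Arctic backbone):

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v2.0", trust_remote_code=True)

# In-batch negatives: every other positive in the batch (plus negative_1) acts as a negative.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```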
### Evaluation Dataset
#### csv
* Dataset: csv
* Size: 21,892 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative_1</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 26.56 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 369.01 tokens</li><li>max: 559 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 377.56 tokens</li><li>max: 672 tokens</li></ul> |
* Samples:
| anchor | positive | negative_1 |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Nguyễn Hoàng đã thực hiện những hành động gì để dần dần tách khỏi sự ràng buộc của họ Trịnh sau khi trở lại Thuận Quảng vào năm 1600, và những hành động này đã ảnh hưởng như thế nào đến mối quan hệ giữa hai dòng họ?</code> | <code>thẳng đối với họ Nguyễn. Trịnh Tùng đã lấy danh nghĩa vua Lê sai
<br>sứ giả là Thiêm đô ngự sử Lê Nghĩa Trạch đem sắc vào phủ dụ
<br>Nguyễn Hoàng và vẫn cho ở lại trấn thủ, hằng năm nộp thuế như
<br>cũ. Cùng với sắc của vua Lê, Trịnh Tùng có gửi thư kèm theo
<br>Chương ĩ. Sự phân liệt Đàng Trong - Đàng Ngoài...
<br>1, Toàn thư. quyển 17, tập IV, Sđd, tr. 200.
<br>2, Đại Nam thực lục, Tiền biên, quyển 1, tập I, Sđd, tr. 34.
<br>3, Đại Nam thực lục, Tiển biên, quyển 1, tập I, Sđd, tr. 35.
<br>39
<br>LỊCH SỬ VIỆT NAM - TẬP 4
<br>"khuyên giữ việc thuế cống". Nguyễn Hoàng sai sứ giả đáp lễ tạ on
<br>vua Lê và gửi thư cho Trịnh Tùng hẹn kết nghĩa thông gia, đem con
<br>gái là Ngọc Tú gả cho Trịnh Tráng (con Trịnh Tùng) lấy danh
<br>nghĩa hôn nhân để duy trì mối quan hệ bề ngoài giao hảo giữa hai
<br>dòng họ vốn có sẵn một mối thù địch.
<br>- Chính sách cùa họ Nguyễn từ khi Nguyễn Hoàng trở lại
<br>Thuận Quảng
<br>Năm 1600, Nguyễn Hoàng ròi được khỏi đất Bẳc trở về Thuận
<br>Quảng bắt đầu thực hiện một chính sách cai trị mói, dần dần tác...</code> | <code>gồm: 2 phủ, 9 huyện, 3 châu; Quảng Nam gồm: 3 phủ, 9 huyện’,
<br>về bộ máy quan chức lúc ấy, những bộ tướng ở 3 ty cũng đều do
<br>họ Trịnh cắt đặt, như Mai cầu làm Tổng binh Thuận Hóa dưới triều
<br>Lê Thế Tông (1573-1599), Vũ Chân làm Hiến sát sứ Thuận Hóa
<br>dưới triều Lê Kính Tông (16(X)-1619)‘*... mục đích để kiềm chế và
<br>kiểm soát mọi hành động cùa họ Nguyễn.
<br>Nhìn chung trong thời kỳ đầu, từ khi vào trấn thủ (1558) đến
<br>khi nhậm chức Tổng trấn tướng quân, quản lĩnh cả vùng Thuận
<br>Quảng (1570), Nguyễn Hoàng bên ngoài lúc nào cũng tỏ ra thần
<br>Chương /. Sự phân liệt Đàng Trong - Đàng Ngoài...
<br>1. Đại Việt thông sừ, Sđd, tr. 316.
<br>2, 3. Đại Nam thực lục, Tiền biên, quyến 1, tập I, Sđd, tr. 29.
<br>4. Đại Nam thực lục, Tiền biên, quyển 2, tập I, Sđd, tr. 45.
<br>35
<br>LỊCH SỪ VIỆT NAM - TẬP 4
<br>phục họ Trịnh và đã cùng góp sức với họ Trịnh đánh triều Mạc. Đôi
<br>khi, Nguyễn Hoàng đã từ Thuận Quảng ra Thanh Hóa chầu vua Lê
<br>và thăm viếng Trịnh Kiểm. Năm 1569, Nguyễn Hoàng ra Tây
<br>Kinh, yết kiến ...</code> |
| <code>Báo cáo của Ủy ban Kháng chiến hành chính Hà Nội về hoạt động giáo dục bù nhìn và tình hình các giáo sư trường Chu Văn An có nội dung gì?</code> | <code>Tài liệu tham khảo
<br>21. Báo cáo sô' 2 BC/I ngày 12-11-1949 và Báo cáo sô' 463
<br>BC/DB ngày 25-12-1949 của Ty Công an H à Nội. Trung
<br>tâm Lưu trữ Quốc gia III, phông Phủ Thủ tướng, Hồ sơ
<br>SỐ921.
<br>28. Báo “Le song” ngày 11-2-1949. Trung tâm Lưu trữ Quốc
<br>gia III, phông Phủ Thủ tướng, Hồ sơ sô' 2002.
<br>29. Báo cáo của u ỷ ban Kháng chiến hành chính Hà Nội vê
<br>hoạt động giáo dục bù nhìn và tình hình các giáo sư
<br>trường Chu Văn An. Trung tâm Lưu trữ Quốc gia III,
<br>phông Phủ Thủ tướng, Hồ sơ số 979.
<br>30. Báo cáo của Tổng Giám đốc Việt N am Công an vụ sô'
<br>122/NCB3 ngày 1-4-1951. Trung tâm Lưu trữ Quốic gia
<br>III, phông Phủ Thủ tướng, Hồ sơ sô' 979.
<br>31. Báo cáo thành tích về cống tác công an trong 8 năm kháng
<br>chiến (1946-1954) của Bộ Công an. Trung tâm Lưu trữ
<br>Quốc gia III, phông Phủ Thủ tướng, Hồ sơ sô' 927.
<br>32. Báo cáo một năm kháng chiến (12-1946 đến 12-1947) của
<br>UBKCHC Khu 12. Trung tâm Lưu trữ Quốc gia III, phông
<br>Phủ Thủ tướng, Hồ sơ sô" 2000.
<br>33. Báo cáo thành tích quăn sự trong 8 n...</code> | <code>tâm Lưu trữ Quốc gia III, phông Phủ Thủ tướng, Hồ sơ
<br>SỐ921.
<br>28. Báo “Le song” ngày 11-2-1949. Trung tâm Lưu trữ Quốc
<br>gia III, phông Phủ Thủ tướng, Hồ sơ sô' 2002.
<br>29. Báo cáo của u ỷ ban Kháng chiến hành chính Hà Nội vê
<br>hoạt động giáo dục bù nhìn và tình hình các giáo sư
<br>trường Chu Văn An. Trung tâm Lưu trữ Quốc gia III,
<br>phông Phủ Thủ tướng, Hồ sơ số 979.
<br>30. Báo cáo của Tổng Giám đốc Việt N am Công an vụ sô'
<br>122/NCB3 ngày 1-4-1951. Trung tâm Lưu trữ Quốic gia
<br>III, phông Phủ Thủ tướng, Hồ sơ sô' 979.
<br>31. Báo cáo thành tích về cống tác công an trong 8 năm kháng
<br>chiến (1946-1954) của Bộ Công an. Trung tâm Lưu trữ
<br>Quốc gia III, phông Phủ Thủ tướng, Hồ sơ sô' 927.
<br>32. Báo cáo một năm kháng chiến (12-1946 đến 12-1947) của
<br>UBKCHC Khu 12. Trung tâm Lưu trữ Quốc gia III, phông
<br>Phủ Thủ tướng, Hồ sơ sô" 2000.
<br>33. Báo cáo thành tích quăn sự trong 8 năm kháng chiến (1946-
<br>1954). Trung tâm Lưu trữ Quốc gia m , phông Phủ Thủ
<br>tướng, Hỗ sơ số 1373.
<br>34. Báo cáo sô'206-CV ngày 28-2-1...</code> |
| <code>Đặc điểm dân số của nước ta ảnh hưởng đến các ngành dịch vụ như thế nào và đòi hỏi những ngành dịch vụ nào cần được ưu tiên phát triển trong quá trình đô thị hóa?</code> | <code>— Trong các thành phố lớn thường hình thành các trung tâm giao dịch,
<br>thương mại. Đó là nơi tập trung các ngân hàng, các văn phòng đại diện
<br>của các công ti, các siêu thị hay các tổ hợp thương mại, dịch vụ lớn...
<br>Ở các thành phố lớn trên thế giới, thường dễ nhận thấy các trung tâm
<br>thương mại này do sự tập trung các ngôi nhà cao tầng, chọc trời. Một
<br>thành phố có thể có trung tâm thương mại chính và một số trung tâm
<br>thương mại nhỏ hơn, kết quả của sự phát triển đô thị.
<br>
<br>— Ở nước ta, các thành phố, thị xã thường có khu hành chính (phân
<br>“đô”) và khu buôn bán, dịch vụ (phân “thị'). Ở Hà Nội, Thành phố
<br>Hồ Chí Minh các trung tâm giao dịch, thương mại của thành phố đang
<br>được hình thành rõ nét.
<br>
<br>CÂU HỎI VÀ BÀI TẬP
<br>
<br>174
<br>
<br>1. Cho biết đặc điểm dân số của nước ta (đông, tăng còn tương đối
<br>nhanh, mức sống đang nâng lên và đô thị hoá đang phát triển với
<br>tốc độ nhanh hơn) có ảnh hưởng đến các ngành dịch vụ như thế
<br>nào ? Các đặc điểm đó đòi hỏi những ngành dịch vụ nào cần được
<br>ưu tiê...</code> | <code>— Ở nước ta, các thành phố, thị xã thường có khu hành chính (phân
<br>“đô”) và khu buôn bán, dịch vụ (phân “thị'). Ở Hà Nội, Thành phố
<br>Hồ Chí Minh các trung tâm giao dịch, thương mại của thành phố đang
<br>được hình thành rõ nét.
<br>
<br>CÂU HỎI VÀ BÀI TẬP
<br>
<br>174
<br>
<br>1. Cho biết đặc điểm dân số của nước ta (đông, tăng còn tương đối
<br>nhanh, mức sống đang nâng lên và đô thị hoá đang phát triển với
<br>tốc độ nhanh hơn) có ảnh hưởng đến các ngành dịch vụ như thế
<br>nào ? Các đặc điểm đó đòi hỏi những ngành dịch vụ nào cần được
<br>ưu tiên phát triển ?
<br>
<br>2. Tại sao các thành phố lớn cũng đồng thời là các trung tâm dịch vụ
<br>lớn 2
<br>Bài 49
<br>
<br>VẠI TRÒ, ĐẶC ĐIỂM. ¬
<br>VÀ CÁC NHÂN TỐ ẢNH HƯỞNG ĐẾN PHÁT TRIỂN
<br>VÀ PHÂN BỐ NGÀNH GIAO THÔNG VẬN TẢI
<br>
<br>I- VAI TRÒ VÀ ĐẶC ĐIỂM CỦA NGÀNH GIAO THÔNG VẬN TẢI
<br>
<br>1. Vai trò
<br>
<br>2. Đặc điểm
<br>
<br>— Giao thông vận tải là một ngành dịch vụ, tham gia vào việc cung ứng
<br>vật tư ki thuật, nguyên liệu, năng lượng cho các cơ sở sản xuất và đưa
<br>sản phẩm đến thị trường tiêu thụ, giúp cho cá...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `learning_rate`: 3e-06
- `num_train_epochs`: 2
- `warmup_ratio`: 0.05
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
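For orientation, the hyperparameters above correspond roughly to the training setup sketched below. This is not the exact script used for this model: the csv file names are placeholders, and the columns are assumed to be `anchor`, `positive`, and `negative_1` as described earlier.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v2.0", trust_remote_code=True)

# Placeholder file names; the csv dataset used for this card is not published here.
dataset = load_dataset("csv", data_files={"train": "train.csv", "eval": "eval.csv"})

loss = losses.MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",
    num_train_epochs=2,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    learning_rate=3e-6,
    warmup_ratio=0.05,
    bf16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["eval"],
    loss=loss,
)
trainer.train()
```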
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | Snowflake/snowflake-arctic-embed-m-v2.0_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:---------------:|:------------------------------------------------------:|
| 0.0305 | 100 | 1.0054 | 0.9320 | 0.4713 |
| 0.0609 | 200 | 0.962 | 0.8866 | 0.5006 |
| 0.0914 | 300 | 0.9221 | 0.8346 | 0.5321 |
| 0.1218 | 400 | 0.8753 | 0.7926 | 0.5534 |
| 0.1523 | 500 | 0.821 | 0.7599 | 0.5743 |
| 0.1827 | 600 | 0.8555 | 0.7304 | 0.5811 |
| 0.2132 | 700 | 0.8364 | 0.7109 | 0.5950 |
| 0.2436 | 800 | 0.7915 | 0.6946 | 0.6048 |
| 0.2741 | 900 | 0.7493 | 0.6802 | 0.6099 |
| 0.3045 | 1000 | 0.7961 | 0.6659 | 0.6154 |
| 0.3350 | 1100 | 0.7567 | 0.6535 | 0.6202 |
| 0.3654 | 1200 | 0.7333 | 0.6392 | 0.6223 |
| 0.3959 | 1300 | 0.6931 | 0.6284 | 0.6163 |
| 0.4263 | 1400 | 0.6828 | 0.6217 | 0.6195 |
| 0.4568 | 1500 | 0.6773 | 0.6149 | 0.6247 |
| 0.4872 | 1600 | 0.6926 | 0.6113 | 0.6218 |
| 0.5177 | 1700 | 0.7162 | 0.6015 | 0.6276 |
| 0.5481 | 1800 | 0.6752 | 0.5939 | 0.6167 |
| 0.5786 | 1900 | 0.7003 | 0.5878 | 0.6242 |
| 0.6090 | 2000 | 0.6912 | 0.5850 | 0.6184 |
| 0.6395 | 2100 | 0.6403 | 0.5822 | 0.6265 |
| 0.6699 | 2200 | 0.6457 | 0.5801 | 0.6234 |
| 0.7004 | 2300 | 0.6029 | 0.5764 | 0.6209 |
| 0.7308 | 2400 | 0.6622 | 0.5731 | 0.6251 |
| 0.7613 | 2500 | 0.6592 | 0.5727 | 0.6369 |
| 0.7917 | 2600 | 0.6397 | 0.5710 | 0.6364 |
| 0.8222 | 2700 | 0.6741 | 0.5673 | 0.6294 |
| 0.8526 | 2800 | 0.6762 | 0.5657 | 0.6302 |
| 0.8831 | 2900 | 0.6131 | 0.5612 | 0.6195 |
| 0.9135 | 3000 | 0.6579 | 0.5578 | 0.6216 |
| 0.9440 | 3100 | 0.7297 | 0.5541 | 0.6248 |
| 0.9744 | 3200 | 0.63 | 0.5553 | 0.6293 |
| 1.0049 | 3300 | 0.6348 | 0.5536 | 0.6224 |
| 1.0353 | 3400 | 0.5869 | 0.5540 | 0.6231 |
| 1.0658 | 3500 | 0.5854 | 0.5526 | 0.6255 |
| 1.0962 | 3600 | 0.5662 | 0.5509 | 0.6184 |
| 1.1267 | 3700 | 0.5839 | 0.5499 | 0.6154 |
| 1.1571 | 3800 | 0.5368 | 0.5487 | 0.6147 |
| 1.1876 | 3900 | 0.608 | 0.5467 | 0.6073 |
| 1.2180 | 4000 | 0.5469 | 0.5459 | 0.6097 |
| 1.2485 | 4100 | 0.5642 | 0.5458 | 0.6159 |
| 1.2789 | 4200 | 0.5661 | 0.5447 | 0.6155 |
| 1.3094 | 4300 | 0.577 | 0.5427 | 0.6152 |
| 1.3398 | 4400 | 0.6429 | 0.5415 | 0.6192 |
| 1.3703 | 4500 | 0.5784 | 0.5414 | 0.6148 |
| 1.4007 | 4600 | 0.5615 | 0.5418 | 0.6204 |
| 1.4312 | 4700 | 0.5385 | 0.5413 | 0.6208 |
| 1.4616 | 4800 | 0.5904 | 0.5407 | 0.6214 |
| 1.4921 | 4900 | 0.5455 | 0.5415 | 0.6293 |
| 1.5225 | 5000 | 0.5814 | 0.5408 | 0.6265 |
| 1.5530 | 5100 | 0.5349 | 0.5403 | 0.6245 |
| 1.5834 | 5200 | 0.5816 | 0.5396 | 0.6254 |
| 1.6139 | 5300 | 0.5429 | 0.5399 | 0.6256 |
| 1.6443 | 5400 | 0.5678 | 0.5396 | 0.6252 |
| 1.6748 | 5500 | 0.5707 | 0.5386 | 0.6260 |
| 1.7052 | 5600 | 0.5245 | 0.5375 | 0.6213 |
| 1.7357 | 5700 | 0.5365 | 0.5371 | 0.6218 |
| 1.7661 | 5800 | 0.4856 | 0.5368 | 0.6247 |
| 1.7966 | 5900 | 0.5577 | 0.5366 | 0.6266 |
| 1.8270 | 6000 | 0.5392 | 0.5371 | 0.6283 |
| 1.8575 | 6100 | 0.5559 | 0.5362 | 0.6274 |
| 1.8879 | 6200 | 0.5149 | 0.5363 | 0.6277 |
| 1.9184 | 6300 | 0.5221 | 0.5358 | 0.6281 |
| 1.9488 | 6400 | 0.606 | 0.5357 | 0.6294 |
| 1.9793 | 6500 | 0.5251 | 0.5358 | 0.6283 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.5.1
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf | RichardErkhov | "2024-06-02T16:51:43Z" | 7 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2024-06-02T13:53:14Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Noromaid-7b-v0.1.1 - GGUF
- Model creator: https://huggingface.co/NeverSleep/
- Original model: https://huggingface.co/NeverSleep/Noromaid-7b-v0.1.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Noromaid-7b-v0.1.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q2_K.gguf) | Q2_K | 2.53GB |
| [Noromaid-7b-v0.1.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Noromaid-7b-v0.1.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Noromaid-7b-v0.1.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Noromaid-7b-v0.1.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Noromaid-7b-v0.1.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q3_K.gguf) | Q3_K | 3.28GB |
| [Noromaid-7b-v0.1.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Noromaid-7b-v0.1.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Noromaid-7b-v0.1.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Noromaid-7b-v0.1.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Noromaid-7b-v0.1.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Noromaid-7b-v0.1.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Noromaid-7b-v0.1.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q4_K.gguf) | Q4_K | 4.07GB |
| [Noromaid-7b-v0.1.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Noromaid-7b-v0.1.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Noromaid-7b-v0.1.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Noromaid-7b-v0.1.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Noromaid-7b-v0.1.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q5_K.gguf) | Q5_K | 4.78GB |
| [Noromaid-7b-v0.1.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Noromaid-7b-v0.1.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Noromaid-7b-v0.1.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q6_K.gguf) | Q6_K | 5.53GB |
| [Noromaid-7b-v0.1.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7b-v0.1.1-gguf/blob/main/Noromaid-7b-v0.1.1.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: cc-by-nc-4.0
---

---
# Disclaimer:
## This is a ***TEST*** version, don't expect everything to work!!!
You may use our custom **prompting format** (scroll down to download the config files!), or simple Alpaca. **(Choose whichever fits you best!)**
---
# This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)!
Tired of the same merges every time? Here it is, the Noromaid-7b-v0.1 model. Suitable for RP, ERP and general use.
[Recommended generation settings - No settings yet(Please suggest some over in the Community tab!)]
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains fp16 files of Noromaid-7b-v0.1.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-7b-v0.1)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-7b-v0.1-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking if we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC names are "ikaridev" and "undi".
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Custom format, or Alpaca
### Custom format:
UPDATED!! SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
OLD SillyTavern config files: [Context](https://files.catbox.moe/x85uy1.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
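As a rough illustration of running one of the GGUF quants with the Alpaca template above, here is a minimal sketch using `llama-cpp-python`. The file name, context size, and sampling settings are assumptions, not recommendations from the original authors (who have not published recommended settings yet):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Any quant from the table above works; Q4_K_M is an arbitrary choice here.
llm = Llama(model_path="Noromaid-7b-v0.1.1.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nIntroduce yourself in one short paragraph.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=200, temperature=0.7, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```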
## Training data used:
- [no_robots dataset](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) lets the model behave in a more human way and enhances the output.
- [Aesir Private RP dataset] New data from a never-before-used dataset that adds fresh material (no LimaRP spam, this is 100% new). Thanks to the [MinervaAI Team](https://huggingface.co/MinervaAI) and, in particular, [Gryphe](https://huggingface.co/Gryphe) for letting us use it!
This is a full finetune.
Trained for 1.5 epochs (1,500 steps) on the Mistral 0.1 7B base model.
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
|
RichardErkhov/unsloth_-_gemma-7b-8bits | RichardErkhov | "2024-05-15T23:38:40Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-15T23:30:48Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma-7b - bnb 8bits
- Model creator: https://huggingface.co/unsloth/
- Original model: https://huggingface.co/unsloth/gemma-7b/
Original model description:
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- gemma
- gemma-7b
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
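The linked notebooks are the authoritative recipes; purely as an unofficial sketch of what they do, loading this checkpoint with Unsloth typically looks like the following (all values are illustrative assumptions):

```python
from unsloth import FastLanguageModel

# Assumed settings; adjust max_seq_length and quantization to your hardware.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-7b",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters before finetuning (rank/targets here are common defaults, not official ones).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
# From here, wrap `model` and `tokenizer` in a trainer (e.g. TRL's SFTTrainer) as the notebooks do.
```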
|
MaziyarPanahi/shisa-7b-v1-Mistral-7B-Instruct-v0.2-slerp | MaziyarPanahi | "2024-01-11T23:22:47Z" | 24 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"augmxnt/shisa-7b-v1",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-11T23:17:48Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- augmxnt/shisa-7b-v1
---
# shisa-7b-v1-Mistral-7B-Instruct-v0.2-slerp
shisa-7b-v1-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [augmxnt/shisa-7b-v1](https://huggingface.co/augmxnt/shisa-7b-v1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: augmxnt/shisa-7b-v1
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/shisa-7b-v1-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
unity/sentis-neural-cellular-automata | unity | "2024-05-07T13:34:58Z" | 71 | 2 | unity-sentis | [
"unity-sentis",
"onnx",
"reinforcement-learning",
"license:mit",
"region:us"
] | reinforcement-learning | "2024-01-12T02:51:48Z" | ---
license: mit
library_name: unity-sentis
pipeline_tag: reinforcement-learning
---
# Neural Cellular Automata in Unity Sentis Format (Version 1.3.0-pre.3*)
*Version 1.3.0 Sentis files are not compatible with Sentis 1.4.0 and need to be recreated/downloaded
This is the [Neural Cellular Automata](https://distill.pub/2020/growing-ca/) model in Sentis Format to run in Unity 2023.
There are three trained files which create a turtle 🐢, lizard 🦎 and a 💩.
## How to use
* Create a new scene in Unity 2023
* Install `com.unity.sentis` version `1.3.0-pre.3` from the package manager
* Add the `RunAutomata.cs` to the Main Camera
* In `Assets/StreamingAssets` put the `lizard.sentis`, `turtle.sentis` and `poop.sentis` files.
* Create a quad or image and attach an unlit transparent material to it.
* Drag the same material into the `outputMaterial` field
* You can select the different models in the inspector window |
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e8_s55555_v4_l55_v20_extra | KingKazma | "2023-09-14T19:45:01Z" | 3 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-14T19:44:59Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
zelk12/MT5-Gen5-MA-gemma-2-MTg2MT2-9B | zelk12 | "2024-12-29T13:04:05Z" | 5 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT-Gen2-gemma-2-9B",
"base_model:merge:zelk12/MT-Gen2-gemma-2-9B",
"base_model:zelk12/MT2-gemma-2-9B",
"base_model:merge:zelk12/MT2-gemma-2-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-29T12:57:43Z" | ---
base_model:
- zelk12/MT-Gen2-gemma-2-9B
- zelk12/MT2-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT-Gen2-gemma-2-9B](https://huggingface.co/zelk12/MT-Gen2-gemma-2-9B)
* [zelk12/MT2-gemma-2-9B](https://huggingface.co/zelk12/MT2-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT-Gen2-gemma-2-9B
- model: zelk12/MT2-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT-Gen2-gemma-2-9B
dtype: bfloat16
parameters:
t: 0.25
```
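A minimal usage sketch with 🤗 transformers (the model id comes from this card; the dtype, device placement and chat formatting below are assumptions and require a transformers release with Gemma 2 support):
```python
# Minimal sketch: chatting with the merged model via the text-generation pipeline.
import torch
from transformers import pipeline
pipe = pipeline(
    "text-generation",
    model="zelk12/MT5-Gen5-MA-gemma-2-MTg2MT2-9B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Summarize what a SLERP merge does."}]
print(pipe(messages, max_new_tokens=64)[0]["generated_text"])
```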
|
YanweiLi/llama-vid-13b-full-224-video-fps-1 | YanweiLi | "2023-12-03T06:14:18Z" | 29 | 2 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"vision-language model",
"llama",
"video understanding",
"arxiv:2311.17043",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-29T06:13:00Z" | ---
tags:
- vision-language model
- llama
- video understanding
---
# LLaMA-VID Model Card
<a href='https://llama-vid.github.io/'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
<a href='https://arxiv.org/abs/2311.17043'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>
## Model details
LLaMA-VID empowers existing frameworks to support hour-long videos and pushes their upper limit with an extra context token.
**Model type:**
LLaMA-VID is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
LLaMA-VID empowers existing frameworks to support hour-long videos and pushes their upper limit with an extra context token. We built this repo based on LLaVA.
**Model date:**
llama-vid-13b-full-224-video-fps-1 was trained on 11/2023.
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/dvlab-research/LLaMA-VID/issues
## Intended use
**Primary intended uses:**
The primary use of LLaMA-VID is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training data
This model is trained based on image data from LLaVA-1.5 dataset, and video data from WebVid and ActivityNet datasets, including
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
- 232K video-caption pairs sampled from the WebVid 2.5M dataset.
- 98K videos from ActivityNet with QA pairs from Video-ChatGPT. |
isspek/roberta-base_monkeypox_top3_3_2e-5_16 | isspek | "2025-02-24T22:16:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-24T22:16:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sajjad55/wsdbanglat5_2e4_15 | sajjad55 | "2024-06-09T09:37:15Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:csebuetnlp/banglat5",
"base_model:finetune:csebuetnlp/banglat5",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-09T05:31:14Z" | ---
base_model: csebuetnlp/banglat5
tags:
- generated_from_trainer
model-index:
- name: wsdbanglat5_2e4_15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wsdbanglat5_2e4_15
This model is a fine-tuned version of [csebuetnlp/banglat5](https://huggingface.co/csebuetnlp/banglat5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
## Model description
More information needed
## Intended uses & limitations
More information needed
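For reference, a minimal inference sketch (the input formatting below is an assumption — the WSD prompt format used during training is not documented here):
```python
# Minimal sketch: running the fine-tuned BanglaT5 checkpoint for text-to-text generation.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_id = "sajjad55/wsdbanglat5_2e4_15"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
# Replace with an input formatted the way the training data was formatted.
inputs = tokenizer("<Bangla input sentence>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```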
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0234 | 1.0 | 1481 | 0.0182 |
| 0.019 | 2.0 | 2962 | 0.0126 |
| 0.0893 | 3.0 | 4443 | 0.0043 |
| 0.0363 | 4.0 | 5924 | 0.0880 |
| 0.002 | 5.0 | 7405 | 0.0015 |
| 0.0017 | 6.0 | 8886 | 0.0011 |
| 0.0011 | 7.0 | 10367 | 0.0011 |
| 0.0007 | 8.0 | 11848 | 0.0009 |
| 0.0011 | 9.0 | 13329 | 0.0009 |
| 0.0006 | 10.0 | 14810 | 0.0008 |
| 0.0003 | 11.0 | 16291 | 0.0010 |
| 0.0003 | 12.0 | 17772 | 0.0009 |
| 0.0002 | 13.0 | 19253 | 0.0009 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mgoin/Nemotron-4-340B-Instruct-hf | mgoin | "2024-08-08T14:17:46Z" | 60 | 4 | transformers | [
"transformers",
"safetensors",
"nemotron",
"text-generation",
"vllm",
"conversational",
"base_model:nvidia/Nemotron-4-340B-Instruct",
"base_model:finetune:nvidia/Nemotron-4-340B-Instruct",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-07-25T02:19:16Z" | ---
license: other
license_name: nvidia-open-model-license
license_link: >-
https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
inference: false
fine-tuning: false
tags:
- vllm
base_model: nvidia/Nemotron-4-340B-Instruct
---
## Nemotron-4-340B-Instruct-hf
Converted checkpoint of [nvidia/Nemotron-4-340B-Instruct](https://huggingface.co/nvidia/Nemotron-4-340B-Instruct). Specifically it was produced from the [v1.0 .nemo checkpoint on NGC](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/nemotron-4-340b-instruct/files).
You can deploy this model with `vllm>=0.5.4` ([PR#6611](https://github.com/vllm-project/vllm/pull/6611)):
```
vllm serve mgoin/Nemotron-4-340B-Instruct-hf --tensor-parallel-size 16
```
### Evaluations
All the below evaluations were run with the [FP8 checkpoint](https://huggingface.co/mgoin/Nemotron-4-340B-Instruct-hf-FP8) using `lm-eval==0.4.3` on 8xA100 GPUs.
```
lm_eval --model vllm --model_args pretrained=/home/mgoin/code/Nemotron-4-340B-Instruct-hf-FP8,tensor_parallel_size=8,distributed_executor_backend="ray",gpu_memory_utilization=0.6,enforce_eager=True --tasks mmlu --num_fewshot 0 --batch_size 4
vllm (pretrained=/home/mgoin/code/Nemotron-4-340B-Instruct-hf-FP8,tensor_parallel_size=8,distributed_executor_backend=ray,gpu_memory_utilization=0.6,enforce_eager=True), gen_kwargs: (None), limit: None, num_fewshot: 0, batch_size: 4
| Tasks |Version|Filter|n-shot|Metric| |Value | |Stderr|
|---------------------------------------|-------|------|-----:|------|---|-----:|---|-----:|
|mmlu |N/A |none | 0|acc |↑ |0.7905|± |0.0033|
| - abstract_algebra | 0|none | 0|acc |↑ |0.5100|± |0.0502|
| - anatomy | 0|none | 0|acc |↑ |0.8074|± |0.0341|
| - astronomy | 0|none | 0|acc |↑ |0.8816|± |0.0263|
| - business_ethics | 0|none | 0|acc |↑ |0.7300|± |0.0446|
| - clinical_knowledge | 0|none | 0|acc |↑ |0.8453|± |0.0223|
| - college_biology | 0|none | 0|acc |↑ |0.9236|± |0.0222|
| - college_chemistry | 0|none | 0|acc |↑ |0.5300|± |0.0502|
| - college_computer_science | 0|none | 0|acc |↑ |0.7400|± |0.0441|
| - college_mathematics | 0|none | 0|acc |↑ |0.4500|± |0.0500|
| - college_medicine | 0|none | 0|acc |↑ |0.7630|± |0.0324|
| - college_physics | 0|none | 0|acc |↑ |0.5490|± |0.0495|
| - computer_security | 0|none | 0|acc |↑ |0.8300|± |0.0378|
| - conceptual_physics | 0|none | 0|acc |↑ |0.8043|± |0.0259|
| - econometrics | 0|none | 0|acc |↑ |0.7105|± |0.0427|
| - electrical_engineering | 0|none | 0|acc |↑ |0.7448|± |0.0363|
| - elementary_mathematics | 0|none | 0|acc |↑ |0.6667|± |0.0243|
| - formal_logic | 0|none | 0|acc |↑ |0.5952|± |0.0439|
| - global_facts | 0|none | 0|acc |↑ |0.4900|± |0.0502|
| - high_school_biology | 0|none | 0|acc |↑ |0.9097|± |0.0163|
| - high_school_chemistry | 0|none | 0|acc |↑ |0.7143|± |0.0318|
| - high_school_computer_science | 0|none | 0|acc |↑ |0.9100|± |0.0288|
| - high_school_european_history | 0|none | 0|acc |↑ |0.8788|± |0.0255|
| - high_school_geography | 0|none | 0|acc |↑ |0.9242|± |0.0189|
| - high_school_government_and_politics| 0|none | 0|acc |↑ |0.9845|± |0.0089|
| - high_school_macroeconomics | 0|none | 0|acc |↑ |0.8333|± |0.0189|
| - high_school_mathematics | 0|none | 0|acc |↑ |0.4630|± |0.0304|
| - high_school_microeconomics | 0|none | 0|acc |↑ |0.8824|± |0.0209|
| - high_school_physics | 0|none | 0|acc |↑ |0.6159|± |0.0397|
| - high_school_psychology | 0|none | 0|acc |↑ |0.9394|± |0.0102|
| - high_school_statistics | 0|none | 0|acc |↑ |0.7639|± |0.0290|
| - high_school_us_history | 0|none | 0|acc |↑ |0.9412|± |0.0165|
| - high_school_world_history | 0|none | 0|acc |↑ |0.9409|± |0.0153|
| - human_aging | 0|none | 0|acc |↑ |0.8072|± |0.0265|
| - human_sexuality | 0|none | 0|acc |↑ |0.8855|± |0.0279|
| - humanities |N/A |none | 0|acc |↑ |0.7594|± |0.0060|
| - international_law | 0|none | 0|acc |↑ |0.9091|± |0.0262|
| - jurisprudence | 0|none | 0|acc |↑ |0.8704|± |0.0325|
| - logical_fallacies | 0|none | 0|acc |↑ |0.8528|± |0.0278|
| - machine_learning | 0|none | 0|acc |↑ |0.6786|± |0.0443|
| - management | 0|none | 0|acc |↑ |0.8641|± |0.0339|
| - marketing | 0|none | 0|acc |↑ |0.9359|± |0.0160|
| - medical_genetics | 0|none | 0|acc |↑ |0.8400|± |0.0368|
| - miscellaneous | 0|none | 0|acc |↑ |0.9221|± |0.0096|
| - moral_disputes | 0|none | 0|acc |↑ |0.8382|± |0.0198|
| - moral_scenarios | 0|none | 0|acc |↑ |0.6168|± |0.0163|
| - nutrition | 0|none | 0|acc |↑ |0.8791|± |0.0187|
| - other |N/A |none | 0|acc |↑ |0.8214|± |0.0065|
| - philosophy | 0|none | 0|acc |↑ |0.8521|± |0.0202|
| - prehistory | 0|none | 0|acc |↑ |0.8796|± |0.0181|
| - professional_accounting | 0|none | 0|acc |↑ |0.6383|± |0.0287|
| - professional_law | 0|none | 0|acc |↑ |0.6838|± |0.0119|
| - professional_medicine | 0|none | 0|acc |↑ |0.8824|± |0.0196|
| - professional_psychology | 0|none | 0|acc |↑ |0.8611|± |0.0140|
| - public_relations | 0|none | 0|acc |↑ |0.8000|± |0.0383|
| - security_studies | 0|none | 0|acc |↑ |0.8204|± |0.0246|
| - social_sciences |N/A |none | 0|acc |↑ |0.8811|± |0.0057|
| - sociology | 0|none | 0|acc |↑ |0.9055|± |0.0207|
| - stem |N/A |none | 0|acc |↑ |0.7180|± |0.0076|
| - us_foreign_policy | 0|none | 0|acc |↑ |0.9600|± |0.0197|
| - virology | 0|none | 0|acc |↑ |0.5482|± |0.0387|
| - world_religions | 0|none | 0|acc |↑ |0.9006|± |0.0229|
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|------------------|-------|------|-----:|------|---|-----:|---|-----:|
|mmlu |N/A |none | 0|acc |↑ |0.7905|± |0.0033|
| - humanities |N/A |none | 0|acc |↑ |0.7594|± |0.0060|
| - other |N/A |none | 0|acc |↑ |0.8214|± |0.0065|
| - social_sciences|N/A |none | 0|acc |↑ |0.8811|± |0.0057|
| - stem |N/A |none | 0|acc |↑ |0.7180|± |0.0076|
```
The [original paper](https://d1qx31qr3h6wln.cloudfront.net/publications/Nemotron_4_340B_8T_0.pdf) evals for reference:

|
damgomz/ft_4_17e6_x2 | damgomz | "2024-07-13T11:56:42Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-20T15:48:28Z" | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 94193.79744768144 |
| Emissions (Co2eq in kg) | 0.0569980947595841 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 1.112007470722169 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0981174684422712 |
| Consumed energy (kWh) | 1.2101249391644375 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.18132306008678678 |
| Emissions (Co2eq in kg) | 0.036892570667008566 |
## Note
12 July 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/fp_bs16_lr1e4_x2 |
| model_name | ft_4_17e6_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.7e-05 |
| batch_size | 4 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.716950 | 0.494992 |
| 1 | 0.294474 | 0.210283 | 0.936422 |
| 2 | 0.169180 | 0.209291 | 0.923463 |
| 3 | 0.109349 | 0.239753 | 0.928542 |
| 4 | 0.067060 | 0.297160 | 0.910511 |
| 5 | 0.049154 | 0.309301 | 0.903968 |
| 6 | 0.053286 | 0.337630 | 0.920875 |
|
jester20/saiga_llama3_sum_8bit_qdora_v02 | jester20 | "2024-05-31T11:20:54Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:IlyaGusev/saiga_llama3_8b",
"base_model:adapter:IlyaGusev/saiga_llama3_8b",
"license:other",
"region:us"
] | null | "2024-05-31T11:20:48Z" | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: IlyaGusev/saiga_llama3_8b
model-index:
- name: qdora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qdora
This model is a fine-tuned version of [IlyaGusev/saiga_llama3_8b](https://huggingface.co/IlyaGusev/saiga_llama3_8b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5287
## Model description
More information needed
## Intended uses & limitations
More information needed
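For reference, a minimal sketch of attaching this adapter to the base model with PEFT (repo ids are from this card; dtype and device placement are assumptions):
```python
# Minimal sketch: loading the QDoRA adapter on top of IlyaGusev/saiga_llama3_8b.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_id = "IlyaGusev/saiga_llama3_8b"
adapter_id = "jester20/saiga_llama3_sum_8bit_qdora_v02"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```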
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.794 | 0.1389 | 25 | 1.6986 |
| 1.6478 | 0.2778 | 50 | 1.6411 |
| 1.5549 | 0.4167 | 75 | 1.5937 |
| 1.4962 | 0.5556 | 100 | 1.5652 |
| 1.4841 | 0.6944 | 125 | 1.5430 |
| 1.563 | 0.8333 | 150 | 1.5320 |
| 1.51 | 0.9722 | 175 | 1.5287 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
niltheory/Reinforce-Pixelcopter-PLE-v0 | niltheory | "2023-11-28T05:03:22Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-28T01:40:23Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 36.80 +/- 28.71
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Grohv/onihorns-lora | Grohv | "2024-11-08T23:01:34Z" | 47 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-11-01T22:03:43Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- text: A high detailed photo of a woman with horns in the style of onihorns_lora
output:
url: images/example_uhupo7p3j.png
- text: A high detailed photo of a woman with horns in the style of onihorns_lora
output:
url: images/example_k1j63ld1u.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: onihorns_lora
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# onihorns_lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `onihorns_lora` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
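A minimal diffusers sketch, mirroring the usual Flux LoRA workflow (the weight filename below is an assumption — check the repo's file list):
```python
# Minimal sketch: applying this Flux LoRA with diffusers.
import torch
from diffusers import AutoPipelineForText2Image
pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# The weight filename is an assumption; adjust it to the actual .safetensors file in this repo.
pipeline.load_lora_weights("Grohv/onihorns-lora", weight_name="onihorns_lora.safetensors")
image = pipeline("A high detailed photo of a woman with horns in the style of onihorns_lora").images[0]
```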
|
Bhanu9Prakash/ppo-LunarLander-v2 | Bhanu9Prakash | "2023-05-08T17:43:24Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-08T17:43:07Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.27 +/- 23.03
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Download the saved agent from the Hub; the filename below is an assumption.
checkpoint = load_from_hub("Bhanu9Prakash/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Stern5497/sbert-legal-swiss-roberta-base | Stern5497 | "2023-05-22T12:49:18Z" | 14 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-05-22T12:34:24Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Stern5497/sbert-legal-swiss-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Stern5497/sbert-legal-swiss-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Stern5497/sbert-legal-swiss-roberta-base')
model = AutoModel.from_pretrained('Stern5497/sbert-legal-swiss-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Stern5497/sbert-legal-swiss-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 8301 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 5000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 830,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
fgaim/tiroberta-geezswitch | fgaim | "2022-05-13T18:27:38Z" | 4 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"geezlab",
"ti",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-04-30T22:41:38Z" | ---
language: ti
widget:
- text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"
- text: "ወአመ ሳብዕት ዕለት ቦዘወፅአ እምውስተ ሕዝብ ከመ ያስተጋብእ ወኢረከበ።"
- text: "እሊ እግል ኖሱ አሳስ ተጠውር ወዐቦት ክምሰልቱ ሸክ ኢወትውዴ።"
- text: "ኣኩኽር ፡ ልሽክክ ናው ጀረቢነዅስክ ክሙኑኽር ክራውል ሕበርሲድኖ ገረሰነኵ።"
- text: "ነገ ለግማሽ ፍፃሜ ያለፉትን አሳውቀንና አስመርጠናችሁ እንሸልማለን።"
tags:
- geezlab
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: geezswitch-tiroberta
results: []
license: cc-by-4.0
---
# TiRoBERTa-GeezSwitch
This model is a fine-tuned version of [fgaim/tiroberta-base](https://huggingface.co/fgaim/tiroberta-base) on the [GeezSwitch](https://github.com/fgaim/geezswitch-data) dataset.
It achieves the following results on the test set:
- F1: 0.9948
- Recall: 0.9948
- Precision: 0.9948
- Accuracy: 0.9948
- Loss: 0.0222
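A minimal usage sketch with the 🤗 pipeline API (the example sentence is taken from the widget examples above):
```python
# Minimal sketch: language identification with this checkpoint.
from transformers import pipeline
classifier = pipeline("text-classification", model="fgaim/tiroberta-geezswitch")
print(classifier("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"))
```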
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- seed: 42
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
### Citation
If you use this model or the GeezSwitch model in your research, please cite as follows:
```bibtex
@inproceedings{fgaim2022geezswitch,
title={GeezSwitch: Language Identification in Typologically Related Low-resourced East African Languages},
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
booktitle={Proceedings of the 13th Language Resources and Evaluation Conference},
year={2022}
}
```
|
kholiavko/streaming-17-03 | kholiavko | "2025-03-18T09:23:03Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-18T09:22:31Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kholiavko
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yuri-gagarin/togg | yuri-gagarin | "2025-01-30T11:26:47Z" | 13 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-30T11:09:18Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOGG
---
# Togg
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOGG` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('yuri-gagarin/togg', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
kk-aivio/9a4c5ad1-df47-4bc3-aae0-faa188f644a8 | kk-aivio | "2025-01-26T05:44:15Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"olmo",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-olmo-hf",
"base_model:adapter:katuni4ka/tiny-random-olmo-hf",
"region:us"
] | null | "2025-01-26T05:43:21Z" | ---
library_name: peft
base_model: katuni4ka/tiny-random-olmo-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9a4c5ad1-df47-4bc3-aae0-faa188f644a8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-olmo-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 558c519d44160381_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/558c519d44160381_train_data.json
type:
field_instruction: question
field_output: answers
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/9a4c5ad1-df47-4bc3-aae0-faa188f644a8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/558c519d44160381_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e2176481-25d5-4e19-9520-315ccb160b4d
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: e2176481-25d5-4e19-9520-315ccb160b4d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9a4c5ad1-df47-4bc3-aae0-faa188f644a8
This model is a fine-tuned version of [katuni4ka/tiny-random-olmo-hf](https://huggingface.co/katuni4ka/tiny-random-olmo-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.8372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.8278 | 0.0001 | 1 | 10.8396 |
| 10.8379 | 0.0004 | 3 | 10.8395 |
| 10.8363 | 0.0007 | 6 | 10.8386 |
| 10.8354 | 0.0011 | 9 | 10.8372 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jcjo/peb-sdxl-lora | jcjo | "2024-04-11T10:29:23Z" | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-04-11T03:33:54Z" | ---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a <peb0><peb1> woman'
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a <peb0><peb1> woman
license: openrail++
---
# SDXL LoRA DreamBooth - jcjo/peb-sdxl-lora
<Gallery />
## Model description
### These are jcjo/peb-sdxl-lora LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`/content/drive/MyDrive/Colab_Notebooks/DA Matching Algorithm/AI/DanceAI/ckpt/PEB.safetensors` here 💾](/jcjo/peb-sdxl-lora/blob/main//content/drive/MyDrive/Colab_Notebooks/DA Matching Algorithm/AI/DanceAI/ckpt/PEB.safetensors)**.
    - Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:/content/drive/MyDrive/Colab_Notebooks/DA Matching Algorithm/AI/DanceAI/ckpt/PEB:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`/content/drive/MyDrive/Colab_Notebooks/DA Matching Algorithm/AI/DanceAI/ckpt/PEB_emb.safetensors` here 💾](/jcjo/peb-sdxl-lora/blob/main//content/drive/MyDrive/Colab_Notebooks/DA Matching Algorithm/AI/DanceAI/ckpt/PEB_emb.safetensors)**.
    - Place it in your `embeddings` folder
- Use it by adding `/content/drive/MyDrive/Colab_Notebooks/DA Matching Algorithm/AI/DanceAI/ckpt/PEB_emb` to your prompt. For example, `a <peb0><peb1> woman`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jcjo/peb-sdxl-lora', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='jcjo/peb-sdxl-lora', filename='/content/drive/MyDrive/Colab_Notebooks/DA Matching Algorithm/AI/DanceAI/ckpt/PEB_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=[], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=[], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('a photo of <peb0><peb1> woman, black suit, walking on a sidewalk, looking at viewer, smiling, waving, (8k, RAW photo, best quality, masterpiece:1.2), (realistic, photo-realistic:1.37), professional lighting, photon mapping, radiosity, physically-based rendering, octane render.').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens `<peb0>` and `<peb1>`, e.g. `a <peb0><peb1> woman`.
## Details
All [Files & versions](/jcjo/peb-sdxl-lora/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
vaska35/yurich01 | vaska35 | "2025-02-27T08:43:31Z" | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | "2025-02-27T08:42:58Z" | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HarshV9/finetuning-sentiment-model-8-labels | HarshV9 | "2023-06-23T12:21:51Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-06-22T16:07:12Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-8-labels
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-8-labels
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.1854
- eval_accuracy: 0.5598
- eval_f1: 0.5598
- eval_runtime: 190.081
- eval_samples_per_second: 198.205
- eval_steps_per_second: 6.197
- epoch: 2.88
- step: 13550
## Model description
More information needed
## Intended uses & limitations
More information needed
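For reference, a minimal inference sketch (the eight label names are not documented in this card, so the pipeline will return raw `LABEL_i` ids):
```python
# Minimal sketch: 8-class sentiment classification with this checkpoint.
from transformers import pipeline
classifier = pipeline("text-classification", model="HarshV9/finetuning-sentiment-model-8-labels")
print(classifier("I really enjoyed this movie!"))
```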
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu116
- Datasets 2.13.1
- Tokenizers 0.13.3
|