| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-23 18:27:52 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 492 distinct values |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string (categorical) | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-23 18:25:26 |
| card | string | length 11 – 1.01M |
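These columns mirror the per-model metadata the Hugging Face Hub API exposes. As a rough illustration (not part of the dataset itself), metadata of the same shape can be pulled with `huggingface_hub`; the attribute names below follow the `ModelInfo` object and can vary slightly between library versions:

```python
from huggingface_hub import HfApi

api = HfApi()
# Fetch a few models, most-downloaded first; each ModelInfo carries
# roughly the same fields as the columns above.
for m in api.list_models(sort="downloads", direction=-1, limit=3):
    print(m.id, m.author, m.downloads, m.likes,
          m.library_name, m.pipeline_tag, m.created_at, m.last_modified)
```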

modelId: Best000/679a47bb-4a1a-4fdd-a604-9595de9aea29
author: Best000
last_modified: 2025-01-31T11:13:44Z
downloads: 9
likes: 0
library_name: peft
tags:
[ "peft", "safetensors", "gpt_neo", "axolotl", "generated_from_trainer", "base_model:EleutherAI/gpt-neo-125m", "base_model:adapter:EleutherAI/gpt-neo-125m", "license:mit", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T11:12:36Z
card:
--- library_name: peft license: mit base_model: EleutherAI/gpt-neo-125m tags: - axolotl - generated_from_trainer model-index: - name: 679a47bb-4a1a-4fdd-a604-9595de9aea29 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/gpt-neo-125m bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - d42d05d70d1177b5_train_data.json ds_type: json format: custom path: /workspace/input_data/d42d05d70d1177b5_train_data.json type: field_instruction: problem field_output: generated_solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: Best000/679a47bb-4a1a-4fdd-a604-9595de9aea29 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/d42d05d70d1177b5_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 71fc9dbd-6b0f-4a30-b082-173afe1f7f81 wandb_project: Birthday-SN56-15-Gradients-On-Demand wandb_run: your_name wandb_runid: 71fc9dbd-6b0f-4a30-b082-173afe1f7f81 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 679a47bb-4a1a-4fdd-a604-9595de9aea29 This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2563 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0004 | 1 | 1.3890 | | 5.1623 | 0.0049 | 13 | 1.3465 | | 5.2261 | 0.0098 | 26 | 1.2803 | | 5.1446 | 0.0147 | 39 | 1.2563 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
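Since this repo stores only a LoRA adapter for EleutherAI/gpt-neo-125m, inference requires loading the base model underneath it. A minimal sketch with `peft`, assuming the adapter repo loads as published:

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# AutoPeftModelForCausalLM reads the adapter config, pulls the
# EleutherAI/gpt-neo-125m base it points at, and attaches the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Best000/679a47bb-4a1a-4fdd-a604-9595de9aea29"
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")

inputs = tokenizer("Solve: 12 * 7 =", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```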

modelId: nathanialhunt/ee9aa00c-d243-4594-bbf2-d29b3bfe8f2b
author: nathanialhunt
last_modified: 2025-01-31T11:13:36Z
downloads: 9
likes: 0
library_name: peft
tags:
[ "peft", "safetensors", "gpt_neo", "axolotl", "generated_from_trainer", "base_model:EleutherAI/gpt-neo-125m", "base_model:adapter:EleutherAI/gpt-neo-125m", "license:mit", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T11:12:34Z
card:
--- library_name: peft license: mit base_model: EleutherAI/gpt-neo-125m tags: - axolotl - generated_from_trainer model-index: - name: ee9aa00c-d243-4594-bbf2-d29b3bfe8f2b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/gpt-neo-125m bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - d42d05d70d1177b5_train_data.json ds_type: json format: custom path: /workspace/input_data/d42d05d70d1177b5_train_data.json type: field_instruction: problem field_output: generated_solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: nathanialhunt/ee9aa00c-d243-4594-bbf2-d29b3bfe8f2b hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/d42d05d70d1177b5_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 71fc9dbd-6b0f-4a30-b082-173afe1f7f81 wandb_project: Birthday-SN56-24-Gradients-On-Demand wandb_run: your_name wandb_runid: 71fc9dbd-6b0f-4a30-b082-173afe1f7f81 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # ee9aa00c-d243-4594-bbf2-d29b3bfe8f2b This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2540 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0004 | 1 | 1.3890 | | 5.159 | 0.0049 | 13 | 1.3432 | | 5.2167 | 0.0098 | 26 | 1.2783 | | 5.1372 | 0.0147 | 39 | 1.2540 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1

modelId: cgaege/model
author: cgaege
last_modified: 2025-01-31T11:13:15Z
downloads: 24
likes: 0
library_name: transformers
tags:
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
pipeline_tag: null
createdAt: 2025-01-31T11:12:17Z
card:
--- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** cgaege - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
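The card above describes a GGUF export, so it targets llama.cpp-style runtimes rather than plain `transformers`. A hedged sketch with `llama-cpp-python`; the exact `.gguf` filename inside `cgaege/model` is not listed in the card, so the glob below is an assumption:

```python
from llama_cpp import Llama

# Llama.from_pretrained downloads from the Hub; the filename argument
# accepts a glob, used here because the card does not name the file.
llm = Llama.from_pretrained(repo_id="cgaege/model", filename="*.gguf")
out = llm("Hello! How are you today?", max_tokens=32)
print(out["choices"][0]["text"])
```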

modelId: roleplaiapp/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-f16-GGUF
author: roleplaiapp
last_modified: 2025-01-31T11:12:33Z
downloads: 212
likes: 0
library_name: transformers
tags:
[ "transformers", "gguf", "32b", "deekseekr1", "f16", "fuseo1", "llama-cpp", "preview", "qwq", "skyt1", "text-generation", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-01-31T11:08:07Z
card:
--- library_name: transformers pipeline_tag: text-generation tags: - 32b - deekseekr1 - f16 - fuseo1 - gguf - llama-cpp - preview - qwq - skyt1 - text-generation --- # roleplaiapp/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-f16-GGUF **Repo:** `roleplaiapp/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-f16-GGUF` **Original Model:** `FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview` **Quantized File:** `FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-bf16/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-bf16-00001-of-00002.gguf` **Quantization:** `GGUF` **Quantization Method:** `f16` ## Overview This is a GGUF f16 quantized version of FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
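This f16 export is split across two GGUF shards (the card names the first). One way to fetch both with `huggingface_hub` is sketched below; the second shard's name is inferred from the `-00001-of-00002` convention, and llama.cpp then only needs the path of the first shard to pick up the rest of the split:

```python
from huggingface_hub import hf_hub_download

repo = "roleplaiapp/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-f16-GGUF"
prefix = (
    "FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-bf16/"
    "FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-bf16"
)

# Download both shards of the split GGUF.
paths = [
    hf_hub_download(repo_id=repo, filename=f"{prefix}-{i:05d}-of-00002.gguf")
    for i in (1, 2)
]
print("load with llama.cpp via:", paths[0])
```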

modelId: clarxus/74bb7c38-8fb8-4b4f-9c6d-e5ab6d1fe242
author: clarxus
last_modified: 2025-01-31T11:12:02Z
downloads: 8
likes: 0
library_name: peft
tags:
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-7b-v1.3", "base_model:adapter:lmsys/vicuna-7b-v1.3", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T10:55:41Z
card:
--- library_name: peft base_model: lmsys/vicuna-7b-v1.3 tags: - axolotl - generated_from_trainer model-index: - name: 74bb7c38-8fb8-4b4f-9c6d-e5ab6d1fe242 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: lmsys/vicuna-7b-v1.3 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 3f8d0d09c2790588_train_data.json ds_type: json format: custom path: /workspace/input_data/3f8d0d09c2790588_train_data.json type: field_input: entities field_instruction: intent field_output: text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: clarxus/74bb7c38-8fb8-4b4f-9c6d-e5ab6d1fe242 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/3f8d0d09c2790588_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: 1f90a4ba-fe05-46a7-a1bb-1e279be1741b wandb_project: Gradients-On-Seven wandb_run: your_name wandb_runid: 1f90a4ba-fe05-46a7-a1bb-1e279be1741b warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 74bb7c38-8fb8-4b4f-9c6d-e5ab6d1fe242 This model is a fine-tuned version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5521 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0060 | 1 | 2.4310 | | 2.2066 | 0.0541 | 9 | 2.0199 | | 1.0756 | 0.1081 | 18 | 0.9353 | | 0.8032 | 0.1622 | 27 | 0.7257 | | 0.6559 | 0.2162 | 36 | 0.6367 | | 0.6319 | 0.2703 | 45 | 0.6004 | | 0.5696 | 0.3243 | 54 | 0.5816 | | 0.5789 | 0.3784 | 63 | 0.5656 | | 0.6445 | 0.4324 | 72 | 0.5597 | | 0.6286 | 0.4865 | 81 | 0.5553 | | 0.5705 | 0.5405 | 90 | 0.5526 | | 0.587 | 0.5946 | 99 | 0.5521 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
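As in the other axolotl cards here, the reported total_train_batch_size is derived rather than set directly; with the config above it works out as follows (assuming the single GPU the card's numbers imply):

$$
\text{total\_train\_batch\_size} = \text{micro\_batch\_size} \times \text{gradient\_accumulation\_steps} \times \text{num\_gpus} = 8 \times 4 \times 1 = 32
$$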

modelId: blood34/e4af7e34-6545-4a7f-ac56-5f48f551d222
author: blood34
last_modified: 2025-01-31T11:10:49Z
downloads: 7
likes: 0
library_name: peft
tags:
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-410m-deduped", "base_model:adapter:EleutherAI/pythia-410m-deduped", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T11:02:01Z
card:
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-410m-deduped tags: - axolotl - generated_from_trainer model-index: - name: e4af7e34-6545-4a7f-ac56-5f48f551d222 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-410m-deduped bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - c7ccfb23153eb4e2_train_data.json ds_type: json format: custom path: /workspace/input_data/c7ccfb23153eb4e2_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: blood34/e4af7e34-6545-4a7f-ac56-5f48f551d222 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/c7ccfb23153eb4e2_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b112a2de-aff0-4f32-ba1f-4285c58878e4 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b112a2de-aff0-4f32-ba1f-4285c58878e4 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e4af7e34-6545-4a7f-ac56-5f48f551d222 This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0924 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.4868 | 0.0868 | 200 | 1.0924 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
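This run loaded the base in 8-bit (`load_in_8bit: true`), so a faithful way to reuse the adapter is to quantize the base the same way before attaching it. A minimal sketch, assuming the published adapter is compatible:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Quantize the pythia base to 8-bit with bitsandbytes, mirroring training.
base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-410m-deduped",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "blood34/e4af7e34-6545-4a7f-ac56-5f48f551d222")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m-deduped")
```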

modelId: mrferr3t/1a86e5c6-bbbe-46ad-bf05-187ec82d0853
author: mrferr3t
last_modified: 2025-01-31T11:08:54Z
downloads: 9
likes: 0
library_name: peft
tags:
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-410m-deduped", "base_model:adapter:EleutherAI/pythia-410m-deduped", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T11:05:08Z
card:
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-410m-deduped tags: - axolotl - generated_from_trainer model-index: - name: 1a86e5c6-bbbe-46ad-bf05-187ec82d0853 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-410m-deduped bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - c7ccfb23153eb4e2_train_data.json ds_type: json format: custom path: /workspace/input_data/c7ccfb23153eb4e2_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 50 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/1a86e5c6-bbbe-46ad-bf05-187ec82d0853 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0005 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 99 micro_batch_size: 2 mlflow_experiment_name: /tmp/c7ccfb23153eb4e2_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 300 saves_per_epoch: 0 sequence_len: 512 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b112a2de-aff0-4f32-ba1f-4285c58878e4 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b112a2de-aff0-4f32-ba1f-4285c58878e4 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 1a86e5c6-bbbe-46ad-bf05-187ec82d0853 This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3772 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 99 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 9.3751 | 0.0002 | 1 | 2.4215 | | 5.6137 | 0.0109 | 50 | 1.3772 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1

modelId: NalDice/askvox-llama3.3-70b-4bit
author: NalDice
last_modified: 2025-01-31T11:08:16Z
downloads: 8
likes: 0
library_name: transformers
tags:
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-01-31T11:02:44Z
card:
--- base_model: unsloth/llama-3.3-70b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** NalDice - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.3-70b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
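Unsloth uploads like this one are typically loaded back through Unsloth itself. A sketch under that assumption; the `max_seq_length` value is illustrative, not from the card:

```python
from unsloth import FastLanguageModel

# Load the 4-bit fine-tune the same way Unsloth loads its bnb-4bit bases.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NalDice/askvox-llama3.3-70b-4bit",
    max_seq_length=2048,  # illustrative value
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference mode
```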

modelId: minhnguyennnnnn/ad48f848-5d3e-488d-8102-8e3db34e21a7
author: minhnguyennnnnn
last_modified: 2025-01-31T11:08:03Z
downloads: 8
likes: 0
library_name: peft
tags:
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Coder-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T10:59:46Z
card:
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: ad48f848-5d3e-488d-8102-8e3db34e21a7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 0d5d976991e3a752_train_data.json ds_type: json format: custom path: /workspace/input_data/0d5d976991e3a752_train_data.json type: field_input: rational_answer field_instruction: question field_output: answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: minhnguyennnnnn/ad48f848-5d3e-488d-8102-8e3db34e21a7 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/0d5d976991e3a752_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 88b86b05-eecb-42b3-b66f-ca78bc5345cc wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 88b86b05-eecb-42b3-b66f-ca78bc5345cc warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # ad48f848-5d3e-488d-8102-8e3db34e21a7 This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.4475 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.3804 | 0.2315 | 200 | 0.4475 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1

modelId: arcwarden46/9b672075-9c55-4cfe-9318-80467e4b8158
author: arcwarden46
last_modified: 2025-01-31T11:07:31Z
downloads: 7
likes: 0
library_name: peft
tags:
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-410m-deduped", "base_model:adapter:EleutherAI/pythia-410m-deduped", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T11:02:01Z
card:
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-410m-deduped tags: - axolotl - generated_from_trainer model-index: - name: 9b672075-9c55-4cfe-9318-80467e4b8158 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-410m-deduped bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - c7ccfb23153eb4e2_train_data.json ds_type: json format: custom path: /workspace/input_data/c7ccfb23153eb4e2_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: arcwarden46/9b672075-9c55-4cfe-9318-80467e4b8158 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/c7ccfb23153eb4e2_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b112a2de-aff0-4f32-ba1f-4285c58878e4 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b112a2de-aff0-4f32-ba1f-4285c58878e4 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 9b672075-9c55-4cfe-9318-80467e4b8158 This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 9.417 | 0.0009 | 1 | 2.4251 | | 5.763 | 0.0434 | 50 | 1.2690 | | 4.9063 | 0.0868 | 100 | 1.0931 | | 4.3884 | 0.1302 | 150 | 0.9528 | | 4.1572 | 0.1736 | 200 | 0.9222 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1

modelId: Hypernap/model
author: Hypernap
last_modified: 2025-01-31T11:06:08Z
downloads: 8
likes: 0
library_name: transformers
tags:
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "phi", "sentiment-analysis", "finetuned", "nlp", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
pipeline_tag: null
createdAt: 2024-09-30T17:02:49Z
card:
--- base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - phi - sentiment-analysis - finetuned - nlp --- # Sentiment Finetuned Phi-3 - **Developed by:** Hypernap - **License:** apache-2.0 - **Finetuned from model :** [unsloth/phi-3.5-mini-instruct-bnb-4bit](https://huggingface.co/unsloth/phi-3.5-mini-instruct-bnb-4bit) This model is a fine-tuned version of the [unsloth/phi-3.5-mini-instruct-bnb-4bit](https://huggingface.co/unsloth/phi-3.5-mini-instruct-bnb-4bit) using a custom sentiment analysis dataset. It was trained with accelerated speed using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## Model Details This model is a fine-tuned version of the [unsloth/phi-3.5-mini-instruct-bnb-4bit](https://huggingface.co/unsloth/phi-3.5-mini-instruct-bnb-4bit) optimized for sentiment analysis tasks. The original Phi-3 model is a powerful language model, and this fine-tuned version further enhances its capabilities for tasks involving sentiment detection, classification and inference. **Intended Use:** This model is intended for use in tasks such as: * **Sentiment Analysis:** Classifying the sentiment of text as positive, negative, or neutral. * **Customer Feedback Analysis:** Analyzing reviews and feedback for sentiment. * **Social Media Monitoring:** Detecting the sentiment of posts and comments. * **Text Classification:** General text classification involving sentiment labels. * **Opinion Mining:** Understanding the sentiment within text data. **Training Details:** * **Fine-tuning Dataset:** A custom sentiment dataset was used for fine-tuning. (Optional: If your dataset is public, you can include a link or a brief description here.) * **Training Method:** The model was fine-tuned using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library, which provides optimized training for models. * **Hardware:** The model was trained using [Specify your hardware if you want]. * **Accelerated Training** Using unsloth led to 2x faster training. **Model Evaluation:** * (Optional) Provide links to evaluation metrics or example outputs if you have them available. You can include metrics like: * Accuracy * Precision, Recall and F1 scores * Qualitative analysis of the outputs **Limitations:** * The model's performance may vary on datasets significantly different from the training data. * It may struggle with sarcasm or nuanced expressions of sentiment. * The model is optimized for sentiment analysis tasks, it is not suitable as a generic language model. **Further Information:** * If you have a repository where you keep your training code, datasets, or other relevant information, you can link it here. **Acknowledgements:** * [Unsloth](https://github.com/unslothai/unsloth) for the optimized training library. * Hugging Face for the TRL library and model hosting. * [Optional] If you have used a specific dataset, give credit to the original creators.
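Since the card positions the model as a prompted sentiment classifier (and the repo ships GGUF weights), a query could look like the sketch below. Here `llm` stands for any llama.cpp-style handle as loaded in the earlier GGUF examples, and the prompt wording is illustrative, not taken from the card:

```python
def classify_sentiment(llm, text: str) -> str:
    """Ask the fine-tune for a one-word sentiment label (illustrative prompt)."""
    prompt = (
        "Classify the sentiment of the following text as positive, "
        f"negative, or neutral.\n\nText: {text}\nSentiment:"
    )
    out = llm(prompt, max_tokens=4, stop=["\n"])
    return out["choices"][0]["text"].strip().lower()

# e.g. classify_sentiment(llm, "The battery life is fantastic.")  # -> "positive"
```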

modelId: robiual-awal/20abf7d0-4633-41d6-b038-a8b7f57c84ee
author: robiual-awal
last_modified: 2025-01-31T11:05:29Z
downloads: 7
likes: 0
library_name: peft
tags:
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-410m-deduped", "base_model:adapter:EleutherAI/pythia-410m-deduped", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T11:02:47Z
card:
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-410m-deduped tags: - axolotl - generated_from_trainer model-index: - name: 20abf7d0-4633-41d6-b038-a8b7f57c84ee results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-410m-deduped bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - c7ccfb23153eb4e2_train_data.json ds_type: json format: custom path: /workspace/input_data/c7ccfb23153eb4e2_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: robiual-awal/20abf7d0-4633-41d6-b038-a8b7f57c84ee hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: constant max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/c7ccfb23153eb4e2_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b112a2de-aff0-4f32-ba1f-4285c58878e4 wandb_project: Birthday-SN56-29-Gradients-On-Demand wandb_run: your_name wandb_runid: b112a2de-aff0-4f32-ba1f-4285c58878e4 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 20abf7d0-4633-41d6-b038-a8b7f57c84ee This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0750 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0002 | 1 | 2.3973 | | 5.4772 | 0.0109 | 50 | 1.3241 | | 4.6595 | 0.0217 | 100 | 1.1514 | | 4.5177 | 0.0326 | 150 | 1.0966 | | 4.2382 | 0.0434 | 200 | 1.0750 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1

modelId: daniel40/afb9b564-f486-4102-a6ed-4cc544158032
author: daniel40
last_modified: 2025-01-31T11:05:28Z
downloads: 6
likes: 0
library_name: peft
tags:
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-Math-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Math-7B-Instruct", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T11:02:34Z
card:
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-Math-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: afb9b564-f486-4102-a6ed-4cc544158032 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-Math-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 0ff473b612eed7bf_train_data.json ds_type: json format: custom path: /workspace/input_data/0ff473b612eed7bf_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: daniel40/afb9b564-f486-4102-a6ed-4cc544158032 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/0ff473b612eed7bf_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 79066811-ed4b-4161-8d28-804ae5e605ea wandb_project: Birthday-SN56-27-Gradients-On-Demand wandb_run: your_name wandb_runid: 79066811-ed4b-4161-8d28-804ae5e605ea warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # afb9b564-f486-4102-a6ed-4cc544158032 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.1988 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0009 | 1 | 1.5160 | | 1.3804 | 0.0120 | 13 | 0.7042 | | 0.6456 | 0.0241 | 26 | 0.3317 | | 0.436 | 0.0361 | 39 | 0.1988 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1

modelId: robiulawaldev/9d16be82-86f2-4010-a6fb-e1c9d6835403
author: robiulawaldev
last_modified: 2025-01-31T11:04:39Z
downloads: 7
likes: 0
library_name: peft
tags:
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-410m-deduped", "base_model:adapter:EleutherAI/pythia-410m-deduped", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T11:02:47Z
card:
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-410m-deduped tags: - axolotl - generated_from_trainer model-index: - name: 9d16be82-86f2-4010-a6fb-e1c9d6835403 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-410m-deduped bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - c7ccfb23153eb4e2_train_data.json ds_type: json format: custom path: /workspace/input_data/c7ccfb23153eb4e2_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: robiulawaldev/9d16be82-86f2-4010-a6fb-e1c9d6835403 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: constant max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/c7ccfb23153eb4e2_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b112a2de-aff0-4f32-ba1f-4285c58878e4 wandb_project: Birthday-SN56-35-Gradients-On-Demand wandb_run: your_name wandb_runid: b112a2de-aff0-4f32-ba1f-4285c58878e4 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 9d16be82-86f2-4010-a6fb-e1c9d6835403 This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.4280 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 2.3780 | | 4.3033 | 0.0014 | 13 | 1.8024 | | 3.5385 | 0.0028 | 26 | 1.5537 | | 3.0839 | 0.0042 | 39 | 1.4280 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1

modelId: kostiantynk-out/5c716269-8375-4329-9eb5-7592134f77ef
author: kostiantynk-out
last_modified: 2025-01-31T11:03:32Z
downloads: 7
likes: 0
library_name: peft
tags:
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-410m-deduped", "base_model:adapter:EleutherAI/pythia-410m-deduped", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T11:01:43Z
card:
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-410m-deduped tags: - axolotl - generated_from_trainer model-index: - name: 5c716269-8375-4329-9eb5-7592134f77ef results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-410m-deduped bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - c7ccfb23153eb4e2_train_data.json ds_type: json format: custom path: /workspace/input_data/c7ccfb23153eb4e2_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: kostiantynk-out/5c716269-8375-4329-9eb5-7592134f77ef hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/c7ccfb23153eb4e2_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b112a2de-aff0-4f32-ba1f-4285c58878e4 wandb_project: Birthday-SN56-10-Gradients-On-Demand wandb_run: your_name wandb_runid: b112a2de-aff0-4f32-ba1f-4285c58878e4 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 5c716269-8375-4329-9eb5-7592134f77ef This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.5458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0002 | 1 | 2.4197 | | 9.2462 | 0.0028 | 13 | 1.9513 | | 7.5015 | 0.0056 | 26 | 1.6375 | | 6.6061 | 0.0085 | 39 | 1.5458 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1

modelId: hrasto/llamas3_childes_l
author: hrasto
last_modified: 2025-01-31T11:02:40Z
downloads: 23
likes: 0
library_name: transformers
tags:
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-01-31T10:04:22Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
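The card's "How to Get Started" section is left as a placeholder; the repo's tags (`transformers`, `llama`, `text-generation`) imply the standard loading pattern, sketched here without any guarantee about the checkpoint itself:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "hrasto/llamas3_childes_l"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Generate a short continuation; the card documents nothing about the
# training data, so treat outputs accordingly.
ids = tokenizer("the little dog", return_tensors="pt")
print(tokenizer.decode(model.generate(**ids, max_new_tokens=20)[0]))
```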
roleplaiapp/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-Q5_K_S-GGUF
roleplaiapp
2025-01-31T11:01:29Z
15
0
transformers
[ "transformers", "gguf", "32b", "5-bit", "Q5_K_S", "deekseekr1", "fuseo1", "llama-cpp", "preview", "qwq", "skyt1", "text-generation", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-01-31T11:00:04Z
--- library_name: transformers pipeline_tag: text-generation tags: - 32b - 5-bit - Q5_K_S - deekseekr1 - fuseo1 - gguf - llama-cpp - preview - qwq - skyt1 - text-generation --- # roleplaiapp/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-Q5_K_S-GGUF **Repo:** `roleplaiapp/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-Q5_K_S-GGUF` **Original Model:** `FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview` **Quantized File:** `FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-Q5_K_S.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q5_K_S` ## Overview This is a GGUF Q5_K_S quantized version of FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview. ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
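The card above describes the quantized artifact but not how to run it. As a minimal sketch (none of this comes from the card itself): the file can be loaded with `llama-cpp-python`, with the repo and file names copied from the card, while the context size, GPU offload, and prompt are illustrative assumptions.

```python
# Minimal sketch: load the Q5_K_S GGUF from the Hub with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface-hub`; settings are illustrative.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="roleplaiapp/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-Q5_K_S-GGUF",
    filename="FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-Q5_K_S.gguf",
    n_ctx=4096,       # context window; raise it if memory allows
    n_gpu_layers=-1,  # offload all layers when built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a Q5_K_S quant is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

At roughly 5 bits per weight, a 32B-parameter model needs on the order of 20 GB for the weights alone, so partial GPU offload (a positive `n_gpu_layers` value) is the usual compromise on smaller cards.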
nttx/451e28b6-82c8-434e-8dd4-eb29e23ad167
nttx
2025-01-31T11:01:16Z
8
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-2b-it", "base_model:adapter:unsloth/gemma-2b-it", "license:apache-2.0", "region:us" ]
null
2025-01-31T10:53:22Z
--- library_name: peft license: apache-2.0 base_model: unsloth/gemma-2b-it tags: - axolotl - generated_from_trainer model-index: - name: 451e28b6-82c8-434e-8dd4-eb29e23ad167 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-2b-it bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - b4a69513993621b7_train_data.json ds_type: json format: custom path: /workspace/input_data/b4a69513993621b7_train_data.json type: field_input: output field_instruction: instruction field_output: answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: nttx/451e28b6-82c8-434e-8dd4-eb29e23ad167 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/b4a69513993621b7_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c093f3d1-4356-46fe-b57f-880a4041af51 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c093f3d1-4356-46fe-b57f-880a4041af51 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 451e28b6-82c8-434e-8dd4-eb29e23ad167 This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.0920 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1988 | 0.1161 | 200 | 2.0920 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
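Rows like this one publish only a PEFT LoRA adapter, not merged weights, so inference needs the base model with the adapter stacked on top. A minimal sketch, assuming `peft` and `transformers` as pinned in the card's framework versions; the repo IDs come from the card, while the dtype and generation settings are illustrative.

```python
# Minimal sketch: attach the LoRA adapter from this row to its base model.
# Repo IDs come from the card; dtype/generation settings are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-2b-it"                           # `base_model` in the card
adapter_id = "nttx/451e28b6-82c8-434e-8dd4-eb29e23ad167"  # the adapter repo itself

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # layers LoRA weights onto the base

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```

The same pattern applies to every `peft` adapter row below (only `base_id` and `adapter_id` change), and `model.merge_and_unload()` can bake the adapter into the base weights if a standalone checkpoint is preferred.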
cilooor/9976dbdb-6c0d-419b-832a-7c3a1626f96d
cilooor
2025-01-31T10:59:24Z
9
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B", "base_model:adapter:unsloth/Qwen2-1.5B", "license:apache-2.0", "region:us" ]
null
2025-01-31T10:48:04Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-1.5B tags: - axolotl - generated_from_trainer model-index: - name: 9976dbdb-6c0d-419b-832a-7c3a1626f96d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-1.5B bf16: true chat_template: llama3 data_processes: 24 dataset_prepared_path: null datasets: - data_files: - 35e8b0d0959cde6a_train_data.json ds_type: json format: custom path: /workspace/input_data/35e8b0d0959cde6a_train_data.json type: field_instruction: sentence1 field_output: sentence2 format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto early_stopping_patience: 4 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: cilooor/9976dbdb-6c0d-419b-832a-7c3a1626f96d hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 7.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.07 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine lr_scheduler_warmup_steps: 50 max_grad_norm: 0.3 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/35e8b0d0959cde6a_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.999 adam_epsilon: 1e-8 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null seed: 17333 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer total_train_batch_size: 32 train_batch_size: 8 train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d3f4aa02-ae98-4a61-ba48-31b55d8d8ffe wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: d3f4aa02-ae98-4a61-ba48-31b55d8d8ffe warmup_steps: 30 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 9976dbdb-6c0d-419b-832a-7c3a1626f96d This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 17333 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.999,adam_epsilon=1e-8 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 30 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0064 | 1 | nan | | 0.0 | 0.3177 | 50 | nan | | 0.0 | 0.6354 | 100 | nan | | 0.0 | 0.9531 | 150 | nan | | 0.0 | 1.2708 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
robiual-awal/cfdc5bfc-e997-446c-bf4a-57225c9828c0
robiual-awal
2025-01-31T10:57:58Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:heegyu/WizardVicuna2-13b-hf", "base_model:adapter:heegyu/WizardVicuna2-13b-hf", "region:us" ]
null
2025-01-31T10:48:05Z
--- library_name: peft base_model: heegyu/WizardVicuna2-13b-hf tags: - axolotl - generated_from_trainer model-index: - name: cfdc5bfc-e997-446c-bf4a-57225c9828c0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: heegyu/WizardVicuna2-13b-hf bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - e87de6c43674e82c_train_data.json ds_type: json format: custom path: /workspace/input_data/e87de6c43674e82c_train_data.json type: field_input: ingredients field_instruction: method field_output: title format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: robiual-awal/cfdc5bfc-e997-446c-bf4a-57225c9828c0 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: constant max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/e87de6c43674e82c_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f139e0f5-a17c-4821-9823-884d644ea1bb wandb_project: Birthday-SN56-30-Gradients-On-Demand wandb_run: your_name wandb_runid: f139e0f5-a17c-4821-9823-884d644ea1bb warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # cfdc5bfc-e997-446c-bf4a-57225c9828c0 This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0008 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0006 | 1 | 2.8758 | | 1.0574 | 0.0324 | 50 | 1.0707 | | 0.9616 | 0.0647 | 100 | 1.0250 | | 1.0195 | 0.0971 | 150 | 1.0122 | | 0.9827 | 0.1294 | 200 | 1.0008 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mrferr3t/aa61abc6-5b04-4ba6-95a8-39ac40fdad2a
mrferr3t
2025-01-31T10:57:27Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:heegyu/WizardVicuna2-13b-hf", "base_model:adapter:heegyu/WizardVicuna2-13b-hf", "region:us" ]
null
2025-01-31T10:52:39Z
--- library_name: peft base_model: heegyu/WizardVicuna2-13b-hf tags: - axolotl - generated_from_trainer model-index: - name: aa61abc6-5b04-4ba6-95a8-39ac40fdad2a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: heegyu/WizardVicuna2-13b-hf bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - e87de6c43674e82c_train_data.json ds_type: json format: custom path: /workspace/input_data/e87de6c43674e82c_train_data.json type: field_input: ingredients field_instruction: method field_output: title format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 50 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/aa61abc6-5b04-4ba6-95a8-39ac40fdad2a hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0005 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 99 micro_batch_size: 2 mlflow_experiment_name: /tmp/e87de6c43674e82c_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 300 saves_per_epoch: 0 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f139e0f5-a17c-4821-9823-884d644ea1bb wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: f139e0f5-a17c-4821-9823-884d644ea1bb warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # aa61abc6-5b04-4ba6-95a8-39ac40fdad2a This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0810 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 99 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.5137 | 0.0006 | 1 | 2.9600 | | 0.8605 | 0.0324 | 50 | 1.0810 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
daniel40/1c0692cc-5d58-4b2c-a7c1-152960cde2b0
daniel40
2025-01-31T10:55:33Z
5
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:elyza/Llama-3-ELYZA-JP-8B", "base_model:adapter:elyza/Llama-3-ELYZA-JP-8B", "license:llama3", "region:us" ]
null
2025-01-31T10:53:36Z
--- library_name: peft license: llama3 base_model: elyza/Llama-3-ELYZA-JP-8B tags: - axolotl - generated_from_trainer model-index: - name: 1c0692cc-5d58-4b2c-a7c1-152960cde2b0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: elyza/Llama-3-ELYZA-JP-8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ec4da503e0b78c02_train_data.json ds_type: json format: custom path: /workspace/input_data/ec4da503e0b78c02_train_data.json type: field_input: Category field_instruction: Resume_str field_output: Resume_html format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: daniel40/1c0692cc-5d58-4b2c-a7c1-152960cde2b0 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ec4da503e0b78c02_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|eot_id|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 3b179f40-3440-4abe-be6f-d304b9501d33 wandb_project: Birthday-SN56-28-Gradients-On-Demand wandb_run: your_name wandb_runid: 3b179f40-3440-4abe-be6f-d304b9501d33 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 1c0692cc-5d58-4b2c-a7c1-152960cde2b0 This model is a fine-tuned version of [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.2008 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0037 | 1 | 1.4218 | | 1.2281 | 0.0479 | 13 | 0.3763 | | 0.4243 | 0.0959 | 26 | 0.2314 | | 0.2714 | 0.1438 | 39 | 0.2008 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
LaughingLogits/AP-MAE-SC2-15B
LaughingLogits
2025-01-31T10:55:12Z
16
0
transformers
[ "transformers", "safetensors", "ap_mae", "endpoints_compatible", "region:us" ]
null
2024-08-04T20:39:43Z
--- library_name: transformers tags: [] --- # AP-MAE-SC2-15B This model is currently anonymized during the paper review process. The AP-MAE transformer model design and configuration are available in the reproduction package attached to the submission. This version of AP-MAE is trained on attention heads generated by StarCoder2-15B during inference. The inference task used to generate attention outputs is FiM token prediction for a randomly masked 3-10 token span of Java code, with exactly 256 tokens of surrounding context. # Usage: ```python from ap_mae import APMAE model = APMAE.from_pretrained( "LaughingLogits/AP-MAE-SC2-15B" ) ```
alchemist69/8c7e69d5-948b-485d-a961-c122d98c83da
alchemist69
2025-01-31T10:54:32Z
9
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-1.3b", "base_model:adapter:facebook/opt-1.3b", "license:other", "region:us" ]
null
2025-01-31T10:21:50Z
--- library_name: peft license: other base_model: facebook/opt-1.3b tags: - axolotl - generated_from_trainer model-index: - name: 8c7e69d5-948b-485d-a961-c122d98c83da results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: facebook/opt-1.3b bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 07482fde303d400d_train_data.json ds_type: json format: custom path: /workspace/input_data/07482fde303d400d_train_data.json type: field_input: head field_instruction: relation field_output: tail format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: alchemist69/8c7e69d5-948b-485d-a961-c122d98c83da hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/07482fde303d400d_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 918e0db0-8fbf-4f91-ac15-ea8858c29f95 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 918e0db0-8fbf-4f91-ac15-ea8858c29f95 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 8c7e69d5-948b-485d-a961-c122d98c83da This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5557 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.1822 | 0.0001 | 1 | 3.9490 | | 1.9512 | 0.0054 | 50 | 2.2525 | | 2.413 | 0.0109 | 100 | 1.3108 | | 2.5112 | 0.0163 | 150 | 0.9324 | | 2.9385 | 0.0217 | 200 | 0.5557 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
batrider32/4f267d40-ba6b-49ab-b8e6-d970a3c9edc6
batrider32
2025-01-31T10:53:32Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/llama-2-7b", "base_model:adapter:unsloth/llama-2-7b", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T10:23:24Z
--- library_name: peft license: apache-2.0 base_model: unsloth/llama-2-7b tags: - axolotl - generated_from_trainer model-index: - name: 4f267d40-ba6b-49ab-b8e6-d970a3c9edc6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/llama-2-7b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 90e7595490c9c359_train_data.json ds_type: json format: custom path: /workspace/input_data/90e7595490c9c359_train_data.json type: field_input: Context field_instruction: Question field_output: Answers format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: batrider32/4f267d40-ba6b-49ab-b8e6-d970a3c9edc6 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/90e7595490c9c359_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 96f498e0-433b-497f-9217-797c42fe68c0 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 96f498e0-433b-497f-9217-797c42fe68c0 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 4f267d40-ba6b-49ab-b8e6-d970a3c9edc6 This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3194 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.292 | 0.1125 | 200 | 0.3194 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
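This row's tags (`8-bit`, `bitsandbytes`) and its `load_in_8bit: true` config flag indicate the adapter was trained against an 8-bit base model. Below is a sketch of the matching load path; beyond the repo IDs and the 8-bit flag, which mirror the card, everything here is an assumption.

```python
# Minimal sketch: load the base model in 8-bit (as during training) and
# attach the adapter. Only the repo IDs and load_in_8bit mirror the card.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_8bit=True)  # bitsandbytes int8 weights
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-2-7b", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "batrider32/4f267d40-ba6b-49ab-b8e6-d970a3c9edc6")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-2-7b")
```

Keeping the base in 8-bit at inference roughly halves memory versus fp16 and matches the numerics the adapter saw during training.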
dixedus/e9bc50c5-eb3a-418b-a324-63494495a1b6
dixedus
2025-01-31T10:53:06Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/llama-2-7b", "base_model:adapter:unsloth/llama-2-7b", "license:apache-2.0", "region:us" ]
null
2025-01-31T10:09:36Z
--- library_name: peft license: apache-2.0 base_model: unsloth/llama-2-7b tags: - axolotl - generated_from_trainer model-index: - name: e9bc50c5-eb3a-418b-a324-63494495a1b6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/llama-2-7b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 90e7595490c9c359_train_data.json ds_type: json format: custom path: /workspace/input_data/90e7595490c9c359_train_data.json type: field_input: Context field_instruction: Question field_output: Answers format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: dixedus/e9bc50c5-eb3a-418b-a324-63494495a1b6 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/90e7595490c9c359_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: 96f498e0-433b-497f-9217-797c42fe68c0 wandb_project: Gradients-On-Eight wandb_run: your_name wandb_runid: 96f498e0-433b-497f-9217-797c42fe68c0 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e9bc50c5-eb3a-418b-a324-63494495a1b6 This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3156 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0011 | 1 | 2.3854 | | 1.7252 | 0.0101 | 9 | 1.1801 | | 0.4464 | 0.0202 | 18 | 0.4427 | | 0.4004 | 0.0304 | 27 | 0.4017 | | 0.3786 | 0.0405 | 36 | 0.3725 | | 0.3623 | 0.0506 | 45 | 0.3516 | | 0.3217 | 0.0607 | 54 | 0.3377 | | 0.3226 | 0.0708 | 63 | 0.3276 | | 0.2732 | 0.0810 | 72 | 0.3245 | | 0.3439 | 0.0911 | 81 | 0.3177 | | 0.3153 | 0.1012 | 90 | 0.3159 | | 0.3049 | 0.1113 | 99 | 0.3156 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
havinash-ai/f0646650-653a-4fd5-892e-04b5872918ae
havinash-ai
2025-01-31T10:53:00Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:heegyu/WizardVicuna2-13b-hf", "base_model:adapter:heegyu/WizardVicuna2-13b-hf", "region:us" ]
null
2025-01-31T10:47:41Z
--- library_name: peft base_model: heegyu/WizardVicuna2-13b-hf tags: - axolotl - generated_from_trainer model-index: - name: f0646650-653a-4fd5-892e-04b5872918ae results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: heegyu/WizardVicuna2-13b-hf bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - e87de6c43674e82c_train_data.json ds_type: json format: custom path: /workspace/input_data/e87de6c43674e82c_train_data.json type: field_input: ingredients field_instruction: method field_output: title format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: havinash-ai/f0646650-653a-4fd5-892e-04b5872918ae hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/e87de6c43674e82c_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f139e0f5-a17c-4821-9823-884d644ea1bb wandb_project: Birthday-SN56-9-Gradients-On-Demand wandb_run: your_name wandb_runid: f139e0f5-a17c-4821-9823-884d644ea1bb warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f0646650-653a-4fd5-892e-04b5872918ae This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1685 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0006 | 1 | 2.9599 | | 2.2942 | 0.0084 | 13 | 1.4337 | | 1.3588 | 0.0168 | 26 | 1.2259 | | 1.1435 | 0.0252 | 39 | 1.1685 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
llm-jp/llm-jp-3-7.2b-instruct
llm-jp
2025-01-31T10:50:54Z
34
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "ja", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-12-02T02:02:31Z
--- license: apache-2.0 language: - en - ja programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript pipeline_tag: text-generation library_name: transformers inference: false --- # llm-jp-3-7.2b-instruct This repository provides large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/). For LLM-jp-3 models with different parameters, please refer to [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa) and [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731). Checkpoints format: Hugging Face Transformers ## Required Libraries and Their Versions - torch>=2.3.0 - transformers>=4.40.1 - tokenizers>=0.19.1 - accelerate>=0.29.3 - flash-attn>=2.5.8 ## Usage ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-7.2b-instruct") model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-7.2b-instruct", device_map="auto", torch_dtype=torch.bfloat16) chat = [ {"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"}, {"role": "user", "content": "自然言語処理とは何か"}, ] tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device) with torch.no_grad(): output = model.generate( tokenized_input, max_new_tokens=100, do_sample=True, top_p=0.95, temperature=0.7, repetition_penalty=1.05, )[0] print(tokenizer.decode(output)) ``` ## Model Details - **Model type:** Transformer-based Language Model - **Total seen tokens:** 2.1T |Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters| |:---:|:---:|:---:|:---:|:---:|:---:|:---:| |1.8b|24|2048|16|4096|407,498,752|1,459,718,144| |3.7b|28|3072|24|4096|611,248,128|3,171,068,928| |7.2b|32|4096|32|4096|814,997,504|6,476,271,616| |13b|40|5120|40|4096|1,018,746,880|12,688,184,320| |172b|96|12288|96|4096|2,444,992,512|169,947,181,056| ## Tokenizer The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model. The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2). Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary). ## Datasets ### Pre-training The models have been pre-trained using a blend of the following datasets. 
| Language | Dataset | Tokens| |:---|:---|---:| |Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B ||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B ||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B ||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B ||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B |English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B ||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B ||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B ||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B ||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B ||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B ||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B |Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B |Chinese|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.8B |Korean|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.3B ### Instruction tuning The models have been fine-tuned on the following datasets. | Language | Dataset | Description | |:---|:---|:---| |Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset | | |[answer-carefully-002](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/)| A manually constructed instruction dataset focusing on LLMs' safety | | |ichikara-instruction-format| A small instruction dataset edited from ichikara-instruction, with some constraints on the output format. | | |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. | | |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. | | |[wizardlm8x22b-logical-math-coding-sft_additional-ja](https://huggingface.co/datasets/kanhatakeyama/wizardlm8x22b-logical-math-coding-sft_additional-ja)| A synthetic instruction dataset. | | |[Synthetic-JP-EN-Coding-Dataset-567k](https://huggingface.co/datasets/Aratako/Synthetic-JP-EN-Coding-Dataset-567k)| A synthetic instruction dataset. We used a sampled subset.| |English |[FLAN](https://huggingface.co/datasets/Open-Orca/FLAN) | We used a sampled subset. | ## Evaluation ### llm-jp-eval (v1.3.1) We evaluated the models using 100 examples from the dev split.
| Model name | average | EL | FA | HE | MC | MR | MT | NLI | QA | RC | | :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | | [llm-jp-3-1.8b](https://huggingface.co/llm-jp/llm-jp-3-1.8b) | 0.3767 | 0.3725 | 0.1948 | 0.2350 | 0.2500 | 0.0900 | 0.7730 | 0.3080 | 0.4629 | 0.7040 | | [llm-jp-3-1.8b-instruct](https://huggingface.co/llm-jp/llm-jp-3-1.8b-instruct) | 0.4596 | 0.4280 | 0.1987 | 0.3250 | 0.3300 | 0.4200 | 0.7900 | 0.3520 | 0.4698 | 0.8224 | | [llm-jp-3-3.7b](https://huggingface.co/llm-jp/llm-jp-3-3.7b) | 0.4231 | 0.3812 | 0.2440 | 0.2200 | 0.1900 | 0.3600 | 0.7947 | 0.3800 | 0.4688 | 0.7694 | | [llm-jp-3-3.7b-instruct](https://huggingface.co/llm-jp/llm-jp-3-3.7b-instruct) | 0.5188 | 0.4191 | 0.2504 | 0.3400 | 0.5000 | 0.5800 | 0.8166 | 0.4500 | 0.4881 | 0.8247 | | [llm-jp-3-7.2b](https://huggingface.co/llm-jp/llm-jp-3-7.2b) | 0.5057 | 0.4062 | 0.2678 | 0.3450 | 0.5800 | 0.4300 | 0.8083 | 0.3480 | 0.5528 | 0.8136 | | [llm-jp-3-7.2b-instruct](https://huggingface.co/llm-jp/llm-jp-3-7.2b-instruct) | 0.5888 | 0.4282 | 0.2659 | 0.4350 | 0.8900 | 0.5800 | 0.8250 | 0.4860 | 0.5565 | 0.8330 | | [llm-jp-3-13b](https://huggingface.co/llm-jp/llm-jp-3-13b) | 0.5802 | 0.5570 | 0.2593 | 0.4600 | 0.7000 | 0.6300 | 0.8292 | 0.3460 | 0.5937 | 0.8469 | | [llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) | 0.6168 | 0.5408 | 0.2757 | 0.4950 | 0.9200 | 0.7100 | 0.8317 | 0.4640 | 0.4642 | 0.8500 | ### Japanese MT Bench We evaluated the models using `gpt-4-0613`. Please see the [codes](https://github.com/llm-jp/llm-leaderboard/tree/main) for details. | Model name | average | coding | extraction | humanities | math | reasoning | roleplay | stem | writing | | :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | | [llm-jp-3-1.8b-instruct](https://huggingface.co/llm-jp/llm-jp-3-1.8b-instruct) | 4.93 | 1.50 | 4.70 | 7.80 | 1.55 | 2.60 | 7.80 | 6.10 | 7.40 | | [llm-jp-3-3.7b-instruct](https://huggingface.co/llm-jp/llm-jp-3-3.7b-instruct) | 5.50 | 1.95 | 4.05 | 8.25 | 2.25 | 4.00 | 8.80 | 7.25 | 7.45 | | [llm-jp-3-7.2b-instruct](https://huggingface.co/llm-jp/llm-jp-3-7.2b-instruct) | 5.70 | 2.95 | 5.60 | 7.95 | 2.80 | 3.90 | 8.40 | 6.15 | 7.85 | | [llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) | 6.47 | 3.15 | 7.05 | 9.15 | 3.75 | 5.40 | 8.30 | 7.50 | 7.45 | ## Risks and Limitations The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations. ## Send Questions to llm-jp(at)nii.ac.jp ## License [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ## Model Card Authors *The names are listed in alphabetical order.* Hirokazu Kiyomaru and Takashi Kodama.
roleplaiapp/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-Q3_K_M-GGUF
roleplaiapp
2025-01-31T10:48:10Z
6
0
transformers
[ "transformers", "gguf", "3-bit", "32b", "Q3_K_M", "deekseekr1", "fuseo1", "llama-cpp", "preview", "qwq", "skyt1", "text-generation", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-01-31T10:47:09Z
--- library_name: transformers pipeline_tag: text-generation tags: - 3-bit - 32b - Q3_K_M - deekseekr1 - fuseo1 - gguf - llama-cpp - preview - qwq - skyt1 - text-generation --- # roleplaiapp/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-Q3_K_M-GGUF **Repo:** `roleplaiapp/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-Q3_K_M-GGUF` **Original Model:** `FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview` **Quantized File:** `FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-Q3_K_M.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q3_K_M` ## Overview This is a GGUF Q3_K_M quantized version of FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview. ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
blood34/1e4fbaf9-4670-42f3-b8d7-a8e0456d42b3
blood34
2025-01-31T10:47:52Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/llama-2-7b", "base_model:adapter:unsloth/llama-2-7b", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T10:16:49Z
--- library_name: peft license: apache-2.0 base_model: unsloth/llama-2-7b tags: - axolotl - generated_from_trainer model-index: - name: 1e4fbaf9-4670-42f3-b8d7-a8e0456d42b3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/llama-2-7b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 90e7595490c9c359_train_data.json ds_type: json format: custom path: /workspace/input_data/90e7595490c9c359_train_data.json type: field_input: Context field_instruction: Question field_output: Answers format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: blood34/1e4fbaf9-4670-42f3-b8d7-a8e0456d42b3 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/90e7595490c9c359_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 96f498e0-433b-497f-9217-797c42fe68c0 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 96f498e0-433b-497f-9217-797c42fe68c0 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 1e4fbaf9-4670-42f3-b8d7-a8e0456d42b3 This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3180 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.2886 | 0.1125 | 200 | 0.3180 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
daniel40/50d3a381-d2fd-4942-8680-c6936e29fd55
daniel40
2025-01-31T10:45:07Z
8
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-1.3b", "base_model:adapter:facebook/opt-1.3b", "license:other", "region:us" ]
null
2025-01-31T10:22:26Z
--- library_name: peft license: other base_model: facebook/opt-1.3b tags: - axolotl - generated_from_trainer model-index: - name: 50d3a381-d2fd-4942-8680-c6936e29fd55 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: facebook/opt-1.3b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 07482fde303d400d_train_data.json ds_type: json format: custom path: /workspace/input_data/07482fde303d400d_train_data.json type: field_input: head field_instruction: relation field_output: tail format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: daniel40/50d3a381-d2fd-4942-8680-c6936e29fd55 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: constant max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/07482fde303d400d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 918e0db0-8fbf-4f91-ac15-ea8858c29f95 wandb_project: Birthday-SN56-31-Gradients-On-Demand wandb_run: your_name wandb_runid: 918e0db0-8fbf-4f91-ac15-ea8858c29f95 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 50d3a381-d2fd-4942-8680-c6936e29fd55 This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.4693 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 3.5416 | | 4.6251 | 0.0014 | 50 | 0.9270 | | 2.8753 | 0.0027 | 100 | 0.6400 | | 1.9455 | 0.0041 | 150 | 0.5330 | | 1.6501 | 0.0054 | 200 | 0.4693 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
denbeo/c0250339-cf63-4bba-8749-65ddd25ff65c
denbeo
2025-01-31T10:39:49Z
6
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-160m", "base_model:adapter:EleutherAI/pythia-160m", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T10:36:14Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-160m tags: - axolotl - generated_from_trainer model-index: - name: c0250339-cf63-4bba-8749-65ddd25ff65c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-160m bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 0c836c5745e5786f_train_data.json ds_type: json format: custom path: /workspace/input_data/0c836c5745e5786f_train_data.json type: field_instruction: text field_output: transcription_normalised format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: denbeo/c0250339-cf63-4bba-8749-65ddd25ff65c hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/0c836c5745e5786f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f58be090-bf7e-4790-9191-88ca31e26d50 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: f58be090-bf7e-4790-9191-88ca31e26d50 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # c0250339-cf63-4bba-8749-65ddd25ff65c This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2626 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 5.0012 | 0.4317 | 200 | 1.2626 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
ggml-org/Qwen2.5-Coder-0.5B-Q8_0-GGUF
ggml-org
2025-01-31T10:38:51Z
176
0
transformers
[ "transformers", "gguf", "code", "qwen", "qwen-coder", "codeqwen", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:Qwen/Qwen2.5-Coder-0.5B", "base_model:quantized:Qwen/Qwen2.5-Coder-0.5B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T10:37:27Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B/blob/main/LICENSE language: - en base_model: Qwen/Qwen2.5-Coder-0.5B pipeline_tag: text-generation library_name: transformers tags: - code - qwen - qwen-coder - codeqwen - llama-cpp - gguf-my-repo --- # ggml-org/Qwen2.5-Coder-0.5B-Q8_0-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-0.5B`](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo ggml-org/Qwen2.5-Coder-0.5B-Q8_0-GGUF --hf-file qwen2.5-coder-0.5b-q8_0.gguf -p "The meaning of life and the universe is" ``` ### Server: ```bash llama-server --hf-repo ggml-org/Qwen2.5-Coder-0.5B-Q8_0-GGUF --hf-file qwen2.5-coder-0.5b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo ggml-org/Qwen2.5-Coder-0.5B-Q8_0-GGUF --hf-file qwen2.5-coder-0.5b-q8_0.gguf -p "The meaning of life and the universe is" ``` or ``` ./llama-server --hf-repo ggml-org/Qwen2.5-Coder-0.5B-Q8_0-GGUF --hf-file qwen2.5-coder-0.5b-q8_0.gguf -c 2048 ```
bluesky49/sn21_31JAN_11_30
bluesky49
2025-01-31T10:38:50Z
24
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-01-31T10:30:10Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
datlaaaaaaa/05407ba9-60a2-41e0-bb8a-eb257e87657d
datlaaaaaaa
2025-01-31T10:38:43Z
6
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-160m", "base_model:adapter:EleutherAI/pythia-160m", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T10:36:02Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-160m tags: - axolotl - generated_from_trainer model-index: - name: 05407ba9-60a2-41e0-bb8a-eb257e87657d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-160m bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 0c836c5745e5786f_train_data.json ds_type: json format: custom path: /workspace/input_data/0c836c5745e5786f_train_data.json type: field_instruction: text field_output: transcription_normalised format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: datlaaaaaaa/05407ba9-60a2-41e0-bb8a-eb257e87657d hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/0c836c5745e5786f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f58be090-bf7e-4790-9191-88ca31e26d50 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: f58be090-bf7e-4790-9191-88ca31e26d50 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 05407ba9-60a2-41e0-bb8a-eb257e87657d This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1762 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.8965 | 0.4317 | 200 | 1.1762 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Best000/6d1f97ab-2e40-4ce5-a565-49eac9acb8bf
Best000
2025-01-31T10:38:04Z
6
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-2b-it", "base_model:adapter:unsloth/gemma-2b-it", "license:apache-2.0", "region:us" ]
null
2025-01-31T10:34:43Z
--- library_name: peft license: apache-2.0 base_model: unsloth/gemma-2b-it tags: - axolotl - generated_from_trainer model-index: - name: 6d1f97ab-2e40-4ce5-a565-49eac9acb8bf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-2b-it bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - b4a69513993621b7_train_data.json ds_type: json format: custom path: /workspace/input_data/b4a69513993621b7_train_data.json type: field_input: output field_instruction: instruction field_output: answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: Best000/6d1f97ab-2e40-4ce5-a565-49eac9acb8bf hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/b4a69513993621b7_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c093f3d1-4356-46fe-b57f-880a4041af51 wandb_project: Birthday-SN56-15-Gradients-On-Demand wandb_run: your_name wandb_runid: c093f3d1-4356-46fe-b57f-880a4041af51 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 6d1f97ab-2e40-4ce5-a565-49eac9acb8bf This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.9457 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 6.0689 | | 4.6252 | 0.0038 | 13 | 3.6940 | | 3.7648 | 0.0075 | 26 | 3.1918 | | 3.0675 | 0.0113 | 39 | 2.9457 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Best000/dbd4e278-1fd5-4ba2-9085-3b4f276159e4
Best000
2025-01-31T10:37:26Z
6
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-2b-it", "base_model:adapter:unsloth/gemma-2b-it", "license:apache-2.0", "region:us" ]
null
2025-01-31T10:34:03Z
--- library_name: peft license: apache-2.0 base_model: unsloth/gemma-2b-it tags: - axolotl - generated_from_trainer model-index: - name: dbd4e278-1fd5-4ba2-9085-3b4f276159e4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-2b-it bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - b4a69513993621b7_train_data.json ds_type: json format: custom path: /workspace/input_data/b4a69513993621b7_train_data.json type: field_input: output field_instruction: instruction field_output: answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: Best000/dbd4e278-1fd5-4ba2-9085-3b4f276159e4 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/b4a69513993621b7_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c093f3d1-4356-46fe-b57f-880a4041af51 wandb_project: Birthday-SN56-32-Gradients-On-Demand wandb_run: your_name wandb_runid: c093f3d1-4356-46fe-b57f-880a4041af51 warmup_steps: 50 weight_decay: 0.0 xformers_attention: null ``` </details><br> # dbd4e278-1fd5-4ba2-9085-3b4f276159e4 This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 3.1476 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 6.0689 | | 5.0467 | 0.0038 | 13 | 5.6447 | | 4.919 | 0.0075 | 26 | 3.8314 | | 3.6456 | 0.0113 | 39 | 3.1476 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
smorce/Qwen2.5-Coder-32B-Instruct-karakuri-thinking-slerp-AWQ
smorce
2025-01-31T10:36:38Z
18
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "en", "ja", "dataset:izumi-lab/wikipedia-ja-20230720", "base_model:smorce/Qwen2.5-Coder-32B-Instruct-karakuri-thinking-slerp", "base_model:quantized:smorce/Qwen2.5-Coder-32B-Instruct-karakuri-thinking-slerp", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2025-01-30T19:45:17Z
--- license: apache-2.0 language: - en - ja datasets: - izumi-lab/wikipedia-ja-20230720 base_model: - smorce/Qwen2.5-Coder-32B-Instruct-karakuri-thinking-slerp library_name: transformers --- # karakuri-lm-32b-thinking-2501-exp-AWQ This model merges [karakuri-lm-32b-thinking-2501-exp, published by KARAKURI](https://huggingface.co/karakuri-ai/karakuri-lm-32b-thinking-2501-exp) with [Qwen2.5-Coder-32B-Instruct, published by the Qwen team](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct), then quantizes the merged model to AWQ 4-bit. The calibration dataset was [izumi-lab/wikipedia-ja-20230720](https://huggingface.co/datasets/izumi-lab/wikipedia-ja-20230720).<br> Note: this is not TFMC/imatrix-dataset-for-japanese-llm. The pre-quantization model and merge configuration are as follows:<br> [Qwen2.5-Coder-32B-Instruct-karakuri-thinking-slerp](https://huggingface.co/smorce/Qwen2.5-Coder-32B-Instruct-karakuri-thinking-slerp) ## Purpose Created to add coding ability to a Japanese reasoning model. ## Quantization settings ``` quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" } ``` This model collapsed during quantization, and the attempt was a failure.
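For illustration, the `quant_config` above would typically be applied with AutoAWQ along the following lines. This is a sketch under assumptions, not the author's actual script; in particular, calibration-data preparation from izumi-lab/wikipedia-ja-20230720 is elided and AutoAWQ's default calibration set is assumed.

```python
# Hypothetical sketch of applying the quant_config above with AutoAWQ.
# Not the author's script; calibration-data preparation from
# izumi-lab/wikipedia-ja-20230720 is elided (defaults assumed).
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "smorce/Qwen2.5-Coder-32B-Instruct-karakuri-thinking-slerp"
quant_path = "karakuri-lm-32b-thinking-2501-exp-AWQ"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model.quantize(tokenizer, quant_config=quant_config)  # AWQ 4-bit, GEMM kernels

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```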
shibajustfor/b763e489-8a86-4478-a7a4-9bc307395ffe
shibajustfor
2025-01-31T10:35:49Z
6
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-1.3b", "base_model:adapter:facebook/opt-1.3b", "license:other", "region:us" ]
null
2025-01-31T10:21:57Z
--- library_name: peft license: other base_model: facebook/opt-1.3b tags: - axolotl - generated_from_trainer model-index: - name: b763e489-8a86-4478-a7a4-9bc307395ffe results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: facebook/opt-1.3b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 07482fde303d400d_train_data.json ds_type: json format: custom path: /workspace/input_data/07482fde303d400d_train_data.json type: field_input: head field_instruction: relation field_output: tail format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: shibajustfor/b763e489-8a86-4478-a7a4-9bc307395ffe hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/07482fde303d400d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 918e0db0-8fbf-4f91-ac15-ea8858c29f95 wandb_project: Birthday-SN56-39-Gradients-On-Demand wandb_run: your_name wandb_runid: 918e0db0-8fbf-4f91-ac15-ea8858c29f95 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # b763e489-8a86-4478-a7a4-9bc307395ffe This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3243 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 3.7376 | | 12.5855 | 0.0004 | 13 | 1.9450 | | 8.1372 | 0.0007 | 26 | 1.5056 | | 6.0118 | 0.0011 | 39 | 1.3243 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Primeness/primeh1v12c2
Primeness
2025-01-31T10:33:58Z
39
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T10:01:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
robiulawaldev/329fb5d1-ec3d-4947-af3b-8ac00e7ebbf8
robiulawaldev
2025-01-31T10:31:25Z
6
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-1.3b", "base_model:adapter:facebook/opt-1.3b", "license:other", "region:us" ]
null
2025-01-31T10:22:09Z
--- library_name: peft license: other base_model: facebook/opt-1.3b tags: - axolotl - generated_from_trainer model-index: - name: 329fb5d1-ec3d-4947-af3b-8ac00e7ebbf8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: facebook/opt-1.3b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 07482fde303d400d_train_data.json ds_type: json format: custom path: /workspace/input_data/07482fde303d400d_train_data.json type: field_input: head field_instruction: relation field_output: tail format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: robiulawaldev/329fb5d1-ec3d-4947-af3b-8ac00e7ebbf8 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: constant max_steps: 55 micro_batch_size: 4 mlflow_experiment_name: /tmp/07482fde303d400d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 918e0db0-8fbf-4f91-ac15-ea8858c29f95 wandb_project: Birthday-SN56-37-Gradients-On-Demand wandb_run: your_name wandb_runid: 918e0db0-8fbf-4f91-ac15-ea8858c29f95 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 329fb5d1-ec3d-4947-af3b-8ac00e7ebbf8 This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 5 - training_steps: 55 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 2.9942 | | 4.3188 | 0.0004 | 14 | 1.3851 | | 3.0073 | 0.0008 | 28 | 0.9632 | | 1.8619 | 0.0011 | 42 | 0.7833 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
EEKN/my-awesome-model
EEKN
2025-01-31T10:29:51Z
16
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-01-31T08:28:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
arcwarden46/f72382c8-9a89-40d8-a344-74d0916e69d0
arcwarden46
2025-01-31T10:27:35Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:huggyllama/llama-7b", "base_model:adapter:huggyllama/llama-7b", "license:other", "region:us" ]
null
2025-01-31T09:20:08Z
--- library_name: peft license: other base_model: huggyllama/llama-7b tags: - axolotl - generated_from_trainer model-index: - name: f72382c8-9a89-40d8-a344-74d0916e69d0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: huggyllama/llama-7b bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 374d415fa346ac2b_train_data.json ds_type: json format: custom path: /workspace/input_data/374d415fa346ac2b_train_data.json type: field_input: prompt_setting field_instruction: prompt field_output: completion format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: arcwarden46/f72382c8-9a89-40d8-a344-74d0916e69d0 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/374d415fa346ac2b_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 4f915252-86ce-4fac-8a8b-ab5ecbcf4eac wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 4f915252-86ce-4fac-8a8b-ab5ecbcf4eac warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f72382c8-9a89-40d8-a344-74d0916e69d0 This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3373 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.0724 | 0.0005 | 1 | 2.6803 | | 0.3222 | 0.0251 | 50 | 0.8856 | | 0.2416 | 0.0502 | 100 | 0.8867 | | 0.2069 | 0.0754 | 150 | 0.4074 | | 0.1 | 0.1005 | 200 | 0.3373 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Nexspear/d5664b46-b2f2-47b2-8874-0e902edce97b
Nexspear
2025-01-31T10:27:10Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-7B-Instruct", "base_model:adapter:unsloth/Qwen2-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-31T04:46:49Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: d5664b46-b2f2-47b2-8874-0e902edce97b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 5fb110e3c74c3130_train_data.json ds_type: json format: custom path: /workspace/input_data/5fb110e3c74c3130_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: Nexspear/d5664b46-b2f2-47b2-8874-0e902edce97b hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/5fb110e3c74c3130_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: 5cf40287-99df-483d-bba9-4777509422cc wandb_project: Gradients-On-Four wandb_run: your_name wandb_runid: 5cf40287-99df-483d-bba9-4777509422cc warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # d5664b46-b2f2-47b2-8874-0e902edce97b This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5487 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 0.7973 | | 0.7843 | 0.0011 | 9 | 0.7085 | | 0.5868 | 0.0021 | 18 | 0.6072 | | 0.5715 | 0.0032 | 27 | 0.5835 | | 0.594 | 0.0042 | 36 | 0.5704 | | 0.5997 | 0.0053 | 45 | 0.5625 | | 0.5675 | 0.0063 | 54 | 0.5570 | | 0.5488 | 0.0074 | 63 | 0.5535 | | 0.5726 | 0.0084 | 72 | 0.5510 | | 0.535 | 0.0095 | 81 | 0.5496 | | 0.5254 | 0.0105 | 90 | 0.5489 | | 0.5629 | 0.0116 | 99 | 0.5487 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
krtk00/generic_id_lora
krtk00
2025-01-31T10:24:59Z
6
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-31T10:24:56Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: GENERICID --- # Generic_Id_Lora <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `GENERICID` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('krtk00/generic_id_lora', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
auxyus/e4bb7e28-e1bd-417f-9388-76144c406720
auxyus
2025-01-31T10:20:27Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "license:mit", "region:us" ]
null
2025-01-31T08:20:40Z
--- library_name: peft license: mit base_model: HuggingFaceH4/zephyr-7b-beta tags: - axolotl - generated_from_trainer model-index: - name: e4bb7e28-e1bd-417f-9388-76144c406720 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: HuggingFaceH4/zephyr-7b-beta bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ecd7cec85692169d_train_data.json ds_type: json format: custom path: /workspace/input_data/ecd7cec85692169d_train_data.json type: field_instruction: input_persona field_output: prompt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: auxyus/e4bb7e28-e1bd-417f-9388-76144c406720 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/ecd7cec85692169d_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 wandb_project: Gradients-On-Two wandb_run: your_name wandb_runid: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e4bb7e28-e1bd-417f-9388-76144c406720 This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7066 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0002 | 1 | 1.1927 | | 3.9578 | 0.0020 | 9 | 0.9265 | | 3.1272 | 0.0040 | 18 | 0.7899 | | 2.9513 | 0.0061 | 27 | 0.7560 | | 2.8266 | 0.0081 | 36 | 0.7394 | | 2.7893 | 0.0101 | 45 | 0.7278 | | 2.8612 | 0.0121 | 54 | 0.7208 | | 2.9883 | 0.0142 | 63 | 0.7154 | | 2.8016 | 0.0162 | 72 | 0.7107 | | 2.8385 | 0.0182 | 81 | 0.7084 | | 2.7738 | 0.0202 | 90 | 0.7070 | | 2.7729 | 0.0223 | 99 | 0.7066 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-IQ4_XS-GGUF
roleplaiapp
2025-01-31T10:20:16Z
41
0
transformers
[ "transformers", "gguf", "14b", "IQ4_XS", "cyberagent", "deepseek", "distill", "iq4", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-01-31T10:19:43Z
--- library_name: transformers pipeline_tag: text-generation tags: - 14b - IQ4_XS - cyberagent - deepseek - distill - gguf - iq4 - japanese - llama-cpp - qwen - text-generation --- # roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-IQ4_XS-GGUF **Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-IQ4_XS-GGUF` **Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf` **Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-IQ4_XS.gguf` **Quantization:** `GGUF` **Quantization Method:** `IQ4_XS` ## Overview This is a GGUF IQ4_XS quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf. ## Quantized by I often have idle GPUs while building and testing the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
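As a usage sketch (not from the card; the prompt string is a placeholder), the quantized file named above can be pulled from the Hub and run with the llama-cpp-python bindings:

```python
# Hypothetical sketch: download the IQ4_XS file named above from the Hub and
# run it with the llama-cpp-python bindings; the prompt is a placeholder.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-IQ4_XS-GGUF",
    filename="cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-IQ4_XS.gguf",
)
out = llm("your prompt here", max_tokens=64)
print(out["choices"][0]["text"])
```

Equivalently, `llama-cli --hf-repo <repo> --hf-file <file>` from llama.cpp works, following the same flags shown in the GGUF card earlier in this listing.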
lesso17/06c3acb1-7fa6-49cb-94fb-41bee1f3e6c9
lesso17
2025-01-31T10:19:30Z
14
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Math-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T09:57:00Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Math-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 06c3acb1-7fa6-49cb-94fb-41bee1f3e6c9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Math-1.5B-Instruct bf16: auto chat_template: llama3 datasets: - data_files: - 347a92d23534ee1f_train_data.json ds_type: json format: custom path: /workspace/input_data/347a92d23534ee1f_train_data.json type: field_instruction: user field_output: assistant format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso17/06c3acb1-7fa6-49cb-94fb-41bee1f3e6c9 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/347a92d23534ee1f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a wandb_project: new-01-29 wandb_run: your_name wandb_runid: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 06c3acb1-7fa6-49cb-94fb-41bee1f3e6c9 This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.3705 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
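Most of the axolotl records in this dump publish LoRA adapters rather than merged weights. As a hedged orientation (not part of the card itself), here is a minimal sketch of attaching this adapter to its base model with peft — the repo IDs are taken from the card; note the card reports an eval loss of `nan`, so the adapter is unvetted:

```python
# Minimal sketch, assuming peft + transformers are installed; IDs come from
# the card above (base_model and hub_model_id).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-Math-1.5B-Instruct"
adapter_id = "lesso17/06c3acb1-7fa6-49cb-94fb-41bee1f3e6c9"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

inputs = tokenizer("What is 7 * 8?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```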
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-IQ3_XS-GGUF
roleplaiapp
2025-01-31T10:19:00Z
19
0
transformers
[ "transformers", "gguf", "14b", "IQ3_XS", "cyberagent", "deepseek", "distill", "iq3", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-01-31T10:18:32Z
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 14b
- IQ3_XS
- cyberagent
- deepseek
- distill
- gguf
- iq3
- japanese
- llama-cpp
- qwen
- text-generation
---

# roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-IQ3_XS-GGUF

**Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-IQ3_XS-GGUF`
**Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf`
**Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-IQ3_XS.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `IQ3_XS`

## Overview
This is a GGUF IQ3_XS quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf

## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).
fifxus/6024c179-d282-4dcb-864b-5b7a1be5dece
fifxus
2025-01-31T10:17:43Z
6
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:beomi/polyglot-ko-12.8b-safetensors", "base_model:adapter:beomi/polyglot-ko-12.8b-safetensors", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T09:18:56Z
--- library_name: peft license: apache-2.0 base_model: beomi/polyglot-ko-12.8b-safetensors tags: - axolotl - generated_from_trainer model-index: - name: 6024c179-d282-4dcb-864b-5b7a1be5dece results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: beomi/polyglot-ko-12.8b-safetensors bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 5b40f032b30685c2_train_data.json ds_type: json format: custom path: /workspace/input_data/5b40f032b30685c2_train_data.json type: field_input: Context field_instruction: Claim field_output: Inconsistent Context-Span format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 2 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: true hub_model_id: fifxus/6024c179-d282-4dcb-864b-5b7a1be5dece hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/5b40f032b30685c2_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: 0ab2615f-56b6-4b59-a90d-a16528f4cf17 wandb_project: Gradients-On-10 wandb_run: your_name wandb_runid: 0ab2615f-56b6-4b59-a90d-a16528f4cf17 warmup_steps: 5 weight_decay: 0.01 xformers_attention: null ``` </details><br> # 6024c179-d282-4dcb-864b-5b7a1be5dece This model is a fine-tuned version of [beomi/polyglot-ko-12.8b-safetensors](https://huggingface.co/beomi/polyglot-ko-12.8b-safetensors) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.2880 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.9539 | 0.2117 | 200 | 0.2880 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
myhaaaaaaa/2351c3ba-b2da-44be-ad73-9f4b236f0bb8
myhaaaaaaa
2025-01-31T10:15:24Z
13
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Math-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T09:57:07Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Math-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 2351c3ba-b2da-44be-ad73-9f4b236f0bb8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Math-1.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 347a92d23534ee1f_train_data.json ds_type: json format: custom path: /workspace/input_data/347a92d23534ee1f_train_data.json type: field_instruction: user field_output: assistant format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: myhaaaaaaa/2351c3ba-b2da-44be-ad73-9f4b236f0bb8 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/347a92d23534ee1f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 2351c3ba-b2da-44be-ad73-9f4b236f0bb8 This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.3006 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1433 | 0.3705 | 200 | 2.3006 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q5_K_S-GGUF
roleplaiapp
2025-01-31T10:15:21Z
10
0
transformers
[ "transformers", "gguf", "14b", "5-bit", "Q5_K_S", "cyberagent", "deepseek", "distill", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T10:14:41Z
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 14b
- 5-bit
- Q5_K_S
- cyberagent
- deepseek
- distill
- gguf
- japanese
- llama-cpp
- qwen
- text-generation
---

# roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q5_K_S-GGUF

**Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q5_K_S-GGUF`
**Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf`
**Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-Q5_K_S.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q5_K_S`

## Overview
This is a GGUF Q5_K_S quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf

## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).
abaddon182/422b120a-54c0-4b41-bb13-cf38e6c01f76
abaddon182
2025-01-31T10:15:07Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
2025-01-31T10:14:33Z
--- library_name: peft base_model: fxmarty/tiny-llama-fast-tokenizer tags: - axolotl - generated_from_trainer model-index: - name: 422b120a-54c0-4b41-bb13-cf38e6c01f76 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: fxmarty/tiny-llama-fast-tokenizer bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - fee8d932af6f9203_train_data.json ds_type: json format: custom path: /workspace/input_data/fee8d932af6f9203_train_data.json type: field_instruction: abstract field_output: title format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: abaddon182/422b120a-54c0-4b41-bb13-cf38e6c01f76 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/fee8d932af6f9203_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 1cfdf75e-d1c9-419a-b338-98971e8ecff0 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 1cfdf75e-d1c9-419a-b338-98971e8ecff0 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 422b120a-54c0-4b41-bb13-cf38e6c01f76 This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3382 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.3729 | 0.0129 | 1 | 10.3791 | | 10.3421 | 0.6431 | 50 | 10.3446 | | 10.3764 | 1.2926 | 100 | 10.3387 | | 10.3379 | 1.9357 | 150 | 10.3383 | | 11.0988 | 2.5852 | 200 | 10.3382 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q4_K_S-GGUF
roleplaiapp
2025-01-31T10:12:33Z
5
0
transformers
[ "transformers", "gguf", "14b", "4-bit", "Q4_K_S", "cyberagent", "deepseek", "distill", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T10:11:59Z
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 14b
- 4-bit
- Q4_K_S
- cyberagent
- deepseek
- distill
- gguf
- japanese
- llama-cpp
- qwen
- text-generation
---

# roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q4_K_S-GGUF

**Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q4_K_S-GGUF`
**Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf`
**Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-Q4_K_S.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q4_K_S`

## Overview
This is a GGUF Q4_K_S quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf

## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).
lesso10/38cac83b-96f1-4d90-b4d5-c34c58ba5cfd
lesso10
2025-01-31T10:11:54Z
8
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/Yarn-Mistral-7b-128k", "base_model:adapter:NousResearch/Yarn-Mistral-7b-128k", "license:apache-2.0", "region:us" ]
null
2025-01-31T08:45:18Z
--- library_name: peft license: apache-2.0 base_model: NousResearch/Yarn-Mistral-7b-128k tags: - axolotl - generated_from_trainer model-index: - name: 38cac83b-96f1-4d90-b4d5-c34c58ba5cfd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Yarn-Mistral-7b-128k bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 248079f476a07bc3_train_data.json ds_type: json format: custom path: /workspace/input_data/248079f476a07bc3_train_data.json type: field_instruction: problem field_output: qwq format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso10/38cac83b-96f1-4d90-b4d5-c34c58ba5cfd hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/248079f476a07bc3_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e6be45b1-93a3-491a-ac21-d779477a89fc wandb_project: new-01-29 wandb_run: your_name wandb_runid: e6be45b1-93a3-491a-ac21-d779477a89fc warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 38cac83b-96f1-4d90-b4d5-c34c58ba5cfd This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5094 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0988 | 0.0286 | 200 | 0.5094 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
nblinh63/5320353f-5fcd-4e5d-a9ac-2f3ef9223543
nblinh63
2025-01-31T10:10:43Z
13
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Math-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T09:56:59Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Math-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 5320353f-5fcd-4e5d-a9ac-2f3ef9223543 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Math-1.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 347a92d23534ee1f_train_data.json ds_type: json format: custom path: /workspace/input_data/347a92d23534ee1f_train_data.json type: field_instruction: user field_output: assistant format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nblinh63/5320353f-5fcd-4e5d-a9ac-2f3ef9223543 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/347a92d23534ee1f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 5320353f-5fcd-4e5d-a9ac-2f3ef9223543 This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.2991 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1476 | 0.3705 | 200 | 2.2991 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
blood34/5612c9d0-ecaf-448c-91eb-c8208541dcc3
blood34
2025-01-31T10:10:23Z
16
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Math-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T09:57:06Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Math-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 5612c9d0-ecaf-448c-91eb-c8208541dcc3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Math-1.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 347a92d23534ee1f_train_data.json ds_type: json format: custom path: /workspace/input_data/347a92d23534ee1f_train_data.json type: field_instruction: user field_output: assistant format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: blood34/5612c9d0-ecaf-448c-91eb-c8208541dcc3 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/347a92d23534ee1f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 5612c9d0-ecaf-448c-91eb-c8208541dcc3 This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.1117 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.9628 | 0.7407 | 200 | 2.1117 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mradermacher/Multi_SFT_8B-GGUF
mradermacher
2025-01-31T10:10:20Z
261
0
transformers
[ "transformers", "gguf", "en", "base_model:rl-llm-coders/Multi_SFT_8B", "base_model:quantized:rl-llm-coders/Multi_SFT_8B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-31T08:27:01Z
---
base_model: rl-llm-coders/Multi_SFT_8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

static quants of https://huggingface.co/rl-llm-coders/Multi_SFT_8B

<!-- provided-files -->

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Multi_SFT_8B-GGUF/resolve/main/Multi_SFT_8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Multi_SFT_8B-GGUF/resolve/main/Multi_SFT_8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Multi_SFT_8B-GGUF/resolve/main/Multi_SFT_8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Multi_SFT_8B-GGUF/resolve/main/Multi_SFT_8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Multi_SFT_8B-GGUF/resolve/main/Multi_SFT_8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Multi_SFT_8B-GGUF/resolve/main/Multi_SFT_8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Multi_SFT_8B-GGUF/resolve/main/Multi_SFT_8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Multi_SFT_8B-GGUF/resolve/main/Multi_SFT_8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Multi_SFT_8B-GGUF/resolve/main/Multi_SFT_8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Multi_SFT_8B-GGUF/resolve/main/Multi_SFT_8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Multi_SFT_8B-GGUF/resolve/main/Multi_SFT_8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Multi_SFT_8B-GGUF/resolve/main/Multi_SFT_8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
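A minimal sketch (not from the card) of fetching one of the quants in the table with `huggingface_hub` — the filename is copied from the table, and the choice of Q4_K_M follows the table's own "fast, recommended" note:

```python
# Download a single GGUF file from the repo above and print its local path;
# the path can then be passed to e.g. `llama-cli -m <path>`.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Multi_SFT_8B-GGUF",
    filename="Multi_SFT_8B.Q4_K_M.gguf",  # "fast, recommended" per the table
)
print(path)
```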
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q3_K_S-GGUF
roleplaiapp
2025-01-31T10:09:55Z
5
0
transformers
[ "transformers", "gguf", "14b", "3-bit", "Q3_K_S", "cyberagent", "deepseek", "distill", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T10:09:29Z
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 14b
- 3-bit
- Q3_K_S
- cyberagent
- deepseek
- distill
- gguf
- japanese
- llama-cpp
- qwen
- text-generation
---

# roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q3_K_S-GGUF

**Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q3_K_S-GGUF`
**Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf`
**Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-Q3_K_S.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q3_K_S`

## Overview
This is a GGUF Q3_K_S quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf

## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).
zkkdinx/DeepSeek-R1-Distill-Qwen-7B-Q3_K_M-GGUF
zkkdinx
2025-01-31T10:09:54Z
10
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-31T10:09:35Z
---
license: mit
library_name: transformers
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
tags:
- llama-cpp
- gguf-my-repo
---

# zkkdinx/DeepSeek-R1-Distill-Qwen-7B-Q3_K_M-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-Distill-Qwen-7B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo zkkdinx/DeepSeek-R1-Distill-Qwen-7B-Q3_K_M-GGUF --hf-file deepseek-r1-distill-qwen-7b-q3_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo zkkdinx/DeepSeek-R1-Distill-Qwen-7B-Q3_K_M-GGUF --hf-file deepseek-r1-distill-qwen-7b-q3_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zkkdinx/DeepSeek-R1-Distill-Qwen-7B-Q3_K_M-GGUF --hf-file deepseek-r1-distill-qwen-7b-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zkkdinx/DeepSeek-R1-Distill-Qwen-7B-Q3_K_M-GGUF --hf-file deepseek-r1-distill-qwen-7b-q3_k_m.gguf -c 2048
```
nhung01/73173035-8276-4be8-85c9-1cf4423ed441
nhung01
2025-01-31T10:09:16Z
13
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Math-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T09:57:14Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Math-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 73173035-8276-4be8-85c9-1cf4423ed441 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Math-1.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 347a92d23534ee1f_train_data.json ds_type: json format: custom path: /workspace/input_data/347a92d23534ee1f_train_data.json type: field_instruction: user field_output: assistant format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhung01/73173035-8276-4be8-85c9-1cf4423ed441 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/347a92d23534ee1f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 73173035-8276-4be8-85c9-1cf4423ed441 This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.2982 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1398 | 0.3705 | 200 | 2.2982 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q3_K_M-GGUF
roleplaiapp
2025-01-31T10:08:45Z
5
0
transformers
[ "transformers", "gguf", "14b", "3-bit", "Q3_K_M", "cyberagent", "deepseek", "distill", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T10:08:15Z
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 14b
- 3-bit
- Q3_K_M
- cyberagent
- deepseek
- distill
- gguf
- japanese
- llama-cpp
- qwen
- text-generation
---

# roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q3_K_M-GGUF

**Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q3_K_M-GGUF`
**Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf`
**Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-Q3_K_M.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q3_K_M`

## Overview
This is a GGUF Q3_K_M quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf

## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).
nhoxinh/3dff4aff-496d-4423-9263-42f395c53796
nhoxinh
2025-01-31T10:08:26Z
13
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Math-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T09:57:06Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Math-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 3dff4aff-496d-4423-9263-42f395c53796 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Math-1.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 347a92d23534ee1f_train_data.json ds_type: json format: custom path: /workspace/input_data/347a92d23534ee1f_train_data.json type: field_instruction: user field_output: assistant format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhoxinh/3dff4aff-496d-4423-9263-42f395c53796 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/347a92d23534ee1f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 3dff4aff-496d-4423-9263-42f395c53796 This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.2975 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1488 | 0.3705 | 200 | 2.2975 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
ysaugle/akshay_flux
ysaugle
2025-01-31T10:07:56Z
25
1
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-31T05:13:35Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: AKSHAY
---

# Akshay_Flux

<Gallery />

Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `AKSHAY` to trigger the image generation.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ysaugle/akshay_flux', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
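The card defers weighting, merging, and fusing to the diffusers docs; as a hedged follow-up to the snippet above, `fuse_lora` (a standard diffusers pipeline method, not something the card itself demonstrates) bakes the adapter into the base weights so repeated generations skip the adapter overhead:

```py
# Follow-up sketch, assuming `pipeline` from the card's snippet above.
pipeline.fuse_lora(lora_scale=1.0)  # apply the LoRA at full strength
image = pipeline('AKSHAY portrait, studio lighting').images[0]  # trigger word per the card
```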
nhung02/164e2166-9045-49a0-801e-d1f4224a02e5
nhung02
2025-01-31T10:07:39Z
13
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Math-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T09:57:10Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Math-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 164e2166-9045-49a0-801e-d1f4224a02e5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Math-1.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 347a92d23534ee1f_train_data.json ds_type: json format: custom path: /workspace/input_data/347a92d23534ee1f_train_data.json type: field_instruction: user field_output: assistant format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhung02/164e2166-9045-49a0-801e-d1f4224a02e5 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/347a92d23534ee1f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 164e2166-9045-49a0-801e-d1f4224a02e5 This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.2970 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1351 | 0.3705 | 200 | 2.2970 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q3_K_L-GGUF
roleplaiapp
2025-01-31T10:07:30Z
5
0
transformers
[ "transformers", "gguf", "14b", "3-bit", "Q3_K_L", "cyberagent", "deepseek", "distill", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T10:06:58Z
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 14b
- 3-bit
- Q3_K_L
- cyberagent
- deepseek
- distill
- gguf
- japanese
- llama-cpp
- qwen
- text-generation
---

# roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q3_K_L-GGUF

**Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q3_K_L-GGUF`
**Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf`
**Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-Q3_K_L.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q3_K_L`

## Overview
This is a GGUF Q3_K_L quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf

## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).
cunghoctienganh/983ec172-27bf-4fb1-a811-a6cdae0125ef
cunghoctienganh
2025-01-31T10:06:06Z
10
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B", "base_model:adapter:unsloth/Qwen2-1.5B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T09:53:55Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-1.5B tags: - axolotl - generated_from_trainer model-index: - name: 983ec172-27bf-4fb1-a811-a6cdae0125ef results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-1.5B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 35e8b0d0959cde6a_train_data.json ds_type: json format: custom path: /workspace/input_data/35e8b0d0959cde6a_train_data.json type: field_instruction: sentence1 field_output: sentence2 format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: cunghoctienganh/983ec172-27bf-4fb1-a811-a6cdae0125ef hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/35e8b0d0959cde6a_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d3f4aa02-ae98-4a61-ba48-31b55d8d8ffe wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: d3f4aa02-ae98-4a61-ba48-31b55d8d8ffe warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 983ec172-27bf-4fb1-a811-a6cdae0125ef This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 4.4442 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.5216 | 0.3177 | 200 | 4.4442 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
thaffggg/8cd9e8d3-f604-4f79-8400-04ab2fef028f
thaffggg
2025-01-31T10:05:49Z
10
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B", "base_model:adapter:unsloth/Qwen2-1.5B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T09:53:54Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-1.5B tags: - axolotl - generated_from_trainer model-index: - name: 8cd9e8d3-f604-4f79-8400-04ab2fef028f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-1.5B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 35e8b0d0959cde6a_train_data.json ds_type: json format: custom path: /workspace/input_data/35e8b0d0959cde6a_train_data.json type: field_instruction: sentence1 field_output: sentence2 format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: thaffggg/8cd9e8d3-f604-4f79-8400-04ab2fef028f hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/35e8b0d0959cde6a_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d3f4aa02-ae98-4a61-ba48-31b55d8d8ffe wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: d3f4aa02-ae98-4a61-ba48-31b55d8d8ffe warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 8cd9e8d3-f604-4f79-8400-04ab2fef028f This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 4.4460 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.5295 | 0.3177 | 200 | 4.4460 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
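A minimal sketch of loading this LoRA adapter onto its base model with PEFT; the repo ids are taken from the config above, while the prompt and generation length are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model and adapter ids come from the axolotl config in this card.
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-1.5B")
model = PeftModel.from_pretrained(base, "thaffggg/8cd9e8d3-f604-4f79-8400-04ab2fef028f")
tok = AutoTokenizer.from_pretrained("unsloth/Qwen2-1.5B")

# Illustrative prompt matching the sentence1 -> sentence2 training format.
inputs = tok("The weather is nice today.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```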
batrider32/37cdf396-a2e6-4150-84e0-175a6ca14c24
batrider32
2025-01-31T10:05:47Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:huggyllama/llama-7b", "base_model:adapter:huggyllama/llama-7b", "license:other", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T09:20:23Z
--- library_name: peft license: other base_model: huggyllama/llama-7b tags: - axolotl - generated_from_trainer model-index: - name: 37cdf396-a2e6-4150-84e0-175a6ca14c24 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: huggyllama/llama-7b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 374d415fa346ac2b_train_data.json ds_type: json format: custom path: /workspace/input_data/374d415fa346ac2b_train_data.json type: field_input: prompt_setting field_instruction: prompt field_output: completion format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: batrider32/37cdf396-a2e6-4150-84e0-175a6ca14c24 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/374d415fa346ac2b_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 4f915252-86ce-4fac-8a8b-ab5ecbcf4eac wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 4f915252-86ce-4fac-8a8b-ab5ecbcf4eac warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 37cdf396-a2e6-4150-84e0-175a6ca14c24 This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.2986 | 0.0502 | 200 | 0.3840 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
minhtrannnn/e042205c-657a-456a-96ea-14cbcefd0741
minhtrannnn
2025-01-31T10:04:58Z
12
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B", "base_model:adapter:unsloth/Qwen2-1.5B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T09:53:53Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-1.5B tags: - axolotl - generated_from_trainer model-index: - name: e042205c-657a-456a-96ea-14cbcefd0741 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-1.5B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 35e8b0d0959cde6a_train_data.json ds_type: json format: custom path: /workspace/input_data/35e8b0d0959cde6a_train_data.json type: field_instruction: sentence1 field_output: sentence2 format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: minhtrannnn/e042205c-657a-456a-96ea-14cbcefd0741 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/35e8b0d0959cde6a_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d3f4aa02-ae98-4a61-ba48-31b55d8d8ffe wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: d3f4aa02-ae98-4a61-ba48-31b55d8d8ffe warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # e042205c-657a-456a-96ea-14cbcefd0741 This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 4.4440 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.5475 | 0.3177 | 200 | 4.4440 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q6_K-GGUF
roleplaiapp
2025-01-31T10:04:32Z
12
0
transformers
[ "transformers", "gguf", "14b", "6-bit", "Q6_K", "cyberagent", "deepseek", "distill", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T10:03:45Z
--- library_name: transformers pipeline_tag: text-generation tags: - 14b - 6-bit - Q6_K - cyberagent - deepseek - distill - gguf - japanese - llama-cpp - qwen - text-generation --- # roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q6_K-GGUF **Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q6_K-GGUF` **Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf` **Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-Q6_K.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q6_K` ## Overview This is a GGUF Q6_K quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
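A minimal sketch of running the quantized file with the `llama-cpp-python` bindings, assuming the Q6_K file named in the card has been downloaded locally; the context size, prompt, and token budget are illustrative:

```python
from llama_cpp import Llama

# Assumes the quantized file from this repo sits in the working directory.
llm = Llama(
    model_path="cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-Q6_K.gguf",
    n_ctx=4096,  # illustrative context window
)
out = llm("日本の首都はどこですか?", max_tokens=128)
print(out["choices"][0]["text"])
```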
altomek/RE-70B-AS3D
altomek
2025-01-31T10:04:01Z
17
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "conversational", "en", "base_model:SicariusSicariiStuff/Negative_LLAMA_70B", "base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B", "base_model:SillyTilly/Meta-Llama-3.1-70B", "base_model:merge:SillyTilly/Meta-Llama-3.1-70B", "base_model:SillyTilly/Meta-Llama-3.1-70B-Instruct", "base_model:merge:SillyTilly/Meta-Llama-3.1-70B-Instruct", "base_model:unsloth/Llama-3.3-70B-Instruct", "base_model:merge:unsloth/Llama-3.3-70B-Instruct", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-18T22:29:13Z
--- language: - en license: llama3 library_name: transformers tags: - merge base_model: - SillyTilly/Meta-Llama-3.1-70B - SillyTilly/Meta-Llama-3.1-70B-Instruct - unsloth/Llama-3.3-70B-Instruct - SicariusSicariiStuff/Negative_LLAMA_70B --- # <img src=https://huggingface.co/altomek/RE-70B-AS3D/resolve/main/RE.png> <a href="https://www.youtube.com/watch?v=kYje-wdAUsg" title="i_o - Audio Dust" target="_blank">intro music...</a> ## Llama RE-70B-AS3D I wanted a model that would unlock the full Llama personality while still following instructions. This is the first interesting result from the voyage... ### Ingredients - [Llama-3.1-70B](https://huggingface.co/SillyTilly/Meta-Llama-3.1-70B) - [Llama-3.1-70B-Instruct](https://huggingface.co/SillyTilly/Meta-Llama-3.1-70B-Instruct) - [Llama-3.3-70B-Instruct](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct) - [Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B) ### Settings Use the Llama 3 template (see the sketch below). ### Quants - [GGUF](https://huggingface.co/altomek/RE-70B-AS3D-GGUF) --> UPLOADING! - [3 BPW](https://huggingface.co/altomek/RE-70B-AS3D-3bpw-EXL2) - [3.5 BPW](https://huggingface.co/altomek/RE-70B-AS3D-3.5bpw-EXL2) - [3.75 BPW](https://huggingface.co/altomek/RE-70B-AS3D-3.75bpw-EXL2) - [4 BPW](https://huggingface.co/altomek/RE-70B-AS3D-4bpw-EXL2) - [4.25 BPW](https://huggingface.co/altomek/RE-70B-AS3D-4.25bpw-EXL2)
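A minimal sketch of applying that template via `tokenizer.apply_chat_template`, assuming the merged repo inherits the standard Llama 3 chat template from its ingredients; the messages are illustrative:

```python
from transformers import AutoTokenizer

# Assumption: this merge ships the standard Llama 3 chat template in its tokenizer config.
tok = AutoTokenizer.from_pretrained("altomek/RE-70B-AS3D")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Introduce yourself in one sentence."},
]
# Renders the Llama 3 special tokens (<|start_header_id|>, <|eot_id|>, ...) around each turn.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```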
dixedus/a1101d62-5d31-4897-8377-98ec4b2ea042
dixedus
2025-01-31T10:03:28Z
16
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Math-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-31T09:57:01Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Math-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: a1101d62-5d31-4897-8377-98ec4b2ea042 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Math-1.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 347a92d23534ee1f_train_data.json ds_type: json format: custom path: /workspace/input_data/347a92d23534ee1f_train_data.json type: field_instruction: user field_output: assistant format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: dixedus/a1101d62-5d31-4897-8377-98ec4b2ea042 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/347a92d23534ee1f_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a wandb_project: Gradients-On-Eight wandb_run: your_name wandb_runid: 0d4b8f4b-e49b-43b6-999c-7c75ab8cf01a warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # a1101d62-5d31-4897-8377-98ec4b2ea042 This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.0868 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0074 | 1 | 3.7154 | | 3.5719 | 0.0667 | 9 | 3.5388 | | 3.1662 | 0.1333 | 18 | 2.8673 | | 2.6375 | 0.2 | 27 | 2.5019 | | 2.3038 | 0.2667 | 36 | 2.3341 | | 2.462 | 0.3333 | 45 | 2.2360 | | 2.3586 | 0.4 | 54 | 2.1743 | | 2.1036 | 0.4667 | 63 | 2.1324 | | 2.1387 | 0.5333 | 72 | 2.1069 | | 2.1381 | 0.6 | 81 | 2.0938 | | 2.1019 | 0.6667 | 90 | 2.0883 | | 2.073 | 0.7333 | 99 | 2.0868 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q2_K-GGUF
roleplaiapp
2025-01-31T10:03:01Z
14
0
transformers
[ "transformers", "gguf", "14b", "2-bit", "Q2_K", "cyberagent", "deepseek", "distill", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T10:02:37Z
--- library_name: transformers pipeline_tag: text-generation tags: - 14b - 2-bit - Q2_K - cyberagent - deepseek - distill - gguf - japanese - llama-cpp - qwen - text-generation --- # roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q2_K-GGUF **Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q2_K-GGUF` **Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf` **Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-Q2_K.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q2_K` ## Overview This is a GGUF Q2_K quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
hrasto/llamas3_childes_h
hrasto
2025-01-31T10:02:55Z
22
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T09:03:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nhunglaaaaaaa/0c932d44-fd3f-4eda-a317-7fc576a3a224
nhunglaaaaaaa
2025-01-31T10:01:52Z
10
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B", "base_model:adapter:unsloth/Qwen2-1.5B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T09:53:48Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-1.5B tags: - axolotl - generated_from_trainer model-index: - name: 0c932d44-fd3f-4eda-a317-7fc576a3a224 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-1.5B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 35e8b0d0959cde6a_train_data.json ds_type: json format: custom path: /workspace/input_data/35e8b0d0959cde6a_train_data.json type: field_instruction: sentence1 field_output: sentence2 format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhunglaaaaaaa/0c932d44-fd3f-4eda-a317-7fc576a3a224 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/35e8b0d0959cde6a_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d3f4aa02-ae98-4a61-ba48-31b55d8d8ffe wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: d3f4aa02-ae98-4a61-ba48-31b55d8d8ffe warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 0c932d44-fd3f-4eda-a317-7fc576a3a224 This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 4.4426 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.5229 | 0.3177 | 200 | 4.4426 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mrferr3t/63fd0687-b8b1-4c7a-8105-b12ea67aac4c
mrferr3t
2025-01-31T09:59:21Z
10
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-1.5B", "base_model:adapter:unsloth/Qwen2-1.5B", "license:apache-2.0", "region:us" ]
null
2025-01-31T09:58:11Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-1.5B tags: - axolotl - generated_from_trainer model-index: - name: 63fd0687-b8b1-4c7a-8105-b12ea67aac4c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-1.5B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 35e8b0d0959cde6a_train_data.json ds_type: json format: custom path: /workspace/input_data/35e8b0d0959cde6a_train_data.json type: field_instruction: sentence1 field_output: sentence2 format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 50 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/63fd0687-b8b1-4c7a-8105-b12ea67aac4c hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0005 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 99 micro_batch_size: 2 mlflow_experiment_name: /tmp/35e8b0d0959cde6a_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 300 saves_per_epoch: 0 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d3f4aa02-ae98-4a61-ba48-31b55d8d8ffe wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: d3f4aa02-ae98-4a61-ba48-31b55d8d8ffe warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 63fd0687-b8b1-4c7a-8105-b12ea67aac4c This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.2161 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 99 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.671 | 0.0016 | 1 | 5.5637 | | 4.4602 | 0.0794 | 50 | 4.2161 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-IQ4_XS-GGUF
roleplaiapp
2025-01-31T09:57:47Z
27
0
transformers
[ "transformers", "gguf", "32b", "IQ4_XS", "cyberagent", "deepseek", "distill", "iq4", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-01-31T09:56:45Z
--- library_name: transformers pipeline_tag: text-generation tags: - 32b - IQ4_XS - cyberagent - deepseek - distill - gguf - iq4 - japanese - llama-cpp - qwen - text-generation --- # roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-IQ4_XS-GGUF **Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-IQ4_XS-GGUF` **Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf` **Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-IQ4_XS.gguf` **Quantization:** `GGUF` **Quantization Method:** `IQ4_XS` ## Overview This is a GGUF IQ4_XS quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
friendshipkim/1b_instruct-ver2
friendshipkim
2025-01-31T09:56:51Z
12
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T09:35:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sleepdeprived3/Mistral-Small-24B-Instruct-2501_EXL2_8bpw_H8
sleepdeprived3
2025-01-31T09:56:44Z
17
0
vllm
[ "vllm", "safetensors", "mistral", "text-generation", "transformers", "conversational", "en", "fr", "de", "es", "it", "pt", "zh", "ja", "ru", "ko", "base_model:mistralai/Mistral-Small-24B-Base-2501", "base_model:quantized:mistralai/Mistral-Small-24B-Base-2501", "license:apache-2.0", "text-generation-inference", "8-bit", "exl2", "region:us" ]
text-generation
2025-01-31T08:30:35Z
--- language: - en - fr - de - es - it - pt - zh - ja - ru - ko license: apache-2.0 library_name: vllm inference: false base_model: - mistralai/Mistral-Small-24B-Base-2501 extra_gated_description: >- If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. tags: - transformers --- # Model Card for Mistral-Small-24B-Instruct-2501 Mistral Small 3 (2501) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models! This model is an instruction-fine-tuned version of the base model: [Mistral-Small-24B-Base-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501). Mistral Small can be deployed locally and is exceptionally "knowledge-dense", fitting in a single RTX 4090 or a 32GB RAM MacBook once quantized. Perfect for: - Fast response conversational agents. - Low latency function calling. - Subject matter experts via fine-tuning. - Local inference for hobbyists and organizations handling sensitive data. For enterprises that need specialized capabilities (increased context, particular modalities, domain-specific knowledge, etc.), we will be releasing commercial models beyond what Mistral AI contributes to the community. This release demonstrates our commitment to open source, serving as a strong base model. Learn more about Mistral Small in our [blog post](https://mistral.ai/news/mistral-small-3/). Model developer: Mistral AI Team ## Key Features - **Multilingual:** Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish. - **Agent-Centric:** Offers best-in-class agentic capabilities with native function calling and JSON outputting. - **Advanced Reasoning:** State-of-the-art conversational and reasoning capabilities. - **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes. - **Context Window:** A 32k context window. - **System Prompt:** Maintains strong adherence and support for system prompts. - **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size. ## Benchmark results ### Human evaluated benchmarks | Category | Gemma-2-27B | Qwen-2.5-32B | Llama-3.3-70B | Gpt4o-mini | |----------|-------------|--------------|---------------|------------| | Mistral is better | 0.536 | 0.496 | 0.192 | 0.200 | | Mistral is slightly better | 0.196 | 0.184 | 0.164 | 0.204 | | Ties | 0.052 | 0.060 | 0.236 | 0.160 | | Other is slightly better | 0.060 | 0.088 | 0.112 | 0.124 | | Other is better | 0.156 | 0.172 | 0.296 | 0.312 | **Note**: - We conducted side-by-side evaluations with an external third-party vendor, on a set of over 1k proprietary coding and generalist prompts. - Evaluators were tasked with selecting their preferred model response from anonymized generations produced by Mistral Small 3 vs another model. - We are aware that in some cases the benchmarks on human judgement starkly differ from publicly available benchmarks, but have taken extra caution in verifying a fair evaluation. We are confident that the above benchmarks are valid. 
### Publicly accessible benchmarks **Reasoning & Knowledge** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | mmlu_pro_5shot_cot_instruct | 0.663 | 0.536 | 0.666 | 0.683 | 0.617 | | gpqa_main_cot_5shot_instruct | 0.453 | 0.344 | 0.531 | 0.404 | 0.377 | **Math & Coding** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | humaneval_instruct_pass@1 | 0.848 | 0.732 | 0.854 | 0.909 | 0.890 | | math_instruct | 0.706 | 0.535 | 0.743 | 0.819 | 0.761 | **Instruction following** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | mtbench_dev | 8.35 | 7.86 | 7.96 | 8.26 | 8.33 | | wildbench | 52.27 | 48.21 | 50.04 | 52.73 | 56.13 | | arena_hard | 0.873 | 0.788 | 0.840 | 0.860 | 0.897 | | ifeval | 0.829 | 0.8065 | 0.8835 | 0.8401 | 0.8499 | **Note**: - Performance accuracy on all benchmarks was obtained through the same internal evaluation pipeline - as such, numbers may vary slightly from previously reported performance ([Qwen2.5-32B-Instruct](https://qwenlm.github.io/blog/qwen2.5/), [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), [Gemma-2-27B-IT](https://huggingface.co/google/gemma-2-27b-it)). - Judge-based evals such as Wildbench, Arena hard and MTBench were based on gpt-4o-2024-05-13. ### Basic Instruct Template (V7-Tekken) ``` <s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST] ``` *`<system_prompt>`, `<user message>` and `<assistant response>` are placeholders.* ***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth.*** ## Usage The model can be used with the following frameworks: - [`vllm`](https://github.com/vllm-project/vllm): See [here](#vLLM) - [`transformers`](https://github.com/huggingface/transformers): See [here](#Transformers) ### vLLM We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm) to implement production-ready inference pipelines. **Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`. **Note 2**: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend the following system prompt: ``` system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris. Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30. When you're not sure about some information, you say that you don't have the information and don't make up anything. If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. 
\"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")""" ``` **_Installation_** Make sure you install [`vLLM >= 0.6.4`](https://github.com/vllm-project/vllm/releases/tag/v0.6.4): ``` pip install --upgrade vllm ``` Also make sure you have [`mistral_common >= 1.5.2`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.2) installed: ``` pip install --upgrade mistral_common ``` You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or on the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39). #### Server We recommand that you use Mistral-Small-24B-Instruct-2501 in a server/client setting. 1. Spin up a server: ``` vllm serve mistralai/Mistral-Small-24B-Instruct-2501 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice ``` **Note:** Running Mistral-Small-24B-Instruct-2501 on GPU requires ~55 GB of GPU RAM in bf16 or fp16. 2. To ping the client you can use a simple Python snippet. ```py import requests import json from datetime import datetime, timedelta url = "http://<your-server>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Mistral-Small-24B-Instruct-2501" messages = [ { "role": "system", "content": "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat." }, { "role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French." }, ] data = {"model": model, "messages": messages} response = requests.post(url, headers=headers, data=json.dumps(data)) print(response.json()["choices"][0]["message"]["content"]) # Sure, here are five non-formal ways to say "See you later" in French: # # 1. À plus tard # 2. À plus # 3. Salut # 4. À toute # 5. Bisous # # ``` # /\_/\ # ( o.o ) # > ^ < # ``` ``` ### Function calling Mistral-Small-24-Instruct-2501 is excellent at function / tool calling tasks via vLLM. *E.g.:* <details> <summary>Example</summary> ```py import requests import json from huggingface_hub import hf_hub_download from datetime import datetime, timedelta url = "http://<your-url>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Mistral-Small-24B-Instruct-2501" def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() today = datetime.today().strftime("%Y-%m-%d") yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d") model_name = repo_id.split("/")[-1] return system_prompt.format(name=model_name, today=today, yesterday=yesterday) SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt") tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "city": { "type": "string", "description": "The city to find the weather for, e.g. 'San Francisco'", }, "state": { "type": "string", "description": "The state abbreviation, e.g. 
'CA' for California", }, "unit": { "type": "string", "description": "The unit for temperature", "enum": ["celsius", "fahrenheit"], }, }, "required": ["city", "state", "unit"], }, }, }, { "type": "function", "function": { "name": "rewrite", "description": "Rewrite a given text for improved clarity", "parameters": { "type": "object", "properties": { "text": { "type": "string", "description": "The input text to rewrite", } }, }, }, }, ] messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.", }, { "role": "assistant", "content": "", "tool_calls": [ { "id": "bbc5b7ede", "type": "function", "function": { "name": "rewrite", "arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}', }, } ], }, { "role": "tool", "content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}', "tool_call_id": "bbc5b7ede", "name": "rewrite", }, { "role": "assistant", "content": "---\n\nOpenAI is a FOR-profit company.", }, { "role": "user", "content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?", }, ] data = {"model": model, "messages": messages, "tools": tools} response = requests.post(url, headers=headers, data=json.dumps(data)) print(response.json()["choices"][0]["message"]["tool_calls"]) # [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}] ``` </details> #### Offline ```py from vllm import LLM from vllm.sampling_params import SamplingParams from datetime import datetime, timedelta SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat." user_prompt = "Give me 5 non-formal ways to say 'See you later' in French." messages = [ { "role": "system", "content": SYSTEM_PROMPT }, { "role": "user", "content": user_prompt }, ] model_name = "mistralai/Mistral-Small-24B-Instruct-2501" # note that running this model on GPU requires over 60 GB of GPU RAM llm = LLM(model=model_name, tokenizer_mode="mistral", tensor_parallel_size=8) sampling_params = SamplingParams(max_tokens=512, temperature=0.15) outputs = llm.chat(messages, sampling_params=sampling_params) print(outputs[0].outputs[0].text) # Sure, here are five non-formal ways to say "See you later" in French: # # 1. À plus tard # 2. À plus # 3. Salut # 4. À toute # 5. Bisous # # ``` # /\_/\ # ( o.o ) # > ^ < # ``` ``` ### Transformers If you want to use Hugging Face transformers to generate text, you can do something like this. ```py from transformers import pipeline import torch messages = [ {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."}, ] chatbot = pipeline("text-generation", model="mistralai/Mistral-Small-24B-Instruct-2501", max_new_tokens=256, torch_dtype=torch.bfloat16) chatbot(messages) ``` ### Ollama [Ollama](https://github.com/ollama/ollama) can run this model locally on MacOS, Windows and Linux. 
``` ollama run mistral-small ``` 4-bit quantization (aliased to default): ``` ollama run mistral-small:24b-instruct-2501-q4_K_M ``` 8-bit quantization: ``` ollama run mistral-small:24b-instruct-2501-q8_0 ``` FP16: ``` ollama run mistral-small:24b-instruct-2501-fp16 ```
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-IQ3_XS-GGUF
roleplaiapp
2025-01-31T09:56:00Z
5
0
transformers
[ "transformers", "gguf", "32b", "IQ3_XS", "cyberagent", "deepseek", "distill", "iq3", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-01-31T09:55:07Z
--- library_name: transformers pipeline_tag: text-generation tags: - 32b - IQ3_XS - cyberagent - deepseek - distill - gguf - iq3 - japanese - llama-cpp - qwen - text-generation --- # roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-IQ3_XS-GGUF **Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-IQ3_XS-GGUF` **Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf` **Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-IQ3_XS.gguf` **Quantization:** `GGUF` **Quantization Method:** `IQ3_XS` ## Overview This is a GGUF IQ3_XS quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
fakezeta/DeepSeek-R1-Distill-Llama-8B-ov-int8
fakezeta
2025-01-31T09:51:04Z
5
0
transformers
[ "transformers", "safetensors", "openvino", "llama", "text-generation", "openvino-export", "conversational", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T09:50:03Z
--- license: mit library_name: transformers tags: - openvino - openvino-export pipeline_tag: text-generation base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B --- This model was converted to OpenVINO from [`deepseek-ai/DeepSeek-R1-Distill-Llama-8B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) using [optimum-intel](https://github.com/huggingface/optimum-intel) via the [export](https://huggingface.co/spaces/echarlaix/openvino-export) space. First make sure you have optimum-intel installed: ```bash pip install optimum[openvino] ``` To load the model, run the following: ```python from optimum.intel import OVModelForCausalLM model_id = "fakezeta/DeepSeek-R1-Distill-Llama-8B-ov-int8" model = OVModelForCausalLM.from_pretrained(model_id) ```
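A short follow-up sketch for generating text once the model is loaded as above; the prompt and token budget are illustrative:

```python
from transformers import AutoTokenizer

# Continues from the loading snippet above (model_id and model already defined).
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("What is the capital of France?", return_tensors="pt")
# OVModelForCausalLM exposes the usual transformers generate() interface.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```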
Denn231/internal_clf_v1
Denn231
2025-01-31T09:51:00Z
15
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-01-31T09:50:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q5_K_M-GGUF
roleplaiapp
2025-01-31T09:48:51Z
8
0
transformers
[ "transformers", "gguf", "32b", "5-bit", "Q5_K_M", "cyberagent", "deepseek", "distill", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T09:47:25Z
--- library_name: transformers pipeline_tag: text-generation tags: - 32b - 5-bit - Q5_K_M - cyberagent - deepseek - distill - gguf - japanese - llama-cpp - qwen - text-generation --- # roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q5_K_M-GGUF **Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q5_K_M-GGUF` **Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf` **Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-Q5_K_M.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q5_K_M` ## Overview This is a GGUF Q5_K_M quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
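The card above names the quantized file but includes no loading example. A minimal sketch with llama-cpp-python — one assumed choice of GGUF runtime; the repo and file names are taken from the card, while the prompt, context size, and generation settings are placeholders:

```python
# Download the quantized file from the Hub and run a short completion
# with llama-cpp-python. Repo id and filename come from the card above.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q5_K_M-GGUF",
    filename="cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-Q5_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)  # context size is an assumption
out = llm("こんにちは。自己紹介をしてください。", max_tokens=128)  # placeholder Japanese prompt
print(out["choices"][0]["text"])
```

The same pattern applies to the Q4_K_S, Q4_K_M, Q3_K_S, and Q3_K_M repos below; only the repo id and filename change.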
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q4_K_S-GGUF
roleplaiapp
2025-01-31T09:46:42Z
6
0
transformers
[ "transformers", "gguf", "32b", "4-bit", "Q4_K_S", "cyberagent", "deepseek", "distill", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T09:45:31Z
--- library_name: transformers pipeline_tag: text-generation tags: - 32b - 4-bit - Q4_K_S - cyberagent - deepseek - distill - gguf - japanese - llama-cpp - qwen - text-generation --- # roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q4_K_S-GGUF **Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q4_K_S-GGUF` **Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf` **Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-Q4_K_S.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q4_K_S` ## Overview This is a GGUF Q4_K_S quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
Denn231/external_clf_v1
Denn231
2025-01-31T09:46:24Z
8
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-01-31T09:43:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
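The auto-generated card leaves the usage section blank. A minimal sketch for trying this checkpoint as a text classifier — the input sentence is a placeholder, and since the card documents no label set, the returned labels are whatever the classification head was trained on:

```python
# Load the BERT text classifier from the Hub and score one example sentence.
from transformers import pipeline

clf = pipeline("text-classification", model="Denn231/external_clf_v1")
print(clf("This is a sample input."))  # e.g. [{'label': 'LABEL_0', 'score': 0.97}]
```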
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q4_K_M-GGUF
roleplaiapp
2025-01-31T09:44:47Z
28
0
transformers
[ "transformers", "gguf", "32b", "4-bit", "Q4_K_M", "cyberagent", "deepseek", "distill", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T09:43:30Z
--- library_name: transformers pipeline_tag: text-generation tags: - 32b - 4-bit - Q4_K_M - cyberagent - deepseek - distill - gguf - japanese - llama-cpp - qwen - text-generation --- # roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q4_K_M-GGUF **Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q4_K_M-GGUF` **Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf` **Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-Q4_K_M.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q4_K_M` ## Overview This is a GGUF Q4_K_M quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q3_K_S-GGUF
roleplaiapp
2025-01-31T09:42:47Z
6
0
transformers
[ "transformers", "gguf", "3-bit", "32b", "Q3_K_S", "cyberagent", "deepseek", "distill", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T09:41:57Z
--- library_name: transformers pipeline_tag: text-generation tags: - 3-bit - 32b - Q3_K_S - cyberagent - deepseek - distill - gguf - japanese - llama-cpp - qwen - text-generation --- # roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q3_K_S-GGUF **Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q3_K_S-GGUF` **Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf` **Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-Q3_K_S.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q3_K_S` ## Overview This is a GGUF Q3_K_S quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
amarsaikhan/food_classifier_2025_01_31_00_04
amarsaikhan
2025-01-31T09:42:20Z
190
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-01-31T06:05:12Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: food_classifier_2025_01_31_00_04 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # food_classifier_2025_01_31_00_04 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4920 - Accuracy: 0.8763 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 4 - total_train_batch_size: 2048 - total_eval_batch_size: 512 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.9401 | 1.0 | 37 | 3.1519 | 0.7044 | | 1.5951 | 2.0 | 74 | 1.1581 | 0.7973 | | 0.916 | 3.0 | 111 | 0.7583 | 0.8228 | | 0.7189 | 4.0 | 148 | 0.6624 | 0.8371 | | 0.5926 | 5.0 | 185 | 0.6070 | 0.8476 | | 0.5456 | 6.0 | 222 | 0.5709 | 0.8553 | | 0.4675 | 7.0 | 259 | 0.5564 | 0.8572 | | 0.4246 | 8.0 | 296 | 0.5465 | 0.8602 | | 0.3732 | 9.0 | 333 | 0.5401 | 0.8627 | | 0.333 | 10.0 | 370 | 0.5197 | 0.8671 | | 0.3067 | 11.0 | 407 | 0.5077 | 0.8712 | | 0.2872 | 12.0 | 444 | 0.5090 | 0.8702 | | 0.2537 | 13.0 | 481 | 0.5066 | 0.8761 | | 0.2496 | 14.0 | 518 | 0.5004 | 0.8750 | | 0.2282 | 15.0 | 555 | 0.4920 | 0.8763 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1 - Datasets 2.19.1 - Tokenizers 0.21.0
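A minimal inference sketch for the fine-tuned ViT above. The model id comes from the card; the image path is a placeholder, and because the training dataset is not named, the label vocabulary is an assumption (likely food categories, given the model name):

```python
# Classify a local image with the fine-tuned ViT food classifier.
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="amarsaikhan/food_classifier_2025_01_31_00_04",
)
image = Image.open("food.jpg")  # placeholder path to any food photo
print(classifier(image, top_k=3))  # top three predicted labels with scores
```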
roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q3_K_M-GGUF
roleplaiapp
2025-01-31T09:41:14Z
11
0
transformers
[ "transformers", "gguf", "3-bit", "32b", "Q3_K_M", "cyberagent", "deepseek", "distill", "japanese", "llama-cpp", "qwen", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T09:40:13Z
--- library_name: transformers pipeline_tag: text-generation tags: - 3-bit - 32b - Q3_K_M - cyberagent - deepseek - distill - gguf - japanese - llama-cpp - qwen - text-generation --- # roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q3_K_M-GGUF **Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q3_K_M-GGUF` **Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf` **Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-Q3_K_M.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q3_K_M` ## Overview This is a GGUF Q3_K_M quantized version of cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
lesso10/cb7264b5-d439-4f61-b4be-dc1dff101087
lesso10
2025-01-31T09:40:19Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
2025-01-31T09:39:05Z
--- library_name: peft base_model: fxmarty/tiny-llama-fast-tokenizer tags: - axolotl - generated_from_trainer model-index: - name: cb7264b5-d439-4f61-b4be-dc1dff101087 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: fxmarty/tiny-llama-fast-tokenizer bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - fee8d932af6f9203_train_data.json ds_type: json format: custom path: /workspace/input_data/fee8d932af6f9203_train_data.json type: field_instruction: abstract field_output: title format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso10/cb7264b5-d439-4f61-b4be-dc1dff101087 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/fee8d932af6f9203_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 1cfdf75e-d1c9-419a-b338-98971e8ecff0 wandb_project: new-01-29 wandb_run: your_name wandb_runid: 1cfdf75e-d1c9-419a-b338-98971e8ecff0 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # cb7264b5-d439-4f61-b4be-dc1dff101087 This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.3736 | 0.6436 | 200 | 10.3779 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
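Because this repo holds a LoRA adapter rather than full model weights, it is loaded on top of its base model with PEFT. A minimal sketch — the base and adapter ids come from the card; the prompt and generation settings are placeholders, and the ~10.38 eval loss on a tiny base model suggests a pipeline test rather than a useful generator:

```python
# Attach the LoRA adapter to its tiny Llama base model and generate.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "fxmarty/tiny-llama-fast-tokenizer"
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "lesso10/cb7264b5-d439-4f61-b4be-dc1dff101087")
tok = AutoTokenizer.from_pretrained(base_id)

inputs = tok("An example abstract to turn into a title.", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```

The sibling adapters below (nat-hunt, nttx, robiulawaldev, shibajustfor, baby-dev, mrferr3t) load the same way, swapping in their own base-model and adapter repo ids.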
nat-hunt/d931f3f2-4874-45e2-a822-77750c90ca54
nat-hunt
2025-01-31T09:38:26Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
2025-01-31T09:38:03Z
--- library_name: peft base_model: fxmarty/tiny-llama-fast-tokenizer tags: - axolotl - generated_from_trainer model-index: - name: d931f3f2-4874-45e2-a822-77750c90ca54 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: fxmarty/tiny-llama-fast-tokenizer bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - fee8d932af6f9203_train_data.json ds_type: json format: custom path: /workspace/input_data/fee8d932af6f9203_train_data.json type: field_instruction: abstract field_output: title format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: nat-hunt/d931f3f2-4874-45e2-a822-77750c90ca54 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/fee8d932af6f9203_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 1cfdf75e-d1c9-419a-b338-98971e8ecff0 wandb_project: Birthday-SN56-4-Gradients-On-Demand wandb_run: your_name wandb_runid: 1cfdf75e-d1c9-419a-b338-98971e8ecff0 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # d931f3f2-4874-45e2-a822-77750c90ca54 This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3776 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0032 | 1 | 10.3794 | | 10.3771 | 0.0418 | 13 | 10.3788 | | 10.3768 | 0.0837 | 26 | 10.3780 | | 10.3731 | 0.1255 | 39 | 10.3776 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
nttx/91ab1cb8-cd3b-4314-b2aa-2c9e76d512e1
nttx
2025-01-31T09:38:12Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
2025-01-31T09:37:47Z
--- library_name: peft base_model: fxmarty/tiny-llama-fast-tokenizer tags: - axolotl - generated_from_trainer model-index: - name: 91ab1cb8-cd3b-4314-b2aa-2c9e76d512e1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: fxmarty/tiny-llama-fast-tokenizer bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - fee8d932af6f9203_train_data.json ds_type: json format: custom path: /workspace/input_data/fee8d932af6f9203_train_data.json type: field_instruction: abstract field_output: title format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: nttx/91ab1cb8-cd3b-4314-b2aa-2c9e76d512e1 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/fee8d932af6f9203_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 1cfdf75e-d1c9-419a-b338-98971e8ecff0 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 1cfdf75e-d1c9-419a-b338-98971e8ecff0 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 91ab1cb8-cd3b-4314-b2aa-2c9e76d512e1 This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3753 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 156 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.3777 | 0.9968 | 155 | 10.3753 | | 18.171 | 1.0048 | 156 | 10.3753 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
robiulawaldev/b804fc6d-0baa-4563-8bf8-775b2a8a74cb
robiulawaldev
2025-01-31T09:37:46Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
2025-01-31T09:37:23Z
--- library_name: peft base_model: fxmarty/tiny-llama-fast-tokenizer tags: - axolotl - generated_from_trainer model-index: - name: b804fc6d-0baa-4563-8bf8-775b2a8a74cb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: fxmarty/tiny-llama-fast-tokenizer bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - fee8d932af6f9203_train_data.json ds_type: json format: custom path: /workspace/input_data/fee8d932af6f9203_train_data.json type: field_instruction: abstract field_output: title format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: robiulawaldev/b804fc6d-0baa-4563-8bf8-775b2a8a74cb hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: constant max_steps: 55 micro_batch_size: 2 mlflow_experiment_name: /tmp/fee8d932af6f9203_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 1cfdf75e-d1c9-419a-b338-98971e8ecff0 wandb_project: Birthday-SN56-36-Gradients-On-Demand wandb_run: your_name wandb_runid: 1cfdf75e-d1c9-419a-b338-98971e8ecff0 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # b804fc6d-0baa-4563-8bf8-775b2a8a74cb This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3552 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 5 - training_steps: 55 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0016 | 1 | 10.3792 | | 10.3779 | 0.0225 | 14 | 10.3762 | | 10.3729 | 0.0451 | 28 | 10.3701 | | 10.3618 | 0.0676 | 42 | 10.3552 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
shibajustfor/b0092ca9-0510-42eb-b7a6-1e347e0d6efa
shibajustfor
2025-01-31T09:37:45Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
2025-01-31T09:37:25Z
--- library_name: peft base_model: fxmarty/tiny-llama-fast-tokenizer tags: - axolotl - generated_from_trainer model-index: - name: b0092ca9-0510-42eb-b7a6-1e347e0d6efa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: fxmarty/tiny-llama-fast-tokenizer bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - fee8d932af6f9203_train_data.json ds_type: json format: custom path: /workspace/input_data/fee8d932af6f9203_train_data.json type: field_instruction: abstract field_output: title format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: shibajustfor/b0092ca9-0510-42eb-b7a6-1e347e0d6efa hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/fee8d932af6f9203_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 1cfdf75e-d1c9-419a-b338-98971e8ecff0 wandb_project: Birthday-SN56-11-Gradients-On-Demand wandb_run: your_name wandb_runid: 1cfdf75e-d1c9-419a-b338-98971e8ecff0 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # b0092ca9-0510-42eb-b7a6-1e347e0d6efa This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3777 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0032 | 1 | 10.3794 | | 10.3771 | 0.0418 | 13 | 10.3788 | | 10.3769 | 0.0837 | 26 | 10.3781 | | 10.3731 | 0.1255 | 39 | 10.3777 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
baby-dev/d54a57e0-d2d7-44c1-879a-158c9e383012
baby-dev
2025-01-31T09:37:44Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
2025-01-31T09:37:19Z
--- library_name: peft base_model: fxmarty/tiny-llama-fast-tokenizer tags: - axolotl - generated_from_trainer model-index: - name: d54a57e0-d2d7-44c1-879a-158c9e383012 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: fxmarty/tiny-llama-fast-tokenizer bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - fee8d932af6f9203_train_data.json ds_type: json format: custom path: /workspace/input_data/fee8d932af6f9203_train_data.json type: field_instruction: abstract field_output: title format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: baby-dev/d54a57e0-d2d7-44c1-879a-158c9e383012 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 2 mlflow_experiment_name: /tmp/fee8d932af6f9203_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 1cfdf75e-d1c9-419a-b338-98971e8ecff0 wandb_project: SN56-41 wandb_run: your_name wandb_runid: 1cfdf75e-d1c9-419a-b338-98971e8ecff0 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # d54a57e0-d2d7-44c1-879a-158c9e383012 This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3750 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.3773 | 0.0032 | 1 | 10.3794 | | 10.3595 | 0.0805 | 25 | 10.3782 | | 10.3734 | 0.1609 | 50 | 10.3765 | | 10.3685 | 0.2414 | 75 | 10.3753 | | 10.3867 | 0.3218 | 100 | 10.3750 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso07/8437c70e-7cb2-42e9-8781-87e2c0558b05
lesso07
2025-01-31T09:36:07Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-7b-hf", "base_model:adapter:NousResearch/CodeLlama-7b-hf", "region:us" ]
null
2025-01-31T08:16:34Z
--- library_name: peft base_model: NousResearch/CodeLlama-7b-hf tags: - axolotl - generated_from_trainer model-index: - name: 8437c70e-7cb2-42e9-8781-87e2c0558b05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-7b-hf bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - a80f531073244c9f_train_data.json ds_type: json format: custom path: /workspace/input_data/a80f531073244c9f_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso07/8437c70e-7cb2-42e9-8781-87e2c0558b05 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/a80f531073244c9f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 846f22c8-74e1-47e8-9e98-11b3498ed786 wandb_project: new-01-29 wandb_run: your_name wandb_runid: 846f22c8-74e1-47e8-9e98-11b3498ed786 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 8437c70e-7cb2-42e9-8781-87e2c0558b05 This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf](https://huggingface.co/NousResearch/CodeLlama-7b-hf) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.2515 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 9.3849 | 0.0327 | 200 | 2.2515 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mrferr3t/cad4ab75-0c0c-4c0c-b709-2a2f44411751
mrferr3t
2025-01-31T09:35:09Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:huggyllama/llama-7b", "base_model:adapter:huggyllama/llama-7b", "license:other", "region:us" ]
null
2025-01-31T09:27:03Z
--- library_name: peft license: other base_model: huggyllama/llama-7b tags: - axolotl - generated_from_trainer model-index: - name: cad4ab75-0c0c-4c0c-b709-2a2f44411751 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: huggyllama/llama-7b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 374d415fa346ac2b_train_data.json ds_type: json format: custom path: /workspace/input_data/374d415fa346ac2b_train_data.json type: field_input: prompt_setting field_instruction: prompt field_output: completion format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 50 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/cad4ab75-0c0c-4c0c-b709-2a2f44411751 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0005 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 99 micro_batch_size: 2 mlflow_experiment_name: /tmp/374d415fa346ac2b_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 300 saves_per_epoch: 0 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 4f915252-86ce-4fac-8a8b-ab5ecbcf4eac wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 4f915252-86ce-4fac-8a8b-ab5ecbcf4eac warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # cad4ab75-0c0c-4c0c-b709-2a2f44411751 This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.4933 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 99 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.8875 | 0.0001 | 1 | 2.6115 | | 0.2348 | 0.0063 | 50 | 0.4933 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
Tarek07/Progenitor-V1.2-LLaMa-70B
Tarek07
2025-01-31T09:34:24Z
140
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2408.07990", "base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1", "base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1", "base_model:Sao10K/70B-L3.3-Cirrus-x1", "base_model:merge:Sao10K/70B-L3.3-Cirrus-x1", "base_model:Sao10K/L3.1-70B-Hanami-x1", "base_model:merge:Sao10K/L3.1-70B-Hanami-x1", "base_model:SicariusSicariiStuff/Negative_LLAMA_70B", "base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B", "base_model:TheDrummer/Anubis-70B-v1", "base_model:merge:TheDrummer/Anubis-70B-v1", "base_model:nbeerbower/Llama-3.1-Nemotron-lorablated-70B", "base_model:merge:nbeerbower/Llama-3.1-Nemotron-lorablated-70B", "license:llama3.3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-28T07:21:07Z
--- base_model: - TheDrummer/Anubis-70B-v1 - Sao10K/L3.1-70B-Hanami-x1 - EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1 - nbeerbower/Llama-3.1-Nemotron-lorablated-70B - SicariusSicariiStuff/Negative_LLAMA_70B - Sao10K/70B-L3.3-Cirrus-x1 library_name: transformers tags: - mergekit - merge license: llama3.3 --- Through my wanderings around Hugging Face I came across a model merging method I had not seen before and decided to test it out using the ingredients from my Progenitor merges. I am not sure if it's because of SicariusSicariiStuff/Negative_LLAMA_70B as the pivot model, but it seems a lot 'hornier'. Its style is nice, but I am not sure if it outright beats Progenitor 1.1 (technically it is the same ingredients, only mixed differently). # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [nbeerbower/Llama-3.1-Nemotron-lorablated-70B](https://huggingface.co/nbeerbower/Llama-3.1-Nemotron-lorablated-70B) as a base. ### Models Merged The following models were included in the merge: * [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1) * [Sao10K/L3.1-70B-Hanami-x1](https://huggingface.co/Sao10K/L3.1-70B-Hanami-x1) * [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1) * [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B) * [Sao10K/70B-L3.3-Cirrus-x1](https://huggingface.co/Sao10K/70B-L3.3-Cirrus-x1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: # Pivot model - model: SicariusSicariiStuff/Negative_LLAMA_70B # Target models - model: Sao10K/70B-L3.3-Cirrus-x1 - model: Sao10K/L3.1-70B-Hanami-x1 - model: TheDrummer/Anubis-70B-v1 - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1 merge_method: sce base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B parameters: select_topk: 1.0 dtype: bfloat16 ```
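The YAML above is the complete merge recipe, so reproducing the model is a matter of feeding it to mergekit. A sketch using mergekit's Python entry points (MergeConfiguration / run_merge, as shown in the mergekit README — treat the exact option names as assumptions), with the card's YAML saved as config.yaml:

```python
# Re-run the SCE merge from the YAML recipe in the card above.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Progenitor-V1.2-LLaMa-70B",  # assumed output directory
    options=MergeOptions(cuda=True, copy_tokenizer=True),  # assumed options
)
```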
NikolayKozloff/DeepSeek-R1-Distill-Qwen-14B-Multilingual-Q4_K_M-GGUF
NikolayKozloff
2025-01-31T09:34:16Z
278
1
null
[ "gguf", "reasoning", "llama-cpp", "gguf-my-repo", "am", "ar", "bn", "zh", "cs", "nl", "en", "fr", "de", "el", "ha", "he", "hi", "id", "it", "ja", "jv", "km", "ko", "lo", "ms", "mr", "fa", "pl", "pt", "ro", "ru", "es", "sw", "sv", "tl", "ta", "te", "th", "tr", "uk", "ur", "vi", "dataset:lightblue/reasoning-multilingual-R1-Llama-70B-train", "base_model:lightblue/DeepSeek-R1-Distill-Qwen-14B-Multilingual", "base_model:quantized:lightblue/DeepSeek-R1-Distill-Qwen-14B-Multilingual", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-31T09:33:36Z
--- language: - am - ar - bn - zh - cs - nl - en - fr - de - el - ha - he - hi - id - it - ja - jv - km - ko - lo - ms - mr - fa - pl - pt - ro - ru - es - sw - sv - tl - ta - te - th - tr - uk - ur - vi license: apache-2.0 datasets: - lightblue/reasoning-multilingual-R1-Llama-70B-train tags: - reasoning - llama-cpp - gguf-my-repo base_model: lightblue/DeepSeek-R1-Distill-Qwen-14B-Multilingual --- # NikolayKozloff/DeepSeek-R1-Distill-Qwen-14B-Multilingual-Q4_K_M-GGUF This model was converted to GGUF format from [`lightblue/DeepSeek-R1-Distill-Qwen-14B-Multilingual`](https://huggingface.co/lightblue/DeepSeek-R1-Distill-Qwen-14B-Multilingual) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/lightblue/DeepSeek-R1-Distill-Qwen-14B-Multilingual) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/DeepSeek-R1-Distill-Qwen-14B-Multilingual-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-multilingual-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/DeepSeek-R1-Distill-Qwen-14B-Multilingual-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-multilingual-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo NikolayKozloff/DeepSeek-R1-Distill-Qwen-14B-Multilingual-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-multilingual-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo NikolayKozloff/DeepSeek-R1-Distill-Qwen-14B-Multilingual-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-multilingual-q4_k_m.gguf -c 2048 ```