Dataset columns (type and observed range):

| Column | Type | Range / values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-27 12:29:05 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 500 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-27 12:27:55 |
| card | string | length 11 to 1.01M |
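Each row pairs repository metadata with the raw model card markdown in the `card` column. A minimal, hedged sketch of how a dump with this schema can be explored, assuming it has been exported to a local Parquet file (the file name below is hypothetical):

```python
# Hedged inspection sketch; "model_cards.parquet" is a hypothetical local export
# of a dump with the columns listed in the table above.
import pandas as pd

df = pd.read_parquet("model_cards.parquet")

# Columns follow the schema above: modelId, author, last_modified, downloads,
# likes, library_name, tags, pipeline_tag, createdAt, card.
print(df.dtypes)

# Most-downloaded PEFT adapters in the dump.
peft_rows = df[df["library_name"] == "peft"]
print(peft_rows.nlargest(5, "downloads")[["modelId", "downloads", "likes"]])

# The "card" column holds the full model card markdown, including YAML front matter.
print(df.iloc[0]["card"][:500])
```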
modelId: great0001/39172976-3d6e-4aa0-91ed-4e07c9a0db66
author: great0001 · last_modified: 2025-02-05T19:51:12Z · downloads: 8 · likes: 0 · library_name: peft
tags: [ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B", "base_model:adapter:unsloth/Qwen2-0.5B", "license:apache-2.0", "region:us" ]
pipeline_tag: null · createdAt: 2025-02-05T19:43:35Z
card:
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-0.5B tags: - axolotl - generated_from_trainer model-index: - name: 39172976-3d6e-4aa0-91ed-4e07c9a0db66 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 39172976-3d6e-4aa0-91ed-4e07c9a0db66 This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.6360 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
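The record above (like most `peft` records in this dump) describes a LoRA adapter; its auto-generated card names the base model but contains no usage snippet. A minimal loading sketch, not taken from the card, assuming the repository holds a standard PEFT adapter for unsloth/Qwen2-0.5B:

```python
# Hedged usage sketch (not part of the original card): loading the LoRA adapter
# on top of its unsloth/Qwen2-0.5B base model with transformers + peft.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "great0001/39172976-3d6e-4aa0-91ed-4e07c9a0db66"

# AutoPeftModelForCausalLM reads the adapter config, loads the base model named
# there (unsloth/Qwen2-0.5B), and attaches the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-0.5B")

prompt = "Write a haiku about fine-tuning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern should apply to the other `peft` records below, substituting the adapter id and the base model named in each card.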
modelId: lesso/62988ae5-e8bc-40f6-8f9a-c8982c7c747a
author: lesso · last_modified: 2025-02-05T19:50:24Z · downloads: 8 · likes: 0 · library_name: peft
tags: [ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Coder-7B", "base_model:adapter:unsloth/Qwen2.5-Coder-7B", "license:apache-2.0", "region:us" ]
pipeline_tag: null · createdAt: 2025-02-05T19:00:59Z
card:
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Coder-7B tags: - axolotl - generated_from_trainer model-index: - name: 62988ae5-e8bc-40f6-8f9a-c8982c7c747a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Coder-7B bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - c4c32052f3d9a968_train_data.json ds_type: json format: custom path: /workspace/input_data/c4c32052f3d9a968_train_data.json type: field_input: categories field_instruction: title field_output: abstract format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/62988ae5-e8bc-40f6-8f9a-c8982c7c747a hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001006 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/c4c32052f3d9a968_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 05a0cabd-9548-42b0-8549-208720261f88 wandb_project: new-06 wandb_run: your_name wandb_runid: 05a0cabd-9548-42b0-8549-208720261f88 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 62988ae5-e8bc-40f6-8f9a-c8982c7c747a This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.0453 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001006 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.7954 | 0.0001 | 1 | 2.1423 | | 1.7728 | 0.0050 | 50 | 2.1203 | | 1.495 | 0.0099 | 100 | 2.0559 | | 1.5707 | 0.0149 | 150 | 2.0463 | | 1.665 | 0.0198 | 200 | 2.0453 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: lesso/726b0f57-6266-4626-98a5-180a502055fd
author: lesso · last_modified: 2025-02-05T19:49:40Z · downloads: 8 · likes: 0 · library_name: peft
tags: [ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Coder-7B", "base_model:adapter:unsloth/Qwen2.5-Coder-7B", "license:apache-2.0", "region:us" ]
pipeline_tag: null · createdAt: 2025-02-05T19:00:45Z
card:
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Coder-7B tags: - axolotl - generated_from_trainer model-index: - name: 726b0f57-6266-4626-98a5-180a502055fd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Coder-7B bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - c4c32052f3d9a968_train_data.json ds_type: json format: custom path: /workspace/input_data/c4c32052f3d9a968_train_data.json type: field_input: categories field_instruction: title field_output: abstract format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/726b0f57-6266-4626-98a5-180a502055fd hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.00010017 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/c4c32052f3d9a968_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 05a0cabd-9548-42b0-8549-208720261f88 wandb_project: new-17 wandb_run: your_name wandb_runid: 05a0cabd-9548-42b0-8549-208720261f88 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 726b0f57-6266-4626-98a5-180a502055fd This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.0452 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00010017 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.7954 | 0.0001 | 1 | 2.1423 | | 1.7649 | 0.0050 | 50 | 2.1225 | | 1.4931 | 0.0099 | 100 | 2.0556 | | 1.5703 | 0.0149 | 150 | 2.0464 | | 1.6604 | 0.0198 | 200 | 2.0452 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: JacksonBrune/7abc07b1-0f8d-433c-93e2-330aa2adc029
author: JacksonBrune · last_modified: 2025-02-05T19:48:37Z · downloads: 8 · likes: 0 · library_name: peft
tags: [ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B", "base_model:adapter:unsloth/Qwen2-0.5B", "license:apache-2.0", "region:us" ]
pipeline_tag: null · createdAt: 2025-02-05T19:43:30Z
card:
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-0.5B tags: - axolotl - generated_from_trainer model-index: - name: 7abc07b1-0f8d-433c-93e2-330aa2adc029 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 7abc07b1-0f8d-433c-93e2-330aa2adc029 This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5979 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: trenden/194b3471-dfde-4f3e-8c04-b9a944b5252c
author: trenden · last_modified: 2025-02-05T19:48:11Z · downloads: 9 · likes: 0 · library_name: peft
tags: [ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B", "base_model:adapter:unsloth/Qwen2-0.5B", "license:apache-2.0", "region:us" ]
pipeline_tag: null · createdAt: 2025-02-05T19:43:19Z
card:
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-0.5B tags: - axolotl - generated_from_trainer model-index: - name: 194b3471-dfde-4f3e-8c04-b9a944b5252c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 194b3471-dfde-4f3e-8c04-b9a944b5252c This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5988 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: lesso/2202a97d-2969-4dc7-a34d-d90e1994823f
author: lesso · last_modified: 2025-02-05T19:47:56Z · downloads: 8 · likes: 0 · library_name: peft
tags: [ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B", "base_model:adapter:unsloth/Qwen2-0.5B", "license:apache-2.0", "region:us" ]
pipeline_tag: null · createdAt: 2025-02-05T19:43:31Z
card:
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-0.5B tags: - axolotl - generated_from_trainer model-index: - name: 2202a97d-2969-4dc7-a34d-d90e1994823f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-0.5B bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - a23c267280dd76ad_train_data.json ds_type: json format: custom path: /workspace/input_data/a23c267280dd76ad_train_data.json type: field_input: '' field_instruction: title field_output: text format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/2202a97d-2969-4dc7-a34d-d90e1994823f hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001007 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/a23c267280dd76ad_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7e431e7a-ba1b-4142-8012-4e0289398278 wandb_project: new-07 wandb_run: your_name wandb_runid: 7e431e7a-ba1b-4142-8012-4e0289398278 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 2202a97d-2969-4dc7-a34d-d90e1994823f This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.5093 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001007 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.48 | 0.0030 | 1 | 2.5950 | | 2.2428 | 0.1504 | 50 | 2.5536 | | 2.2011 | 0.3008 | 100 | 2.5269 | | 1.8945 | 0.4511 | 150 | 2.5148 | | 2.2041 | 0.6015 | 200 | 2.5093 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: daniel40/2a7df9f9-4301-45d0-811a-046d15633398
author: daniel40 · last_modified: 2025-02-05T19:47:40Z · downloads: 8 · likes: 0 · library_name: peft
tags: [ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B", "base_model:adapter:unsloth/Qwen2-0.5B", "license:apache-2.0", "region:us" ]
pipeline_tag: null · createdAt: 2025-02-05T19:44:09Z
card:
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-0.5B tags: - axolotl - generated_from_trainer model-index: - name: 2a7df9f9-4301-45d0-811a-046d15633398 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 2a7df9f9-4301-45d0-811a-046d15633398 This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.4765 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: EpistemeAI/Reasoning-Llama-3.2-1B-Instruct-v1.3
author: EpistemeAI · last_modified: 2025-02-05T19:47:39Z · downloads: 26 · likes: 0 · library_name: transformers
tags: [ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "dataset:open-thoughts/OpenThoughts-114k", "base_model:EpistemeAI/Reasoning-Llama-3.2-1B-Instruct-v1.2", "base_model:finetune:EpistemeAI/Reasoning-Llama-3.2-1B-Instruct-v1.2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation · createdAt: 2025-02-04T23:49:40Z
card:
--- base_model: EpistemeAI/Reasoning-Llama-3.2-1B-Instruct-v1.2 tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en datasets: - open-thoughts/OpenThoughts-114k --- ## Introduction Introducing Reasoning Llama 3.2 1B: The Next Evolution in Conversational AI We are thrilled to unveil Reasoning Llama 3.2, the latest advancement in our suite of AI models. Building upon the robust foundation of the renowned Llama series, Reasoning Llama 3.2 introduces the groundbreaking Chain of Thought (CoT) capabilities, elevating its reasoning prowess to new heights. ## Key Features of Reasoning Llama 3.2 1B: Enhanced Chain of Thought Reasoning: At the core of Reasoning Llama 3.2 lies its sophisticated CoT framework, enabling the model to perform multi-step reasoning with greater accuracy and coherence. This ensures more reliable and contextually appropriate responses, especially for complex queries that require logical progression. Conversational Excellence: Designed with interactivity in mind, Reasoning Llama 3.2 excels in maintaining engaging and fluid conversations. Whether it's casual dialogue or in-depth discussions, the model adapts seamlessly to various conversational styles, providing users with a natural and intuitive interaction experience. Instruction-Supervised Fine-Tuning: Leveraging advanced supervised fine-tuning techniques, Reasoning Llama 3.2 has been meticulously trained on diverse instructional data. This fine-tuning process enhances the model's ability to understand and execute user instructions with precision, making it an invaluable tool for a wide range of applications. Unsloth Integration: Incorporating Unsloth, our proprietary unsupervised learning framework, Reasoning Llama 3.2 benefits from continuous learning capabilities. This integration allows the model to adapt and improve over time, ensuring it remains up-to-date with evolving language patterns and user needs without the constant need for manual intervention. Quick Inference reasoning 1B model. ## Why Choose Reasoning Llama 3.2 1B? Reasoning Llama 3.2 stands out as a versatile and powerful AI solution tailored for both developers and end-users. Its combination of advanced reasoning, conversational intelligence, and adaptive learning mechanisms make it ideally suited for applications ranging from customer support and virtual assistants to educational tools and creative content generation. As we continue to push the boundaries of artificial intelligence, Reasoning Llama 3.2 exemplifies our commitment to delivering state-of-the-art models that empower users with intelligent, reliable, and user-friendly technology. Experience the future of conversational AI with Reasoning Llama 3.2 and unlock new possibilities in human-machine interaction. ## How to use Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. 
```python import torch from transformers import pipeline model_id = "EpistemeAI/Reasoning-Llama-3.2-1B-Instruct-v1.3" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a powerful AI super conscious emotional assistant"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=4048, ) print(outputs[0]["generated_text"][-1]) ``` # Use a pipeline as a high-level helper ```python from transformers import pipeline messages = [ {"role": "user", "content": "Who are you?"}, ] pipe = pipeline("text-generation", model="EpistemeAI/Reasoning-Llama-3.2-1B-Instruct-v1.3") pipe(messages) ``` ### vLLM # Call the server using curl: ```bash pip install vllm # Load and run the model: vllm serve "EpistemeAI/Reasoning-Llama-3.2-1B-Instruct-v1.3" curl -X POST "http://localhost:8000/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "EpistemeAI/Reasoning-Llama-3.2-1B-Instruct-v1.3", "messages": [ { "role": "user", "content": "What is the capital of France?" } ] }' ``` ## 5. Citation ``` @misc{EpistemeAI2025, author={Thomas Yiu}, year={2025}, } @misc{bespoke_stratos, author = {Bespoke Labs}, title = {Bespoke-Stratos: The unreasonable effectiveness of reasoning distillation}, howpublished = {https://www.bespokelabs.ai/blog/bespoke-stratos-the-unreasonable-effectiveness-of-reasoning-distillation}, note = {Accessed: 2025-01-22}, year = {2025} } @misc{numina_math_datasets, author = {Jia LI, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Costa Huang, Kashif Rasul, Longhui Yu, Albert Jiang, Ziju Shen, Zihan Qin, Bin Dong, Li Zhou, Yann Fleureau, Guillaume Lample, and Stanislas Polu}, title = {NuminaMath TIR}, year = {2024}, publisher = {Numina}, journal = {Hugging Face repository}, howpublished = {\url{[https://huggingface.co/AI-MO/NuminaMath-TIR](https://github.com/project-numina/aimo-progress-prize/blob/main/report/numina_dataset.pdf)}} } ``` # Uploaded model - **Developed by:** EpistemeAI - **License:** apache-2.0 - **Finetuned from model :** EpistemeAI/Reasoning-Llama-3.2-1B-Instruct-v1.2 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
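The card's prose also mentions running inference through the Auto classes with `generate()`, which its snippets do not show. A minimal hedged sketch of that path, not taken from the card:

```python
# Hedged sketch of the Auto-classes path the card mentions (not part of the card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/Reasoning-Llama-3.2-1B-Instruct-v1.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]
# apply_chat_template builds the chat prompt from the messages list.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```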
modelId: twodigit/kgrammar01
author: twodigit · last_modified: 2025-02-05T19:46:18Z · downloads: 13 · likes: 0 · library_name: transformers
tags: [ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation · createdAt: 2025-02-05T19:41:13Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: lesso/7df449fc-2a27-405a-84e1-a1458f683d21
author: lesso · last_modified: 2025-02-05T19:46:02Z · downloads: 6 · likes: 0 · library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/llama-3-8b-Instruct", "base_model:adapter:unsloth/llama-3-8b-Instruct", "license:llama3", "region:us" ]
pipeline_tag: null · createdAt: 2025-02-05T19:30:13Z
card:
--- library_name: peft license: llama3 base_model: unsloth/llama-3-8b-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 7df449fc-2a27-405a-84e1-a1458f683d21 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/llama-3-8b-Instruct bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 0864faa44b3c224c_train_data.json ds_type: json format: custom path: /workspace/input_data/0864faa44b3c224c_train_data.json type: field_instruction: label field_output: text format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/7df449fc-2a27-405a-84e1-a1458f683d21 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001008 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/0864faa44b3c224c_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 96ca7e7b-aee8-496c-876a-57ed5d8cbfd1 wandb_project: new-08 wandb_run: your_name wandb_runid: 96ca7e7b-aee8-496c-876a-57ed5d8cbfd1 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 7df449fc-2a27-405a-84e1-a1458f683d21 This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.9681 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001008 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.3647 | 0.0015 | 1 | 2.5788 | | 2.468 | 0.0726 | 50 | 2.1506 | | 1.781 | 0.1451 | 100 | 2.0482 | | 1.7878 | 0.2177 | 150 | 1.9872 | | 2.2291 | 0.2903 | 200 | 1.9681 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: bane5631/4a667654-0f3d-4c52-bc28-9e541ae0c3dd
author: bane5631 · last_modified: 2025-02-05T19:40:25Z · downloads: 8 · likes: 0 · library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:HuggingFaceM4/tiny-random-LlamaForCausalLM", "base_model:adapter:HuggingFaceM4/tiny-random-LlamaForCausalLM", "region:us" ]
pipeline_tag: null · createdAt: 2025-02-05T19:35:56Z
card:
--- library_name: peft base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM tags: - axolotl - generated_from_trainer model-index: - name: 4a667654-0f3d-4c52-bc28-9e541ae0c3dd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - c978302a8e67826e_train_data.json ds_type: json format: custom path: /workspace/input_data/c978302a8e67826e_train_data.json type: field_instruction: question field_output: answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: bane5631/4a667654-0f3d-4c52-bc28-9e541ae0c3dd hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 400 micro_batch_size: 8 mlflow_experiment_name: /tmp/c978302a8e67826e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ff44bd4b-7547-45f9-8898-65b3cd47b52e wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: ff44bd4b-7547-45f9-8898-65b3cd47b52e warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 4a667654-0f3d-4c52-bc28-9e541ae0c3dd This model is a fine-tuned version of [HuggingFaceM4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceM4/tiny-random-LlamaForCausalLM) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3213 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.3686 | 0.0003 | 1 | 10.3652 | | 10.3263 | 0.0137 | 50 | 10.3377 | | 10.309 | 0.0273 | 100 | 10.3291 | | 10.3031 | 0.0410 | 150 | 10.3242 | | 10.3093 | 0.0547 | 200 | 10.3223 | | 10.3023 | 0.0683 | 250 | 10.3219 | | 10.3302 | 0.0820 | 300 | 10.3213 | | 10.3165 | 0.0956 | 350 | 10.3212 | | 10.299 | 0.1093 | 400 | 10.3213 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: robiulawaldev/8b5abc5e-b744-40d2-9d2f-6f78323a95f1
author: robiulawaldev · last_modified: 2025-02-05T19:39:53Z · downloads: 8 · likes: 0 · library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:HuggingFaceM4/tiny-random-LlamaForCausalLM", "base_model:adapter:HuggingFaceM4/tiny-random-LlamaForCausalLM", "region:us" ]
pipeline_tag: null · createdAt: 2025-02-05T19:38:28Z
card:
--- library_name: peft base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM tags: - axolotl - generated_from_trainer model-index: - name: 8b5abc5e-b744-40d2-9d2f-6f78323a95f1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 8b5abc5e-b744-40d2-9d2f-6f78323a95f1 This model is a fine-tuned version of [HuggingFaceM4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceM4/tiny-random-LlamaForCausalLM) on the None dataset. It achieves the following results on the evaluation set: - Loss: 10.3211 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: clarxus/4b7d40f5-f947-45af-b872-65135735b155
author: clarxus · last_modified: 2025-02-05T19:39:41Z · downloads: 8 · likes: 0 · library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-360M-Instruct", "base_model:adapter:unsloth/SmolLM-360M-Instruct", "license:apache-2.0", "region:us" ]
pipeline_tag: null · createdAt: 2025-02-05T19:19:28Z
card:
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-360M-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 4b7d40f5-f947-45af-b872-65135735b155 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM-360M-Instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 989befd2b2c62411_train_data.json ds_type: json format: custom path: /workspace/input_data/989befd2b2c62411_train_data.json type: field_instruction: instruction field_output: completion format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: clarxus/4b7d40f5-f947-45af-b872-65135735b155 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 400 micro_batch_size: 8 mlflow_experiment_name: /tmp/989befd2b2c62411_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1.0e-05 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: 01d5cee7-b6ae-4cb3-899f-b160b6e7be7e wandb_project: Gradients-On-Seven wandb_run: your_name wandb_runid: 01d5cee7-b6ae-4cb3-899f-b160b6e7be7e warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 4b7d40f5-f947-45af-b872-65135735b155 This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3722 | 0.0007 | 1 | 1.6021 | | 1.4453 | 0.0338 | 50 | 1.4634 | | 1.5913 | 0.0676 | 100 | 1.4290 | | 1.381 | 0.1015 | 150 | 1.4142 | | 1.3498 | 0.1353 | 200 | 1.4044 | | 1.4807 | 0.1691 | 250 | 1.3977 | | 1.3485 | 0.2029 | 300 | 1.3946 | | 1.1972 | 0.2367 | 350 | 1.3928 | | 1.3684 | 0.2705 | 400 | 1.3922 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: oldiday/af256481-a223-42ec-acf7-29701834c5b2
author: oldiday · last_modified: 2025-02-05T19:39:04Z · downloads: 8 · likes: 0 · library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:HuggingFaceM4/tiny-random-LlamaForCausalLM", "base_model:adapter:HuggingFaceM4/tiny-random-LlamaForCausalLM", "region:us" ]
pipeline_tag: null · createdAt: 2025-02-05T19:35:53Z
card:
--- library_name: peft base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM tags: - axolotl - generated_from_trainer model-index: - name: af256481-a223-42ec-acf7-29701834c5b2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - c978302a8e67826e_train_data.json ds_type: json format: custom path: /workspace/input_data/c978302a8e67826e_train_data.json type: field_instruction: question field_output: answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: oldiday/af256481-a223-42ec-acf7-29701834c5b2 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 600 micro_batch_size: 8 mlflow_experiment_name: /tmp/c978302a8e67826e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1.0e-05 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 512 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: ff44bd4b-7547-45f9-8898-65b3cd47b52e wandb_project: Gradients-On-Six wandb_run: your_name wandb_runid: ff44bd4b-7547-45f9-8898-65b3cd47b52e warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # af256481-a223-42ec-acf7-29701834c5b2 This model is a fine-tuned version of [HuggingFaceM4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceM4/tiny-random-LlamaForCausalLM) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 600 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 10.3652 | | 10.3406 | 0.0137 | 50 | 10.3462 | | 10.3243 | 0.0273 | 100 | 10.3329 | | 10.3202 | 0.0410 | 150 | 10.3287 | | 10.3164 | 0.0547 | 200 | 10.3262 | | 10.3176 | 0.0683 | 250 | 10.3240 | | 10.3178 | 0.0820 | 300 | 10.3227 | | 10.3181 | 0.0956 | 350 | 10.3221 | | 10.3105 | 0.1093 | 400 | 10.3220 | | 10.3171 | 0.1230 | 450 | 10.3217 | | 10.3147 | 0.1366 | 500 | 10.3216 | | 10.3103 | 0.1503 | 550 | 10.3216 | | 10.3172 | 0.1640 | 600 | 10.3215 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: auxyus/9d3205d7-b077-406e-a368-5d1b099b6784
author: auxyus · last_modified: 2025-02-05T19:38:55Z · downloads: 6 · likes: 0 · library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:HuggingFaceM4/tiny-random-LlamaForCausalLM", "base_model:adapter:HuggingFaceM4/tiny-random-LlamaForCausalLM", "region:us" ]
pipeline_tag: null · createdAt: 2025-02-05T19:35:45Z
card:
--- library_name: peft base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM tags: - axolotl - generated_from_trainer model-index: - name: 9d3205d7-b077-406e-a368-5d1b099b6784 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - c978302a8e67826e_train_data.json ds_type: json format: custom path: /workspace/input_data/c978302a8e67826e_train_data.json type: field_instruction: question field_output: answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: auxyus/9d3205d7-b077-406e-a368-5d1b099b6784 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 400 micro_batch_size: 8 mlflow_experiment_name: /tmp/c978302a8e67826e_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1.0e-05 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: ff44bd4b-7547-45f9-8898-65b3cd47b52e wandb_project: Gradients-On-Two wandb_run: your_name wandb_runid: ff44bd4b-7547-45f9-8898-65b3cd47b52e warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 9d3205d7-b077-406e-a368-5d1b099b6784 This model is a fine-tuned version of [HuggingFaceM4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceM4/tiny-random-LlamaForCausalLM) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3211 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.3686 | 0.0003 | 1 | 10.3652 | | 10.3275 | 0.0137 | 50 | 10.3382 | | 10.3111 | 0.0273 | 100 | 10.3289 | | 10.3042 | 0.0410 | 150 | 10.3237 | | 10.3097 | 0.0547 | 200 | 10.3220 | | 10.303 | 0.0683 | 250 | 10.3216 | | 10.3294 | 0.0820 | 300 | 10.3211 | | 10.3133 | 0.0956 | 350 | 10.3210 | | 10.2978 | 0.1093 | 400 | 10.3211 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: linoyts/yarn_flux_700_all_attn_layers
author: linoyts · last_modified: 2025-02-05T19:38:11Z · downloads: 13 · likes: 0 · library_name: diffusers
tags: [ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
pipeline_tag: text-to-image · createdAt: 2025-02-05T19:10:56Z
card:
--- base_model: black-forest-labs/FLUX.1-dev library_name: diffusers license: other instance_prompt: a puppy, yarn art style widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - flux - flux-diffusers - template:sd-lora --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Flux DreamBooth LoRA - linoyts/yarn_flux_700_all_attn_layers <Gallery /> ## Model description These are linoyts/yarn_flux_700_all_attn_layers DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md). Was LoRA for the text encoder enabled? False. Pivotal tuning was enabled: False. ## Trigger words You should use a puppy, yarn art style to trigger the image generation. ## Download model [Download the *.safetensors LoRA](linoyts/yarn_flux_700_all_attn_layers/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('linoyts/yarn_flux_700_all_attn_layers', weight_name='pytorch_lora_weights.safetensors') image = pipeline('a puppy, yarn art style').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
modelId: robiulawaldev/f86f1484-c44b-4784-8b05-52df6a0a3156
author: robiulawaldev · last_modified: 2025-02-05T19:37:13Z · downloads: 8 · likes: 0 · library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:HuggingFaceM4/tiny-random-LlamaForCausalLM", "base_model:adapter:HuggingFaceM4/tiny-random-LlamaForCausalLM", "region:us" ]
pipeline_tag: null · createdAt: 2025-02-05T19:35:55Z
card:
--- library_name: peft base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM tags: - axolotl - generated_from_trainer model-index: - name: f86f1484-c44b-4784-8b05-52df6a0a3156 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # f86f1484-c44b-4784-8b05-52df6a0a3156 This model is a fine-tuned version of [HuggingFaceM4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceM4/tiny-random-LlamaForCausalLM) on the None dataset. It achieves the following results on the evaluation set: - Loss: 10.3213 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
baby-dev/5e03382e-6cdb-4a44-b24b-3027dc503dc4
baby-dev
2025-02-05T19:36:59Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:HuggingFaceM4/tiny-random-LlamaForCausalLM", "base_model:adapter:HuggingFaceM4/tiny-random-LlamaForCausalLM", "region:us" ]
null
2025-02-05T19:36:06Z
--- library_name: peft base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM tags: - axolotl - generated_from_trainer model-index: - name: 5e03382e-6cdb-4a44-b24b-3027dc503dc4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 5e03382e-6cdb-4a44-b24b-3027dc503dc4 This model is a fine-tuned version of [HuggingFaceM4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceM4/tiny-random-LlamaForCausalLM) on the None dataset. It achieves the following results on the evaluation set: - Loss: 10.3210 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
harvestter18/harvestter
harvestter18
2025-02-05T19:36:28Z
13
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-02-05T18:47:21Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: harvestter --- # Harvestter <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `harvestter` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('harvestter18/harvestter', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
kk-aivio/373d7ae8-f19e-4267-a33d-211001ca0e16
kk-aivio
2025-02-05T19:32:46Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Coder-1.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T19:19:31Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 373d7ae8-f19e-4267-a33d-211001ca0e16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 373d7ae8-f19e-4267-a33d-211001ca0e16 This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4412 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
tmpmodelsave/numia_verl_formatscore_step80
tmpmodelsave
2025-02-05T19:32:43Z
43
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-05T19:26:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nidhal2111/deepseekk
nidhal2111
2025-02-05T19:29:01Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-05T19:24:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ahmed-Selem/Arabic-Medical-LLM
Ahmed-Selem
2025-02-05T19:27:51Z
26
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-02-05T19:26:56Z
--- base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Ahmed-Selem - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
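The card above only describes how the model was trained. A minimal chat-style inference sketch is given below, assuming the repository holds a full 4-bit (bitsandbytes) checkpoint whose saved tokenizer includes a chat template; the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Ahmed-Selem/Arabic-Medical-LLM"

# The repo tags advertise a 4-bit bitsandbytes checkpoint, so the stored
# quantization config should be applied automatically on load (requires a GPU
# and the bitsandbytes package).
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Illustrative prompt; the card advertises an Arabic medical assistant.
messages = [{"role": "user", "content": "What are common symptoms of anemia?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```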
JacksonBrune/9195d60e-1aec-499d-8cfb-f375167db937
JacksonBrune
2025-02-05T19:26:38Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-1.7B", "base_model:adapter:unsloth/SmolLM-1.7B", "license:apache-2.0", "region:us" ]
null
2025-02-05T19:11:59Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-1.7B tags: - axolotl - generated_from_trainer model-index: - name: 9195d60e-1aec-499d-8cfb-f375167db937 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 9195d60e-1aec-499d-8cfb-f375167db937 This model is a fine-tuned version of [unsloth/SmolLM-1.7B](https://huggingface.co/unsloth/SmolLM-1.7B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1139 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso/28d45f00-ea5d-4277-8934-10347c713a52
lesso
2025-02-05T19:25:47Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-1B", "base_model:adapter:unsloth/Llama-3.2-1B", "license:llama3.2", "region:us" ]
null
2025-02-05T19:19:43Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-1B tags: - axolotl - generated_from_trainer model-index: - name: 28d45f00-ea5d-4277-8934-10347c713a52 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.2-1B bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 176447a4a3aac1cd_train_data.json ds_type: json format: custom path: /workspace/input_data/176447a4a3aac1cd_train_data.json type: field_instruction: question field_output: answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/28d45f00-ea5d-4277-8934-10347c713a52 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001013 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/176447a4a3aac1cd_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 8ab127c1-ec9a-4dd1-bf3a-abe43ab19c10 wandb_project: new-13 wandb_run: your_name wandb_runid: 8ab127c1-ec9a-4dd1-bf3a-abe43ab19c10 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 28d45f00-ea5d-4277-8934-10347c713a52 This model is a fine-tuned version of [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1996 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001013 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.2586 | 0.0004 | 1 | 1.4304 | | 1.1 | 0.0219 | 50 | 1.2503 | | 0.9525 | 0.0439 | 100 | 1.2276 | | 0.8185 | 0.0658 | 150 | 1.2085 | | 0.8709 | 0.0878 | 200 | 1.1996 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
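The LoRA cards in this dump list training configs and losses but no inference code. A minimal sketch of loading such an adapter for generation follows, assuming the adapter repo above pairs with its stated base model through the standard `peft`/`transformers` APIs; the prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Llama-3.2-1B"
adapter_id = "lesso/28d45f00-ea5d-4277-8934-10347c713a52"

# Load the base model the adapter was trained on, then attach the LoRA weights.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Question: What does LoRA fine-tuning change?\nAnswer:", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```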
daniel40/9c925387-28b1-44f1-897d-35f49546fbcf
daniel40
2025-02-05T19:25:19Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-1B", "base_model:adapter:unsloth/Llama-3.2-1B", "license:llama3.2", "region:us" ]
null
2025-02-05T19:22:28Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-1B tags: - axolotl - generated_from_trainer model-index: - name: 9c925387-28b1-44f1-897d-35f49546fbcf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 9c925387-28b1-44f1-897d-35f49546fbcf This model is a fine-tuned version of [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1864 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
adammandic87/b3727b97-3d7c-43d2-8467-11a04dbd8da4
adammandic87
2025-02-05T19:23:11Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-1B", "base_model:adapter:unsloth/Llama-3.2-1B", "license:llama3.2", "region:us" ]
null
2025-02-05T19:20:03Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-1B tags: - axolotl - generated_from_trainer model-index: - name: b3727b97-3d7c-43d2-8467-11a04dbd8da4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # b3727b97-3d7c-43d2-8467-11a04dbd8da4 This model is a fine-tuned version of [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
vidore/colsmolvlm-alpha
vidore
2025-02-05T19:23:01Z
29,306
46
peft
[ "peft", "safetensors", "vidore-experimental", "vidore", "visual-document-retrieval", "arxiv:2004.12832", "arxiv:2407.01449", "arxiv:2106.09685", "base_model:vidore/ColSmolVLM-base", "base_model:adapter:vidore/ColSmolVLM-base", "region:us" ]
null
2024-11-27T08:36:15Z
--- base_model: vidore/ColSmolVLM-base library_name: peft tags: - vidore-experimental - vidore pipeline_tag: visual-document-retrieval --- # ColSmolVLM-alpha: Visual Retriever based on SmolVLM-Instruct with ColBERT strategy ### This is a version trained with batch_size 128 for 3 epochs ColSmolVLM is a model based on a novel model architecture and training strategy built on Vision Language Models (VLMs) to efficiently index documents from their visual features. It is a SmolVLM extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images. It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali). This version is the untrained base version to guarantee deterministic projection layer initialization. <p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p> ## Version specificity This version is trained with `colpali-engine==0.3.5` (main branch from the repo). Data is the same as the ColPali data described in the paper. ## Model Training ### Dataset Our training dataset of 127,460 query-page pairs comprises train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents and augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%). Our training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify that no multi-page PDF document is used both in [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and in the train set to prevent evaluation contamination. A validation set is created with 2% of the samples to tune hyperparameters. *Note: Multilingual data is present in the pretraining corpus of the language model and most probably in the multimodal training.* ### Parameters Unless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685)) with `alpha=32` and `r=32` on the transformer layers from the language model, as well as the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer. We train on a 4 GPU setup with data parallelism, a learning rate of 5e-4 with linear decay and 2.5% warmup steps, and a batch size of 32. ## Usage Make sure `colpali-engine` is installed from source or with a version newer than 0.3.5 (currently the main branch from the repo). `transformers` version must be > 4.46.2. 
```bash pip install git+https://github.com/illuin-tech/colpali ``` ```python import torch from PIL import Image from colpali_engine.models import ColIdefics3, ColIdefics3Processor model = ColIdefics3.from_pretrained( "vidore/colsmolvlm-alpha", torch_dtype=torch.bfloat16, device_map="cuda:0", attn_implementation="flash_attention_2" # or eager ).eval() processor = ColIdefics3Processor.from_pretrained("vidore/colsmolvlm-alpha") # Your inputs images = [ Image.new("RGB", (32, 32), color="white"), Image.new("RGB", (16, 16), color="black"), ] queries = [ "Is attention really all you need?", "What is the amount of bananas farmed in Salvador?", ] # Process the inputs batch_images = processor.process_images(images).to(model.device) batch_queries = processor.process_queries(queries).to(model.device) # Forward pass with torch.no_grad(): image_embeddings = model(**batch_images) query_embeddings = model(**batch_queries) scores = processor.score_multi_vector(query_embeddings, image_embeddings) ``` ## Limitations - **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages. - **Support**: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support. ## License ColQwen2's vision language backbone model (Qwen2-VL) is under `apache2.0` license. The adapters attached to the model are under MIT license. ## Contact - Manuel Faysse: [email protected] - Hugues Sibille: [email protected] - Tony Wu: [email protected] ## Citation If you use any datasets or models from this organization in your research, please cite the original dataset as follows: ```bibtex @misc{faysse2024colpaliefficientdocumentretrieval, title={ColPali: Efficient Document Retrieval with Vision Language Models}, author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo}, year={2024}, eprint={2407.01449}, archivePrefix={arXiv}, primaryClass={cs.IR}, url={https://arxiv.org/abs/2407.01449}, } ```
baby-dev/fd5c27a0-ad1a-4149-8558-64875a5e313e
baby-dev
2025-02-05T19:22:45Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-1.7B", "base_model:adapter:unsloth/SmolLM-1.7B", "license:apache-2.0", "region:us" ]
null
2025-02-05T19:11:50Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-1.7B tags: - axolotl - generated_from_trainer model-index: - name: fd5c27a0-ad1a-4149-8558-64875a5e313e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # fd5c27a0-ad1a-4149-8558-64875a5e313e This model is a fine-tuned version of [unsloth/SmolLM-1.7B](https://huggingface.co/unsloth/SmolLM-1.7B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1650 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
milpu02/mixmilpu06
milpu02
2025-02-05T19:21:39Z
9
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:cagliostrolab/animagine-xl-4.0", "base_model:adapter:cagliostrolab/animagine-xl-4.0", "license:unknown", "region:us" ]
text-to-image
2025-02-05T19:21:27Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/2531971d-d077-41ef-946d-d842630e3aa9.jpg base_model: cagliostrolab/animagine-xl-4.0 instance_prompt: milpumaax8, Aarokira license: unknown --- # Illustrious-XL <Gallery /> ## Model description ![2531971d-d077-41ef-946d-d842630e3aa9.jpg](https://cdn-uploads.huggingface.co/production/uploads/6755066c405ec5d08a4f2d27/DbsLyS50ifSZENSxyjHPD.jpeg) ## Trigger words You should use `milpumaax8` to trigger the image generation. You should use `Aarokira` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/milpu02/mixmilpu06/tree/main) them in the Files & versions tab.
Otakadelic/MT2-Gen6-gemma-2-9B-Q8_0-GGUF
Otakadelic
2025-02-05T19:19:25Z
33
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:zelk12/MT2-Gen6-gemma-2-9B", "base_model:quantized:zelk12/MT2-Gen6-gemma-2-9B", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-02-05T19:18:40Z
--- base_model: zelk12/MT2-Gen6-gemma-2-9B library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo license: gemma pipeline_tag: text-generation --- # Otakadelic/MT2-Gen6-gemma-2-9B-Q8_0-GGUF This model was converted to GGUF format from [`zelk12/MT2-Gen6-gemma-2-9B`](https://huggingface.co/zelk12/MT2-Gen6-gemma-2-9B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/zelk12/MT2-Gen6-gemma-2-9B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Otakadelic/MT2-Gen6-gemma-2-9B-Q8_0-GGUF --hf-file mt2-gen6-gemma-2-9b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Otakadelic/MT2-Gen6-gemma-2-9B-Q8_0-GGUF --hf-file mt2-gen6-gemma-2-9b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Otakadelic/MT2-Gen6-gemma-2-9B-Q8_0-GGUF --hf-file mt2-gen6-gemma-2-9b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Otakadelic/MT2-Gen6-gemma-2-9B-Q8_0-GGUF --hf-file mt2-gen6-gemma-2-9b-q8_0.gguf -c 2048 ```
qing-yao/balanced_seed-42_1e-3
qing-yao
2025-02-05T19:18:53Z
5
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-05T06:48:06Z
--- library_name: transformers tags: - generated_from_trainer metrics: - accuracy model-index: - name: balanced_seed-42_1e-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # balanced_seed-42_1e-3 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.0198 - Accuracy: 0.4204 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 32000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:-----:|:---------------:|:--------:| | 6.1281 | 0.9995 | 1776 | 4.2574 | 0.3057 | | 4.0355 | 1.9996 | 3553 | 3.7318 | 0.3474 | | 3.5725 | 2.9998 | 5330 | 3.4725 | 0.3715 | | 3.3409 | 3.9999 | 7107 | 3.3353 | 0.3845 | | 3.2496 | 4.9995 | 8883 | 3.2559 | 0.3917 | | 3.1452 | 5.9996 | 10660 | 3.2099 | 0.3962 | | 3.0833 | 6.9998 | 12437 | 3.1762 | 0.3993 | | 3.0415 | 7.9999 | 14214 | 3.1537 | 0.4018 | | 3.0011 | 8.9995 | 15990 | 3.1412 | 0.4032 | | 2.9645 | 9.9996 | 17767 | 3.1304 | 0.4050 | | 2.9513 | 10.9998 | 19544 | 3.1203 | 0.4056 | | 2.9433 | 11.9999 | 21321 | 3.1141 | 0.4067 | | 2.9381 | 12.9995 | 23097 | 3.1090 | 0.4070 | | 2.8963 | 13.9996 | 24874 | 3.1062 | 0.4075 | | 2.8927 | 14.9998 | 26651 | 3.1013 | 0.4078 | | 2.8961 | 15.9999 | 28428 | 3.1004 | 0.4083 | | 2.9024 | 16.9995 | 30204 | 3.0929 | 0.4090 | | 2.8719 | 17.9996 | 31981 | 3.0953 | 0.4087 | | 2.8398 | 18.9998 | 33758 | 3.0459 | 0.4152 | | 2.6969 | 19.9915 | 35520 | 3.0198 | 0.4204 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.20.0
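The card above reports only training curves for this from-scratch model. A minimal generation sketch follows, assuming the checkpoint and its tokenizer load through the standard `transformers` text-generation pipeline; the prompt is illustrative.

```python
from transformers import pipeline

# Assumes the repo ships both the OPT-architecture weights and a matching tokenizer.
generator = pipeline("text-generation", model="qing-yao/balanced_seed-42_1e-3")
print(generator("The little girl opened the door and", max_new_tokens=20)[0]["generated_text"])
```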
lesso/f214efed-28ad-4036-881e-fe091cfaae34
lesso
2025-02-05T19:18:29Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T19:05:44Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: f214efed-28ad-4036-881e-fe091cfaae34 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-0.5B-Instruct bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - dc27bb03be0c3289_train_data.json ds_type: json format: custom path: /workspace/input_data/dc27bb03be0c3289_train_data.json type: field_input: publication_year field_instruction: document_id field_output: text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/f214efed-28ad-4036-881e-fe091cfaae34 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001003 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/dc27bb03be0c3289_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f132fdaf-6691-46fe-89c9-cf4fb177c93f wandb_project: new-03 wandb_run: your_name wandb_runid: f132fdaf-6691-46fe-89c9-cf4fb177c93f warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f214efed-28ad-4036-881e-fe091cfaae34 This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2338 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.8126 | 0.0002 | 1 | 1.8536 | | 1.2181 | 0.0078 | 50 | 1.3805 | | 0.8825 | 0.0155 | 100 | 1.2991 | | 0.7996 | 0.0233 | 150 | 1.2537 | | 0.886 | 0.0310 | 200 | 1.2338 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
clarxus/7aedbf19-a23b-47b8-962a-e68a0b99f974
clarxus
2025-02-05T19:18:14Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:adapter:NousResearch/Meta-Llama-3-8B", "license:other", "region:us" ]
null
2025-02-05T17:29:37Z
--- library_name: peft license: other base_model: NousResearch/Meta-Llama-3-8B tags: - axolotl - generated_from_trainer model-index: - name: 7aedbf19-a23b-47b8-962a-e68a0b99f974 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Meta-Llama-3-8B bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - da92ec7138cf7572_train_data.json ds_type: json format: custom path: /workspace/input_data/da92ec7138cf7572_train_data.json type: field_instruction: ctx field_output: chosen format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: clarxus/7aedbf19-a23b-47b8-962a-e68a0b99f974 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 400 micro_batch_size: 8 mlflow_experiment_name: /tmp/da92ec7138cf7572_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1.0e-05 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: <|end_of_text|> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: 67e38e0f-12cd-4211-887d-8de51deebf53 wandb_project: Gradients-On-Seven wandb_run: your_name wandb_runid: 67e38e0f-12cd-4211-887d-8de51deebf53 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 7aedbf19-a23b-47b8-962a-e68a0b99f974 This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3225 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.668 | 0.0004 | 1 | 1.6352 | | 1.4327 | 0.0178 | 50 | 1.4679 | | 1.2533 | 0.0356 | 100 | 1.4296 | | 1.307 | 0.0534 | 150 | 1.4044 | | 1.3704 | 0.0712 | 200 | 1.3699 | | 1.2814 | 0.0890 | 250 | 1.3450 | | 1.2309 | 0.1068 | 300 | 1.3351 | | 1.4034 | 0.1246 | 350 | 1.3253 | | 1.0771 | 0.1423 | 400 | 1.3225 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
batrider32/19901feb-73ba-4890-9ce9-8f18f0909469
batrider32
2025-02-05T19:17:59Z
23
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Capybara-7B-V1", "base_model:adapter:NousResearch/Nous-Capybara-7B-V1", "license:mit", "region:us" ]
null
2025-02-05T17:39:04Z
--- library_name: peft license: mit base_model: NousResearch/Nous-Capybara-7B-V1 tags: - axolotl - generated_from_trainer model-index: - name: 19901feb-73ba-4890-9ce9-8f18f0909469 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Nous-Capybara-7B-V1 bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 01df04c66daac7c4_train_data.json ds_type: json format: custom path: /workspace/input_data/01df04c66daac7c4_train_data.json type: field_instruction: instruction field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: batrider32/19901feb-73ba-4890-9ce9-8f18f0909469 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 400 micro_batch_size: 8 mlflow_experiment_name: /tmp/01df04c66daac7c4_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 261dc9c8-0266-4ccb-9c77-747c8c7940df wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 261dc9c8-0266-4ccb-9c77-747c8c7940df warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 19901feb-73ba-4890-9ce9-8f18f0909469 This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9502 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.0957 | 0.0012 | 1 | 1.3756 | | 2.1256 | 0.0587 | 50 | 1.0982 | | 1.2466 | 0.1173 | 100 | 1.0683 | | 1.0757 | 0.1760 | 150 | 1.0130 | | 1.5361 | 0.2346 | 200 | 0.9759 | | 1.2814 | 0.2933 | 250 | 0.9589 | | 1.1099 | 0.3519 | 300 | 0.9540 | | 1.3106 | 0.4106 | 350 | 0.9523 | | 1.5379 | 0.4692 | 400 | 0.9502 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso/bcc606c9-111d-4661-9907-9ecf40d73010
lesso
2025-02-05T19:17:37Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T19:04:47Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: bcc606c9-111d-4661-9907-9ecf40d73010 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-0.5B-Instruct bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - dc27bb03be0c3289_train_data.json ds_type: json format: custom path: /workspace/input_data/dc27bb03be0c3289_train_data.json type: field_input: publication_year field_instruction: document_id field_output: text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/bcc606c9-111d-4661-9907-9ecf40d73010 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001009 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/dc27bb03be0c3289_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f132fdaf-6691-46fe-89c9-cf4fb177c93f wandb_project: new-09 wandb_run: your_name wandb_runid: f132fdaf-6691-46fe-89c9-cf4fb177c93f warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # bcc606c9-111d-4661-9907-9ecf40d73010 This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2338 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001009 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.8126 | 0.0002 | 1 | 1.8536 | | 1.2032 | 0.0078 | 50 | 1.3815 | | 0.881 | 0.0155 | 100 | 1.2984 | | 0.7993 | 0.0233 | 150 | 1.2536 | | 0.8849 | 0.0310 | 200 | 1.2338 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
CapOfDeathHD/Brian-lora
CapOfDeathHD
2025-02-05T19:17:34Z
19
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-02-05T18:57:17Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Brian --- # Brian Lora <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Brian` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('CapOfDeathHD/Brian-lora', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
philip-hightech/5a70a3fc-9483-4ffe-908b-e1df6273d8ca
philip-hightech
2025-02-05T19:17:01Z
9
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T19:07:32Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 5a70a3fc-9483-4ffe-908b-e1df6273d8ca results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 5a70a3fc-9483-4ffe-908b-e1df6273d8ca This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9897 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
daniel40/2f612a8f-339a-47d1-986b-b08297f96701
daniel40
2025-02-05T19:09:50Z
9
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T19:05:38Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 2f612a8f-339a-47d1-986b-b08297f96701 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 2f612a8f-339a-47d1-986b-b08297f96701 This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1347 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
ztjona/RoBERTa-finetuned-NewsQA
ztjona
2025-02-05T19:09:29Z
8
0
null
[ "safetensors", "roberta", "question-answering", "base_model:deepset/roberta-base-squad2", "base_model:finetune:deepset/roberta-base-squad2", "region:us" ]
question-answering
2025-02-05T18:14:16Z
--- base_model: - deepset/roberta-base-squad2 pipeline_tag: question-answering ---
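The card above carries only YAML front matter; the following is a minimal, hypothetical usage sketch, assuming the checkpoint keeps the extractive-QA head of its `deepset/roberta-base-squad2` base model and therefore works with the standard transformers question-answering pipeline:

```python
# Hedged usage sketch for an extractive question-answering checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="ztjona/RoBERTa-finetuned-NewsQA")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The RoBERTa checkpoint was fine-tuned on the NewsQA dataset for extractive question answering.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'NewsQA'}
```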
aleegis12/6b4ef7b4-a596-444b-a988-53631621b9fe
aleegis12
2025-02-05T19:07:55Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-360M-Instruct", "base_model:adapter:unsloth/SmolLM-360M-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:47:52Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-360M-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 6b4ef7b4-a596-444b-a988-53631621b9fe results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM-360M-Instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 989befd2b2c62411_train_data.json ds_type: json format: custom path: /workspace/input_data/989befd2b2c62411_train_data.json type: field_instruction: instruction field_output: completion format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: aleegis12/6b4ef7b4-a596-444b-a988-53631621b9fe hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 400 micro_batch_size: 8 mlflow_experiment_name: /tmp/989befd2b2c62411_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 01d5cee7-b6ae-4cb3-899f-b160b6e7be7e wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 01d5cee7-b6ae-4cb3-899f-b160b6e7be7e warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 6b4ef7b4-a596-444b-a988-53631621b9fe This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3722 | 0.0007 | 1 | 1.6021 | | 1.4454 | 0.0338 | 50 | 1.4633 | | 1.5911 | 0.0676 | 100 | 1.4290 | | 1.3813 | 0.1015 | 150 | 1.4142 | | 1.3488 | 0.1353 | 200 | 1.4043 | | 1.4796 | 0.1691 | 250 | 1.3977 | | 1.3492 | 0.2029 | 300 | 1.3946 | | 1.1977 | 0.2367 | 350 | 1.3927 | | 1.3685 | 0.2705 | 400 | 1.3922 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
jaspionjader/kosmos-evaa-immersive-mix-v45.1-8B-Q5_K_M-GGUF
jaspionjader
2025-02-05T19:07:18Z
23
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:jaspionjader/Kosmos-EVAA-immersive-mix-v45.1-8B", "base_model:quantized:jaspionjader/Kosmos-EVAA-immersive-mix-v45.1-8B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-02-05T16:33:48Z
--- base_model: jaspionjader/bh-59 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # jaspionjader/bh-59-Q5_K_M-GGUF This model was converted to GGUF format from [`jaspionjader/bh-59`](https://huggingface.co/jaspionjader/bh-59) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jaspionjader/bh-59) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo jaspionjader/bh-59-Q5_K_M-GGUF --hf-file bh-59-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo jaspionjader/bh-59-Q5_K_M-GGUF --hf-file bh-59-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo jaspionjader/bh-59-Q5_K_M-GGUF --hf-file bh-59-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo jaspionjader/bh-59-Q5_K_M-GGUF --hf-file bh-59-q5_k_m.gguf -c 2048 ```
mudler/LocalAI-Llama3-8b-Function-Call-v0.2
mudler
2025-02-05T19:06:10Z
13
9
transformers
[ "transformers", "safetensors", "llama", "text-generation", "LocalAI", "conversational", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T22:58:36Z
--- license: llama3 tags: - LocalAI --- # LocalAI-Llama3-8b-Function-Call-v0.2 **NEW!!!** **Check the latest model series: https://huggingface.co/mudler/LocalAI-functioncall-phi-4-v0.3** [![local-ai-banner.png](https://cdn-uploads.huggingface.co/production/uploads/647374aa7ff32a81ac6d35d4/bXvNcxQqQ-wNAnISmx3PS.png)](https://localai.io) ![LocalAIFCALL](https://cdn-uploads.huggingface.co/production/uploads/647374aa7ff32a81ac6d35d4/us5JKi9z046p8K-cn_M0w.webp) OpenVINO: https://huggingface.co/fakezeta/LocalAI-Llama3-8b-Function-Call-v0.2-ov-int8 GGUF: https://huggingface.co/mudler/LocalAI-Llama3-8b-Function-Call-v0.2-GGUF This model is a fine-tune on a custom dataset + glaive, built specifically to leverage all the [LocalAI](https://localai.io) constrained-grammar features. Specifically, once the model enters tools mode it will always reply with JSON. To run on LocalAI: ``` local-ai run huggingface://mudler/LocalAI-Llama3-8b-Function-Call-v0.2-GGUF/localai.yaml ``` If you like my work, consider donating so I can get resources for my fine-tunes!
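To make the tools-mode behaviour described above concrete, here is a hedged sketch of a function call sent to a running LocalAI instance through its OpenAI-compatible API (assumptions: LocalAI is listening on the default `localhost:8080`, and the model name below matches the one configured by `localai.yaml`):

```python
# Hedged sketch: issue a tool/function call against a local LocalAI server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool used only for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="LocalAI-Llama3-8b-Function-Call-v0.2",
    messages=[{"role": "user", "content": "What is the weather in Berlin?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)  # in tools mode the model replies with JSON tool calls
```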
prxy5604/0b5d07ad-c29b-4ff5-82fc-312b6d2e2bb3
prxy5604
2025-02-05T19:03:14Z
6
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2025-02-05T18:42:15Z
--- library_name: peft license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - axolotl - generated_from_trainer model-index: - name: 0b5d07ad-c29b-4ff5-82fc-312b6d2e2bb3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: microsoft/Phi-3-mini-4k-instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ade6f8887ef47607_train_data.json ds_type: json format: custom path: /workspace/input_data/ade6f8887ef47607_train_data.json type: field_input: source field_instruction: instruction field_output: response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: prxy5604/0b5d07ad-c29b-4ff5-82fc-312b6d2e2bb3 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 400 micro_batch_size: 8 mlflow_experiment_name: /tmp/ade6f8887ef47607_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 51a32343-8514-49b5-a560-105ff57d734c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 51a32343-8514-49b5-a560-105ff57d734c warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 0b5d07ad-c29b-4ff5-82fc-312b6d2e2bb3 This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.1409 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 233 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.9898 | 0.0043 | 1 | 2.3164 | | 0.9905 | 0.2148 | 50 | 0.3576 | | 0.5947 | 0.4296 | 100 | 0.2357 | | 1.1925 | 0.6445 | 150 | 0.1890 | | 0.653 | 0.8593 | 200 | 0.1409 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso/cc970f84-ce72-4983-912c-027ca50fca13
lesso
2025-02-05T19:00:24Z
9
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-0.5B-Instruct", "base_model:adapter:Qwen/Qwen2-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:40:58Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: cc970f84-ce72-4983-912c-027ca50fca13 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2-0.5B-Instruct bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - c5d67a74690b3d5f_train_data.json ds_type: json format: custom path: /workspace/input_data/c5d67a74690b3d5f_train_data.json type: field_instruction: article field_output: summary format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/cc970f84-ce72-4983-912c-027ca50fca13 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001013 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/c5d67a74690b3d5f_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: dd09a61a-80c4-45ae-bf4d-ae6d4d538ef6 wandb_project: new-13 wandb_run: your_name wandb_runid: dd09a61a-80c4-45ae-bf4d-ae6d4d538ef6 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # cc970f84-ce72-4983-912c-027ca50fca13 This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2226 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001013 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.6529 | 0.0001 | 1 | 2.0680 | | 2.0315 | 0.0048 | 50 | 1.4365 | | 1.2339 | 0.0096 | 100 | 1.3180 | | 1.6336 | 0.0143 | 150 | 1.2486 | | 1.1329 | 0.0191 | 200 | 1.2226 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso/3a0bef58-2913-4581-9dc9-0434999a8e5c
lesso
2025-02-05T18:59:11Z
9
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2025-02-05T18:49:55Z
--- library_name: peft license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - axolotl - generated_from_trainer model-index: - name: 3a0bef58-2913-4581-9dc9-0434999a8e5c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: microsoft/Phi-3-mini-4k-instruct bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - ade6f8887ef47607_train_data.json ds_type: json format: custom path: /workspace/input_data/ade6f8887ef47607_train_data.json type: field_input: source field_instruction: instruction field_output: response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/3a0bef58-2913-4581-9dc9-0434999a8e5c hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001012 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/ade6f8887ef47607_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 51a32343-8514-49b5-a560-105ff57d734c wandb_project: new-12 wandb_run: your_name wandb_runid: 51a32343-8514-49b5-a560-105ff57d734c warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 3a0bef58-2913-4581-9dc9-0434999a8e5c This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.1808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001012 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.1093 | 0.0011 | 1 | 2.3134 | | 0.4128 | 0.0537 | 50 | 0.4416 | | 0.5868 | 0.1074 | 100 | 0.2723 | | 0.5566 | 0.1611 | 150 | 0.2216 | | 0.3863 | 0.2148 | 200 | 0.1808 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
9bz/Qwen2.5-Coder-V2-1.5B-Instruct-bnb-4bit
9bz
2025-02-05T18:56:08Z
8
0
peft
[ "peft", "safetensors", "qwen2", "arxiv:1910.09700", "base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit", "base_model:adapter:unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit", "4-bit", "bitsandbytes", "region:us" ]
null
2025-02-03T21:55:13Z
--- base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
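Since the "How to Get Started with the Model" section above is still a placeholder, here is a hedged sketch (assumptions: the 4-bit base model loads with its embedded bitsandbytes quantization config, and this repo holds a standard PEFT adapter trained on top of it):

```python
# Hedged sketch: load the pre-quantized 4-bit base model and attach this PEFT adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit"
adapter_id = "9bz/Qwen2.5-Coder-V2-1.5B-Instruct-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # loads the 4-bit weights
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```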
lesso/effa5c65-52bb-4f99-809b-10a03d50716f
lesso
2025-02-05T18:52:59Z
8
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-7b-it", "base_model:adapter:unsloth/gemma-7b-it", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:37:15Z
--- library_name: peft license: apache-2.0 base_model: unsloth/gemma-7b-it tags: - axolotl - generated_from_trainer model-index: - name: effa5c65-52bb-4f99-809b-10a03d50716f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-7b-it bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 0d61c4d326aec423_train_data.json ds_type: json format: custom path: /workspace/input_data/0d61c4d326aec423_train_data.json type: field_instruction: sentence1 field_output: sentence2 format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/effa5c65-52bb-4f99-809b-10a03d50716f hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001007 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/0d61c4d326aec423_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: cee608f7-df8b-4215-af6b-efb9570e3439 wandb_project: new-07 wandb_run: your_name wandb_runid: cee608f7-df8b-4215-af6b-efb9570e3439 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # effa5c65-52bb-4f99-809b-10a03d50716f This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 3.9226 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001007 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.3829 | 0.0016 | 1 | 11.1763 | | 4.3199 | 0.0794 | 50 | 5.1568 | | 4.1148 | 0.1589 | 100 | 4.4907 | | 3.8645 | 0.2383 | 150 | 4.1308 | | 4.192 | 0.3177 | 200 | 3.9226 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Loyola/llama3-hr-instruct-reasoning4096
Loyola
2025-02-05T18:51:16Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-04T13:32:34Z
--- base_model: meta-llama/Llama-3.1-8B-Instruct library_name: transformers model_name: llama3-hr-instruct-reasoning4096 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for llama3-hr-instruct-reasoning4096 This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Loyola/llama3-hr-instruct-reasoning4096", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.14.0 - Transformers: 4.48.2 - Pytorch: 2.0.1+cu117 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
daniel40/705d48ab-d417-4603-a30e-cf6c9459a462
daniel40
2025-02-05T18:50:02Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-0.5B-Instruct", "base_model:adapter:Qwen/Qwen2-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:40:33Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 705d48ab-d417-4603-a30e-cf6c9459a462 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 705d48ab-d417-4603-a30e-cf6c9459a462 This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1969 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
0x1202/1998f405-2777-427f-b89b-adedf115aad1
0x1202
2025-02-05T18:49:50Z
6
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2025-02-05T18:19:49Z
--- library_name: peft license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - axolotl - generated_from_trainer model-index: - name: 1998f405-2777-427f-b89b-adedf115aad1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: microsoft/Phi-3-mini-4k-instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ade6f8887ef47607_train_data.json ds_type: json format: custom path: /workspace/input_data/ade6f8887ef47607_train_data.json type: field_input: source field_instruction: instruction field_output: response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: 0x1202/1998f405-2777-427f-b89b-adedf115aad1 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 400 micro_batch_size: 8 mlflow_experiment_name: /tmp/ade6f8887ef47607_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 51a32343-8514-49b5-a560-105ff57d734c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 51a32343-8514-49b5-a560-105ff57d734c warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 1998f405-2777-427f-b89b-adedf115aad1 This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.1405 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 233 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.9882 | 0.0043 | 1 | 2.3134 | | 1.2304 | 0.2148 | 50 | 0.3496 | | 0.6048 | 0.4296 | 100 | 0.2366 | | 1.055 | 0.6445 | 150 | 0.1867 | | 0.6489 | 0.8593 | 200 | 0.1405 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
abenius/b70c17a1-595a-4b6b-8f94-d175f5e20383
abenius
2025-02-05T18:47:24Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B", "base_model:adapter:unsloth/Qwen2.5-0.5B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-02-05T18:20:52Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B tags: - axolotl - generated_from_trainer model-index: - name: b70c17a1-595a-4b6b-8f94-d175f5e20383 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-0.5B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 4a0f187d0b523501_train_data.json ds_type: json format: custom path: /workspace/input_data/4a0f187d0b523501_train_data.json type: field_input: title field_instruction: question field_output: answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 2 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: true hub_model_id: abenius/b70c17a1-595a-4b6b-8f94-d175f5e20383 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 500 micro_batch_size: 2 mlflow_experiment_name: /tmp/4a0f187d0b523501_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: ee9aace3-5889-4a15-94df-c5f659d02b95 wandb_project: Gradients-On-12 wandb_run: your_name wandb_runid: ee9aace3-5889-4a15-94df-c5f659d02b95 warmup_steps: 5 weight_decay: 0.01 xformers_attention: null ``` </details><br> # b70c17a1-595a-4b6b-8f94-d175f5e20383 This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 3.0211 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.0686 | 0.1207 | 500 | 3.0211 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
aleegis12/51afe6c5-048a-462f-bd7a-c2661909bd51
aleegis12
2025-02-05T18:46:43Z
22
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Capybara-7B-V1", "base_model:adapter:NousResearch/Nous-Capybara-7B-V1", "license:mit", "region:us" ]
null
2025-02-05T17:39:01Z
--- library_name: peft license: mit base_model: NousResearch/Nous-Capybara-7B-V1 tags: - axolotl - generated_from_trainer model-index: - name: 51afe6c5-048a-462f-bd7a-c2661909bd51 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Nous-Capybara-7B-V1 bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 01df04c66daac7c4_train_data.json ds_type: json format: custom path: /workspace/input_data/01df04c66daac7c4_train_data.json type: field_instruction: instruction field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: aleegis12/51afe6c5-048a-462f-bd7a-c2661909bd51 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 400 micro_batch_size: 8 mlflow_experiment_name: /tmp/01df04c66daac7c4_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 261dc9c8-0266-4ccb-9c77-747c8c7940df wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 261dc9c8-0266-4ccb-9c77-747c8c7940df warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 51afe6c5-048a-462f-bd7a-c2661909bd51 This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9496 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.0958 | 0.0012 | 1 | 1.3757 | | 2.1332 | 0.0587 | 50 | 1.0971 | | 1.2401 | 0.1173 | 100 | 1.0666 | | 1.0739 | 0.1760 | 150 | 1.0148 | | 1.5451 | 0.2346 | 200 | 0.9758 | | 1.2816 | 0.2933 | 250 | 0.9586 | | 1.1032 | 0.3519 | 300 | 0.9532 | | 1.3024 | 0.4106 | 350 | 0.9517 | | 1.5417 | 0.4692 | 400 | 0.9496 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
baby-dev/d1abcc53-49fd-4dae-91f6-a295cbbf7e37
baby-dev
2025-02-05T18:45:52Z
8
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2025-02-05T18:41:37Z
--- library_name: peft license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - axolotl - generated_from_trainer model-index: - name: d1abcc53-49fd-4dae-91f6-a295cbbf7e37 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # d1abcc53-49fd-4dae-91f6-a295cbbf7e37 This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1801 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Proutland/EliasLora
Proutland
2025-02-05T18:44:14Z
28
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-02-05T18:13:48Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Elias --- # Eliaslora <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Elias` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Proutland/EliasLora', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
lesso/3d634eef-8e1b-4476-ab79-a1dd26550cf1
lesso
2025-02-05T18:43:31Z
8
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-7b-it", "base_model:adapter:unsloth/gemma-7b-it", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:28:22Z
--- library_name: peft license: apache-2.0 base_model: unsloth/gemma-7b-it tags: - axolotl - generated_from_trainer model-index: - name: 3d634eef-8e1b-4476-ab79-a1dd26550cf1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-7b-it bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 0d61c4d326aec423_train_data.json ds_type: json format: custom path: /workspace/input_data/0d61c4d326aec423_train_data.json type: field_instruction: sentence1 field_output: sentence2 format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/3d634eef-8e1b-4476-ab79-a1dd26550cf1 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001009 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/0d61c4d326aec423_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: cee608f7-df8b-4215-af6b-efb9570e3439 wandb_project: new-09 wandb_run: your_name wandb_runid: cee608f7-df8b-4215-af6b-efb9570e3439 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 3d634eef-8e1b-4476-ab79-a1dd26550cf1 This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 3.9188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001009 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.3829 | 0.0016 | 1 | 11.1763 | | 4.3025 | 0.0794 | 50 | 5.2693 | | 4.2761 | 0.1589 | 100 | 4.5026 | | 3.8028 | 0.2383 | 150 | 4.1340 | | 4.2792 | 0.3177 | 200 | 3.9188 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Xiaojian9992024/Qwen2.5-THREADRIPPER-Small-Q8_0-GGUF
Xiaojian9992024
2025-02-05T18:43:30Z
22
2
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:Xiaojian9992024/Qwen2.5-THREADRIPPER-Small", "base_model:quantized:Xiaojian9992024/Qwen2.5-THREADRIPPER-Small", "endpoints_compatible", "region:us", "conversational" ]
null
2025-02-05T18:42:55Z
--- base_model: Xiaojian9992024/Qwen2.5-THREADRIPPER-Small library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # Xiaojian9992024/Qwen2.5-THREADRIPPER-Small-Q8_0-GGUF This model was converted to GGUF format from [`Xiaojian9992024/Qwen2.5-THREADRIPPER-Small`](https://huggingface.co/Xiaojian9992024/Qwen2.5-THREADRIPPER-Small) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Xiaojian9992024/Qwen2.5-THREADRIPPER-Small) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Xiaojian9992024/Qwen2.5-THREADRIPPER-Small-Q8_0-GGUF --hf-file qwen2.5-threadripper-small-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Xiaojian9992024/Qwen2.5-THREADRIPPER-Small-Q8_0-GGUF --hf-file qwen2.5-threadripper-small-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Xiaojian9992024/Qwen2.5-THREADRIPPER-Small-Q8_0-GGUF --hf-file qwen2.5-threadripper-small-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Xiaojian9992024/Qwen2.5-THREADRIPPER-Small-Q8_0-GGUF --hf-file qwen2.5-threadripper-small-q8_0.gguf -c 2048 ```
lesso/5736c651-61bc-4e1e-9eda-bf179e5360e1
lesso
2025-02-05T18:43:30Z
8
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-7b-it", "base_model:adapter:unsloth/gemma-7b-it", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:28:23Z
--- library_name: peft license: apache-2.0 base_model: unsloth/gemma-7b-it tags: - axolotl - generated_from_trainer model-index: - name: 5736c651-61bc-4e1e-9eda-bf179e5360e1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-7b-it bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 0d61c4d326aec423_train_data.json ds_type: json format: custom path: /workspace/input_data/0d61c4d326aec423_train_data.json type: field_instruction: sentence1 field_output: sentence2 format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/5736c651-61bc-4e1e-9eda-bf179e5360e1 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000101 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/0d61c4d326aec423_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: cee608f7-df8b-4215-af6b-efb9570e3439 wandb_project: new-10 wandb_run: your_name wandb_runid: cee608f7-df8b-4215-af6b-efb9570e3439 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 5736c651-61bc-4e1e-9eda-bf179e5360e1 This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 3.9120 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000101 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.3829 | 0.0016 | 1 | 11.1763 | | 4.2759 | 0.0794 | 50 | 5.2627 | | 4.114 | 0.1589 | 100 | 4.5129 | | 3.8961 | 0.2383 | 150 | 4.1255 | | 4.1414 | 0.3177 | 200 | 3.9120 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
great0001/d411a6b0-d565-4e8d-9496-b03a53d54714
great0001
2025-02-05T18:39:31Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-360M-Instruct", "base_model:adapter:unsloth/SmolLM-360M-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:28:42Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-360M-Instruct tags: - axolotl - generated_from_trainer model-index: - name: d411a6b0-d565-4e8d-9496-b03a53d54714 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # d411a6b0-d565-4e8d-9496-b03a53d54714 This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3678 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mergekit-community/Llama-3-ThinkRoleplay-DeepSeek-R1-Distill-8B-abliterated
mergekit-community
2025-02-05T18:39:25Z
10
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:Azazelle/Llama-3-8B-contaminated-roleplay", "base_model:merge:Azazelle/Llama-3-8B-contaminated-roleplay", "base_model:huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated", "base_model:merge:huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-05T18:36:14Z
--- base_model: - huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated - Azazelle/Llama-3-8B-contaminated-roleplay library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [Azazelle/Llama-3-8B-contaminated-roleplay](https://huggingface.co/Azazelle/Llama-3-8B-contaminated-roleplay) as a base. ### Models Merged The following models were included in the merge: * [huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Azazelle/Llama-3-8B-contaminated-roleplay # No parameters necessary for base model - model: huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated parameters: density: 0.53 weight: 0.6 merge_method: dare_ties base_model: Azazelle/Llama-3-8B-contaminated-roleplay parameters: int8_mask: true dtype: bfloat16 random_seed: 42 ```
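Not from the original card — a minimal loading sketch, assuming the merged bfloat16 weights are published under the repo id `mergekit-community/Llama-3-ThinkRoleplay-DeepSeek-R1-Distill-8B-abliterated` and that `torch`, `transformers`, and `accelerate` are installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mergekit-community/Llama-3-ThinkRoleplay-DeepSeek-R1-Distill-8B-abliterated"

# The merge was produced in bfloat16 (dtype: bfloat16 above), so load it in the same dtype.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Write a short in-character greeting from a weary innkeeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```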
gocrawford/jenncraw
gocrawford
2025-02-05T18:39:24Z
51
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-02-05T18:15:15Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: JennCraw --- # Jenncraw <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `JennCraw` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('gocrawford/jenncraw', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
prxy5604/79c5b813-a321-4660-b37b-ec80ee61966a
prxy5604
2025-02-05T18:38:34Z
8
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:dltjdgh0928/test_instruction", "base_model:adapter:dltjdgh0928/test_instruction", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:22:28Z
--- library_name: peft license: apache-2.0 base_model: dltjdgh0928/test_instruction tags: - axolotl - generated_from_trainer model-index: - name: 79c5b813-a321-4660-b37b-ec80ee61966a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: dltjdgh0928/test_instruction bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 42fc11e553659da0_train_data.json ds_type: json format: custom path: /workspace/input_data/42fc11e553659da0_train_data.json type: field_instruction: fo field_output: da format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: prxy5604/79c5b813-a321-4660-b37b-ec80ee61966a hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 400 micro_batch_size: 8 mlflow_experiment_name: /tmp/42fc11e553659da0_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 3d1485e9-d9bb-448e-8152-9952bf30509f wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 3d1485e9-d9bb-448e-8152-9952bf30509f warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 79c5b813-a321-4660-b37b-ec80ee61966a This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7423 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 113 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 6.6804 | 0.0089 | 1 | 2.7847 | | 3.7077 | 0.4435 | 50 | 0.8856 | | 3.0738 | 0.8869 | 100 | 0.7423 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
tscstudios/kvj8gjldpiyswqpppnwofmig8512_c88a9979-8181-48d7-b9bf-c7c5623f3fcc
tscstudios
2025-02-05T18:38:07Z
9
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-02-05T18:38:05Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Kvj8Gjldpiyswqpppnwofmig8512_C88A9979 8181 48D7 B9Bf C7C5623F3Fcc <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('tscstudios/kvj8gjldpiyswqpppnwofmig8512_c88a9979-8181-48d7-b9bf-c7c5623f3fcc', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
tamewild/test_v2_merged
tamewild
2025-02-05T18:37:41Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/phi-4", "base_model:finetune:unsloth/phi-4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-02-05T18:32:07Z
--- base_model: unsloth/Phi-4 tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** tamewild - **License:** apache-2.0 - **Finetuned from model:** unsloth/Phi-4 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
baby-dev/00337868-1173-4472-b18c-bb9c15c218de
baby-dev
2025-02-05T18:37:36Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-360M-Instruct", "base_model:adapter:unsloth/SmolLM-360M-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:31:48Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-360M-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 00337868-1173-4472-b18c-bb9c15c218de results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 00337868-1173-4472-b18c-bb9c15c218de This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3903 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
ErrorAI/95c8be77-ebae-4ba8-9422-04257843ea0a
ErrorAI
2025-02-05T18:35:52Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Coder-7B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T17:32:43Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Coder-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 95c8be77-ebae-4ba8-9422-04257843ea0a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Coder-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 7d4a0b73d911aed1_train_data.json ds_type: json format: custom path: /workspace/input_data/7d4a0b73d911aed1_train_data.json type: field_instruction: instruction field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: ErrorAI/95c8be77-ebae-4ba8-9422-04257843ea0a hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 1563 micro_batch_size: 4 mlflow_experiment_name: /tmp/7d4a0b73d911aed1_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 4371b52f-2e36-4bbe-b8f7-866206dd99f7 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 4371b52f-2e36-4bbe-b8f7-866206dd99f7 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 95c8be77-ebae-4ba8-9422-04257843ea0a This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.7676 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 773 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1013 | 1.0 | 773 | 1.7676 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso/98adcb71-f1dd-4c89-a6e9-518f70440276
lesso
2025-02-05T18:35:21Z
10
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/codegemma-7b-it", "base_model:adapter:unsloth/codegemma-7b-it", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:00:06Z
--- library_name: peft license: apache-2.0 base_model: unsloth/codegemma-7b-it tags: - axolotl - generated_from_trainer model-index: - name: 98adcb71-f1dd-4c89-a6e9-518f70440276 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/codegemma-7b-it bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - e2c8d7e566dbbff7_train_data.json ds_type: json format: custom path: /workspace/input_data/e2c8d7e566dbbff7_train_data.json type: field_instruction: Content field_output: Summary format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/98adcb71-f1dd-4c89-a6e9-518f70440276 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001012 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/e2c8d7e566dbbff7_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: cebf462e-97f3-4d04-9673-f518514a3a43 wandb_project: new-12 wandb_run: your_name wandb_runid: cebf462e-97f3-4d04-9673-f518514a3a43 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 98adcb71-f1dd-4c89-a6e9-518f70440276 This model is a fine-tuned version of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.4707 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001012 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.7581 | 0.0003 | 1 | 1.2526 | | 0.7727 | 0.0157 | 50 | 0.5893 | | 0.5583 | 0.0315 | 100 | 0.5384 | | 0.4782 | 0.0472 | 150 | 0.4917 | | 0.5438 | 0.0629 | 200 | 0.4707 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
robiulawaldev/dd838ff8-99b2-4d3b-bd2e-adff43587744
robiulawaldev
2025-02-05T18:34:15Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-360M-Instruct", "base_model:adapter:unsloth/SmolLM-360M-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:28:44Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-360M-Instruct tags: - axolotl - generated_from_trainer model-index: - name: dd838ff8-99b2-4d3b-bd2e-adff43587744 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # dd838ff8-99b2-4d3b-bd2e-adff43587744 This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3901 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mrferr3t/37ff49b1-7a8d-497d-b987-3a39395c7fae
mrferr3t
2025-02-05T18:32:06Z
6
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2025-02-05T18:22:29Z
--- library_name: peft license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - axolotl - generated_from_trainer model-index: - name: 37ff49b1-7a8d-497d-b987-3a39395c7fae results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora auto_find_batch_size: true base_model: microsoft/Phi-3-mini-4k-instruct bf16: auto chat_template: llama3 dataloader_num_workers: 12 dataset_prepared_path: null datasets: - data_files: - ade6f8887ef47607_train_data.json ds_type: json format: custom path: /workspace/input_data/ade6f8887ef47607_train_data.json type: field_input: source field_instruction: instruction field_output: response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 3 early_stopping_threshold: 0.001 eval_max_new_tokens: 128 eval_steps: 40 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/37ff49b1-7a8d-497d-b987-3a39395c7fae hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0003 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 100 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine micro_batch_size: 32 mlflow_experiment_name: /tmp/ade6f8887ef47607_train_data.json model_type: AutoModelForCausalLM num_epochs: 50 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true s2_attention: null sample_packing: false save_steps: 40 saves_per_epoch: 0 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.02 wandb_entity: null wandb_mode: online wandb_name: 51a32343-8514-49b5-a560-105ff57d734c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 51a32343-8514-49b5-a560-105ff57d734c warmup_ratio: 0.05 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 37ff49b1-7a8d-497d-b987-3a39395c7fae This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 300 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0010 | 1 | 0.9059 | | No log | 0.0416 | 40 | 0.6723 | | No log | 0.0833 | 80 | 0.3300 | | 1.2944 | 0.1249 | 120 | 0.2327 | | 1.2944 | 0.1666 | 160 | 0.1953 | | 0.4461 | 0.2082 | 200 | 0.1795 | | 0.4461 | 0.2499 | 240 | 0.1596 | | 0.4461 | 0.2915 | 280 | 0.1495 | | 0.3172 | 0.3332 | 320 | 0.1446 | | 0.3172 | 0.3748 | 360 | 0.1309 | | 0.3091 | 0.4164 | 400 | 0.1307 | | 0.3091 | 0.4581 | 440 | 0.1199 | | 0.3091 | 0.4997 | 480 | 0.1221 | | 0.2537 | 0.5414 | 520 | 0.1126 | | 0.2537 | 0.5830 | 560 | 0.1156 | | 0.2476 | 0.6247 | 600 | 0.1073 | | 0.2476 | 0.6663 | 640 | 0.0984 | | 0.2476 | 0.7080 | 680 | 0.1069 | | 0.2129 | 0.7496 | 720 | 0.1014 | | 0.2129 | 0.7913 | 760 | 0.0912 | | 0.1875 | 0.8329 | 800 | 0.0920 | | 0.1875 | 0.8745 | 840 | 0.0915 | | 0.1875 | 0.9162 | 880 | 0.0922 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
antimage88/693f8f58-1733-4d0d-8449-cd2a9fe5f38b
antimage88
2025-02-05T18:31:18Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-1.7B-Instruct", "base_model:adapter:unsloth/SmolLM2-1.7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:04:44Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-1.7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 693f8f58-1733-4d0d-8449-cd2a9fe5f38b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM2-1.7B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 5f212321670d5d4c_train_data.json ds_type: json format: custom path: /workspace/input_data/5f212321670d5d4c_train_data.json type: field_instruction: instruction field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: antimage88/693f8f58-1733-4d0d-8449-cd2a9fe5f38b hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 400 micro_batch_size: 8 mlflow_experiment_name: /tmp/5f212321670d5d4c_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 986966cb-3c4c-4b10-a74c-6315b04fc713 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 986966cb-3c4c-4b10-a74c-6315b04fc713 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 693f8f58-1733-4d0d-8449-cd2a9fe5f38b This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2953 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 337 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.6052 | 0.0030 | 1 | 1.8155 | | 1.6604 | 0.1484 | 50 | 1.4918 | | 1.4906 | 0.2967 | 100 | 1.4147 | | 1.4451 | 0.4451 | 150 | 1.3642 | | 1.3923 | 0.5935 | 200 | 1.3278 | | 1.4482 | 0.7418 | 250 | 1.3078 | | 1.4791 | 0.8902 | 300 | 1.2953 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
cimol/de3a36ea-3e96-4c55-b6f2-17cfdf156163
cimol
2025-02-05T18:30:44Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Hermes-3-Llama-3.1-8B", "base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B", "region:us" ]
null
2025-02-05T16:27:13Z
--- library_name: peft base_model: unsloth/Hermes-3-Llama-3.1-8B tags: - axolotl - generated_from_trainer model-index: - name: de3a36ea-3e96-4c55-b6f2-17cfdf156163 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Hermes-3-Llama-3.1-8B bf16: true chat_template: llama3 data_processes: 24 dataset_prepared_path: null datasets: - data_files: - cb0718283a3dcb0e_train_data.json ds_type: json format: custom path: /workspace/input_data/cb0718283a3dcb0e_train_data.json type: field_instruction: instruction field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 4 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: cimol/de3a36ea-3e96-4c55-b6f2-17cfdf156163 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 7.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.04 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine lr_scheduler_warmup_steps: 50 max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/cb0718283a3dcb0e_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-8 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null seed: 17333 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer total_train_batch_size: 32 train_batch_size: 8 train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c00bf7ed-9f09-4b70-bef5-b35166416a69 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c00bf7ed-9f09-4b70-bef5-b35166416a69 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # de3a36ea-3e96-4c55-b6f2-17cfdf156163 This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5936 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 17333 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-8 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5195 | 0.0001 | 1 | 1.1938 | | 1.1077 | 0.0055 | 50 | 0.7179 | | 0.9501 | 0.0109 | 100 | 0.6286 | | 1.2016 | 0.0164 | 150 | 0.5962 | | 1.0451 | 0.0219 | 200 | 0.5936 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso/98455502-4a39-4c6c-8d00-86ec8e2e3861
lesso
2025-02-05T18:29:33Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:adapter:Qwen/Qwen2.5-3B-Instruct", "license:other", "region:us" ]
null
2025-02-05T18:13:51Z
--- library_name: peft license: other base_model: Qwen/Qwen2.5-3B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 98455502-4a39-4c6c-8d00-86ec8e2e3861 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-3B-Instruct bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 7c168a5ba6db1084_train_data.json ds_type: json format: custom path: /workspace/input_data/7c168a5ba6db1084_train_data.json type: field_instruction: topic field_output: argument format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/98455502-4a39-4c6c-8d00-86ec8e2e3861 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001007 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/7c168a5ba6db1084_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: da9bd6ad-9ce3-4a9a-8e0a-ba7f1337fc43 wandb_project: new-07 wandb_run: your_name wandb_runid: da9bd6ad-9ce3-4a9a-8e0a-ba7f1337fc43 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 98455502-4a39-4c6c-8d00-86ec8e2e3861 This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.3639 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001007 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.4817 | 0.0003 | 1 | 4.7725 | | 2.604 | 0.0139 | 50 | 2.7295 | | 2.2677 | 0.0278 | 100 | 2.4914 | | 2.2242 | 0.0417 | 150 | 2.4384 | | 2.3808 | 0.0556 | 200 | 2.3639 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
SupremoUGH/image-classification-model
SupremoUGH
2025-02-05T18:25:06Z
16
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "vision", "en", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-02-05T17:02:26Z
--- language: en tags: - image-classification - vision model-index: - name: ViT Image Classification Model sources: - https://huggingface.co/SupremoUGH/image-classification-model results: - task: name: image-classification type: image-classification metrics: - name: Accuracy value: 98.0% type: float library_name: transformers license: mit --- # Image Classification Model (ViT) This is an image classification model based on **Vision Transformer (ViT)**, fine-tuned on the **MNIST** dataset. The model is designed to classify images into one of 10 possible classes (digits 0-9). The code is compatible with Hugging Face's inference providers and can be easily deployed. ## Model Details - **Model Type**: Vision Transformer (ViT) - **Base Model**: `google/vit-base-patch16-224` - **Task**: Image Classification - **Dataset**: MNIST (handwritten digits) - **Labels**: 10 classes (0-9) ## How to Use ### Install Requirements Make sure you have the following dependencies installed: ```bash pip3 install -r requirements.txt ``` ### Run unit tests ```bash python3 -m unittest discover -s tests ```
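An inference sketch, not part of the original card — it assumes the fine-tuned weights are hosted under `SupremoUGH/image-classification-model` and that `transformers`, `torch`, and `Pillow` are installed:

```python
from PIL import Image
from transformers import pipeline

# Load the fine-tuned ViT classifier directly from the Hub (repo id assumed from this card).
classifier = pipeline("image-classification", model="SupremoUGH/image-classification-model")

# Any digit image works; the ViT preprocessor resizes it to 224x224 internally.
image = Image.open("my_digit.png").convert("RGB")
for prediction in classifier(image, top_k=3):
    print(prediction["label"], round(prediction["score"], 3))
```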
MinaMila/mistral_instbase_GermanCredit_5ep_42
MinaMila
2025-02-05T18:24:55Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/mistral-7b-instruct-v0.3", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-02-05T16:00:06Z
--- base_model: unsloth/mistral-7b-instruct-v0.3 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** MinaMila - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
lesso/36f00761-830b-4664-8818-778d0b9d1645
lesso
2025-02-05T18:22:12Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/llama-3-8b-Instruct", "base_model:adapter:unsloth/llama-3-8b-Instruct", "license:llama3", "region:us" ]
null
2025-02-05T18:06:05Z
--- library_name: peft license: llama3 base_model: unsloth/llama-3-8b-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 36f00761-830b-4664-8818-778d0b9d1645 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/llama-3-8b-Instruct bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 0864faa44b3c224c_train_data.json ds_type: json format: custom path: /workspace/input_data/0864faa44b3c224c_train_data.json type: field_instruction: label field_output: text format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/36f00761-830b-4664-8818-778d0b9d1645 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001011 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/0864faa44b3c224c_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 96ca7e7b-aee8-496c-876a-57ed5d8cbfd1 wandb_project: new-11 wandb_run: your_name wandb_runid: 96ca7e7b-aee8-496c-876a-57ed5d8cbfd1 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 36f00761-830b-4664-8818-778d0b9d1645 This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.9678 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001011 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.3647 | 0.0015 | 1 | 2.5788 | | 2.4899 | 0.0726 | 50 | 2.1420 | | 1.7869 | 0.1451 | 100 | 2.0448 | | 1.8 | 0.2177 | 150 | 1.9863 | | 2.1935 | 0.2903 | 200 | 1.9678 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
baby-dev/1d85a9cc-ef78-4cc9-933b-b83dd9e3c9ca
baby-dev
2025-02-05T18:20:42Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:adapter:Qwen/Qwen2.5-3B-Instruct", "license:other", "region:us" ]
null
2025-02-05T18:13:57Z
--- library_name: peft license: other base_model: Qwen/Qwen2.5-3B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 1d85a9cc-ef78-4cc9-933b-b83dd9e3c9ca results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 1d85a9cc-ef78-4cc9-933b-b83dd9e3c9ca This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.6118 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Faitlesses/23
Faitlesses
2025-02-05T18:20:04Z
168
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-02-05T18:19:40Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/698D67D5-4AAB-4C86-B5E8-904E03DA8CC3.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # 56 <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Faitlesses/23/tree/main) them in the Files & versions tab.
tscstudios/kvj8gjldpiyswqpppnwofmig8512_0b21941e-3c0f-4cb1-92f5-263aa983dafe
tscstudios
2025-02-05T18:19:47Z
9
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-02-05T18:19:45Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Kvj8Gjldpiyswqpppnwofmig8512_0B21941E 3C0F 4Cb1 92F5 263Aa983Dafe <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('tscstudios/kvj8gjldpiyswqpppnwofmig8512_0b21941e-3c0f-4cb1-92f5-263aa983dafe', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
jebish7/llama3.2-full
jebish7
2025-02-05T18:17:41Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-02-05T17:12:10Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
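The card above leaves its "How to Get Started" section empty, but the record's tags mark the repository as a 4-bit bitsandbytes llama checkpoint for text generation. The following is only a minimal sketch of loading such a quantized checkpoint with transformers; the repo id is a placeholder (the actual model id is not shown in this record), and the prompt is illustrative.

```python
# Hedged sketch: loading a 4-bit (bitsandbytes) causal LM with transformers.
# "your-org/your-model" is a placeholder -- substitute the actual repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "your-org/your-model"  # placeholder, not a real repo

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```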
dimasik2987/70935a03-5775-49fd-87c7-32902a2f5212
dimasik2987
2025-02-05T18:13:51Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/llama-3-8b-Instruct", "base_model:adapter:unsloth/llama-3-8b-Instruct", "license:llama3", "region:us" ]
null
2025-02-05T17:58:23Z
--- library_name: peft license: llama3 base_model: unsloth/llama-3-8b-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 70935a03-5775-49fd-87c7-32902a2f5212 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/llama-3-8b-Instruct bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 0864faa44b3c224c_train_data.json ds_type: json format: custom path: /workspace/input_data/0864faa44b3c224c_train_data.json type: field_instruction: label field_output: text format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: dimasik2987/70935a03-5775-49fd-87c7-32902a2f5212 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001004 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/0864faa44b3c224c_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 96ca7e7b-aee8-496c-876a-57ed5d8cbfd1 wandb_project: cold6 wandb_run: your_name wandb_runid: 96ca7e7b-aee8-496c-876a-57ed5d8cbfd1 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 70935a03-5775-49fd-87c7-32902a2f5212 This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.9391 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001004 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - total_eval_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.3427 | 0.0029 | 1 | 2.5774 | | 2.1071 | 0.1451 | 50 | 2.0482 | | 2.0946 | 0.2903 | 100 | 1.9842 | | 1.9532 | 0.4354 | 150 | 1.9557 | | 1.6823 | 0.5806 | 200 | 1.9391 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
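The record above documents a LoRA adapter trained with axolotl on unsloth/llama-3-8b-Instruct, but the card itself does not show how to use it. Below is a minimal, hedged sketch of attaching the adapter with peft for inference; the base-model and adapter repo ids are taken from the card's metadata, while the prompt and generation settings are purely illustrative.

```python
# Hedged sketch (not part of the original card): loading the LoRA adapter with peft.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3-8b-Instruct"
adapter_id = "dimasik2987/70935a03-5775-49fd-87c7-32902a2f5212"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights

prompt = "Summarize the benefits of LoRA fine-tuning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```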
lesso/1d5be5f0-8615-46db-92bd-f4ce372092c5
lesso
2025-02-05T18:12:48Z
7
0
peft
[ "peft", "safetensors", "dbrx", "axolotl", "generated_from_trainer", "base_model:katuni4ka/tiny-random-dbrx", "base_model:adapter:katuni4ka/tiny-random-dbrx", "region:us" ]
null
2025-02-05T17:58:04Z
--- library_name: peft base_model: katuni4ka/tiny-random-dbrx tags: - axolotl - generated_from_trainer model-index: - name: 1d5be5f0-8615-46db-92bd-f4ce372092c5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: katuni4ka/tiny-random-dbrx bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 0a30047102b434d7_train_data.json ds_type: json format: custom path: /workspace/input_data/0a30047102b434d7_train_data.json type: field_instruction: query field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/1d5be5f0-8615-46db-92bd-f4ce372092c5 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.00010017 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/0a30047102b434d7_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 0be2cc61-00d1-4153-9960-910acde865cc wandb_project: new-17 wandb_run: your_name wandb_runid: 0be2cc61-00d1-4153-9960-910acde865cc warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 1d5be5f0-8615-46db-92bd-f4ce372092c5 This model is a fine-tuned version of [katuni4ka/tiny-random-dbrx](https://huggingface.co/katuni4ka/tiny-random-dbrx) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 11.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00010017 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 23.0 | 0.0000 | 1 | 11.5 | | 23.0 | 0.0018 | 50 | 11.5 | | 23.0 | 0.0036 | 100 | 11.5 | | 23.0 | 0.0055 | 150 | 11.5 | | 23.0 | 0.0073 | 200 | 11.5 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
nathanialhunt/7201cc97-b8f6-44c7-8f66-a3c4370ad5bb
nathanialhunt
2025-02-05T18:11:55Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-1.7B-Instruct", "base_model:adapter:unsloth/SmolLM2-1.7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:05:41Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-1.7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 7201cc97-b8f6-44c7-8f66-a3c4370ad5bb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 7201cc97-b8f6-44c7-8f66-a3c4370ad5bb This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2478 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
adammandic87/d5947c3a-0c74-4628-9fae-67ed78069bd1
adammandic87
2025-02-05T18:09:29Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-1.7B-Instruct", "base_model:adapter:unsloth/SmolLM2-1.7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:05:36Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-1.7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: d5947c3a-0c74-4628-9fae-67ed78069bd1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # d5947c3a-0c74-4628-9fae-67ed78069bd1 This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3390 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
baby-dev/bb909f28-e00a-4002-8ddf-35c21fa2c2cb
baby-dev
2025-02-05T18:08:55Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-1.7B-Instruct", "base_model:adapter:unsloth/SmolLM2-1.7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-05T18:05:03Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-1.7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: bb909f28-e00a-4002-8ddf-35c21fa2c2cb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # bb909f28-e00a-4002-8ddf-35c21fa2c2cb This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
nbninh/ef588566-fb36-4db1-9172-ad4a87ddfed0
nbninh
2025-02-05T18:08:51Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "license:other", "8-bit", "bitsandbytes", "region:us" ]
null
2025-02-05T16:52:41Z
--- library_name: peft license: other base_model: Qwen/Qwen1.5-7B tags: - axolotl - generated_from_trainer model-index: - name: ef588566-fb36-4db1-9172-ad4a87ddfed0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen1.5-7B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - e5b56e23ba3d3819_train_data.json ds_type: json format: custom path: /workspace/input_data/e5b56e23ba3d3819_train_data.json type: field_input: '' field_instruction: repo_name field_output: target format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nbninh/ef588566-fb36-4db1-9172-ad4a87ddfed0 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 500 micro_batch_size: 2 mlflow_experiment_name: /tmp/e5b56e23ba3d3819_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ee954201-708b-4a7e-a2f6-25ec08ffedb2 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: ee954201-708b-4a7e-a2f6-25ec08ffedb2 warmup_steps: 50 weight_decay: 0.01 xformers_attention: true ``` </details><br> # ef588566-fb36-4db1-9172-ad4a87ddfed0 This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0017 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 285 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9044 | 1.0 | 285 | 1.0017 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
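The card above describes another axolotl-trained LoRA adapter, this time on Qwen/Qwen1.5-7B. As a complement to loading the adapter at inference time, the sketch below shows one common follow-up step: merging the adapter into the base weights so the result can be served without peft. This is not documented in the card itself; the repo ids come from the card's metadata and the output directory is illustrative.

```python
# Hedged sketch: merging a PEFT LoRA adapter into its base model for adapter-free inference.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "nbninh/ef588566-fb36-4db1-9172-ad4a87ddfed0"

model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.bfloat16)
merged = model.merge_and_unload()            # folds the LoRA deltas into the base weights
merged.save_pretrained("qwen1.5-7b-merged")  # illustrative output directory

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B")
tokenizer.save_pretrained("qwen1.5-7b-merged")
```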
shibajustfor/c97c201d-bd18-4efb-8fb1-79b3b4b7f01e
shibajustfor
2025-02-05T18:08:40Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/llama-3-8b-Instruct", "base_model:adapter:unsloth/llama-3-8b-Instruct", "license:llama3", "region:us" ]
null
2025-02-05T18:02:21Z
--- library_name: peft license: llama3 base_model: unsloth/llama-3-8b-Instruct tags: - axolotl - generated_from_trainer model-index: - name: c97c201d-bd18-4efb-8fb1-79b3b4b7f01e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # c97c201d-bd18-4efb-8fb1-79b3b4b7f01e This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0312 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mrferr3t/9a4450ba-7c32-4c32-b126-26b472e74be9
mrferr3t
2025-02-05T18:06:45Z
22
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Capybara-7B-V1", "base_model:adapter:NousResearch/Nous-Capybara-7B-V1", "license:mit", "region:us" ]
null
2025-02-05T17:42:18Z
--- library_name: peft license: mit base_model: NousResearch/Nous-Capybara-7B-V1 tags: - axolotl - generated_from_trainer model-index: - name: 9a4450ba-7c32-4c32-b126-26b472e74be9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora auto_find_batch_size: true base_model: NousResearch/Nous-Capybara-7B-V1 bf16: auto chat_template: llama3 dataloader_num_workers: 12 dataset_prepared_path: null datasets: - data_files: - 01df04c66daac7c4_train_data.json ds_type: json format: custom path: /workspace/input_data/01df04c66daac7c4_train_data.json type: field_instruction: instruction field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 3 early_stopping_threshold: 0.001 eval_max_new_tokens: 128 eval_steps: 40 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/9a4450ba-7c32-4c32-b126-26b472e74be9 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0003 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 100 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine micro_batch_size: 32 mlflow_experiment_name: /tmp/01df04c66daac7c4_train_data.json model_type: AutoModelForCausalLM num_epochs: 50 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true s2_attention: null sample_packing: false save_steps: 40 saves_per_epoch: 0 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.02 wandb_entity: null wandb_mode: online wandb_name: 261dc9c8-0266-4ccb-9c77-747c8c7940df wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 261dc9c8-0266-4ccb-9c77-747c8c7940df warmup_ratio: 0.05 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 9a4450ba-7c32-4c32-b126-26b472e74be9 This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1099 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 1.2723 | | No log | 0.0114 | 40 | 1.2370 | | No log | 0.0227 | 80 | 1.0504 | | 1.164 | 0.0341 | 120 | 0.9932 | | 1.164 | 0.0455 | 160 | 0.9648 | | 1.0065 | 0.0569 | 200 | 0.9527 | | 1.0065 | 0.0682 | 240 | 0.9454 | | 1.0065 | 0.0796 | 280 | 0.9404 | | 0.9572 | 0.0910 | 320 | 0.9369 | | 0.9572 | 0.1024 | 360 | 0.9351 | | 0.9375 | 0.1137 | 400 | 0.9329 | | 0.9375 | 0.1251 | 440 | 0.9306 | | 0.9375 | 0.1365 | 480 | 0.9277 | | 0.924 | 0.1479 | 520 | 0.9279 | | 0.924 | 0.1592 | 560 | 0.9268 | | 0.9345 | 0.1706 | 600 | 0.9266 | | 0.9345 | 0.1820 | 640 | 0.9256 | | 0.9345 | 0.1933 | 680 | 0.9271 | | 0.9338 | 0.2047 | 720 | 0.9239 | | 0.9338 | 0.2161 | 760 | 0.9229 | | 0.9254 | 0.2275 | 800 | 0.9212 | | 0.9254 | 0.2388 | 840 | 0.9205 | | 0.9254 | 0.2502 | 880 | 0.9200 | | 0.9243 | 0.2616 | 920 | 0.9201 | | 0.9243 | 0.2730 | 960 | 0.9214 | | 0.9212 | 0.2843 | 1000 | 0.9215 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso/2cbbc550-3122-4230-8aa8-a83b2350f748
lesso
2025-02-05T18:02:17Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
2025-02-05T18:00:52Z
--- library_name: peft base_model: fxmarty/tiny-llama-fast-tokenizer tags: - axolotl - generated_from_trainer model-index: - name: 2cbbc550-3122-4230-8aa8-a83b2350f748 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: fxmarty/tiny-llama-fast-tokenizer bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - e89f96913218f8de_train_data.json ds_type: json format: custom path: /workspace/input_data/e89f96913218f8de_train_data.json type: field_input: intent field_instruction: instruction field_output: response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/2cbbc550-3122-4230-8aa8-a83b2350f748 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001004 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/e89f96913218f8de_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a5f829a5-85bb-442f-a490-e8bbfde3b08d wandb_project: new-04 wandb_run: your_name wandb_runid: a5f829a5-85bb-442f-a490-e8bbfde3b08d warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 2cbbc550-3122-4230-8aa8-a83b2350f748 This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3110 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001004 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.3744 | 0.0002 | 1 | 10.3783 | | 10.347 | 0.0109 | 50 | 10.3466 | | 10.3309 | 0.0217 | 100 | 10.3301 | | 10.3147 | 0.0326 | 150 | 10.3154 | | 10.3093 | 0.0434 | 200 | 10.3110 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Best000/fbb8e6cd-8b54-49b4-a720-f73b86915043
Best000
2025-02-05T18:00:43Z
6
0
peft
[ "peft", "safetensors", "dbrx", "axolotl", "generated_from_trainer", "base_model:katuni4ka/tiny-random-dbrx", "base_model:adapter:katuni4ka/tiny-random-dbrx", "region:us" ]
null
2025-02-05T17:58:30Z
--- library_name: peft base_model: katuni4ka/tiny-random-dbrx tags: - axolotl - generated_from_trainer model-index: - name: fbb8e6cd-8b54-49b4-a720-f73b86915043 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # fbb8e6cd-8b54-49b4-a720-f73b86915043 This model is a fine-tuned version of [katuni4ka/tiny-random-dbrx](https://huggingface.co/katuni4ka/tiny-random-dbrx) on the None dataset. It achieves the following results on the evaluation set: - Loss: 11.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso/61415241-5c2d-45f8-bdbe-25c730a21e06
lesso
2025-02-05T18:00:19Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B", "base_model:adapter:unsloth/Qwen2.5-0.5B", "license:apache-2.0", "region:us" ]
null
2025-02-05T17:53:06Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B tags: - axolotl - generated_from_trainer model-index: - name: 61415241-5c2d-45f8-bdbe-25c730a21e06 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-0.5B bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 4a0f187d0b523501_train_data.json ds_type: json format: custom path: /workspace/input_data/4a0f187d0b523501_train_data.json type: field_input: title field_instruction: question field_output: answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/61415241-5c2d-45f8-bdbe-25c730a21e06 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001004 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/4a0f187d0b523501_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ee9aace3-5889-4a15-94df-c5f659d02b95 wandb_project: new-04 wandb_run: your_name wandb_runid: ee9aace3-5889-4a15-94df-c5f659d02b95 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 61415241-5c2d-45f8-bdbe-25c730a21e06 This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 3.0396 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001004 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.8475 | 0.0002 | 1 | 5.1437 | | 3.4459 | 0.0121 | 50 | 3.3723 | | 2.6063 | 0.0241 | 100 | 3.1282 | | 3.7746 | 0.0362 | 150 | 3.0617 | | 2.3282 | 0.0483 | 200 | 3.0396 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
0x1202/f9896282-8f4f-4d92-a0e1-3698ee72ab87
0x1202
2025-02-05T17:58:10Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "license:other", "region:us" ]
null
2025-02-05T17:10:34Z
--- library_name: peft license: other base_model: Qwen/Qwen1.5-7B tags: - axolotl - generated_from_trainer model-index: - name: f9896282-8f4f-4d92-a0e1-3698ee72ab87 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen1.5-7B bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - e5b56e23ba3d3819_train_data.json ds_type: json format: custom path: /workspace/input_data/e5b56e23ba3d3819_train_data.json type: field_input: '' field_instruction: repo_name field_output: target format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: 0x1202/f9896282-8f4f-4d92-a0e1-3698ee72ab87 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 400 micro_batch_size: 8 mlflow_experiment_name: /tmp/e5b56e23ba3d3819_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ee954201-708b-4a7e-a2f6-25ec08ffedb2 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: ee954201-708b-4a7e-a2f6-25ec08ffedb2 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f9896282-8f4f-4d92-a0e1-3698ee72ab87 This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0181 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 143 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.9784 | 0.0070 | 1 | 1.1685 | | 0.946 | 0.3509 | 50 | 1.0318 | | 1.0628 | 0.7018 | 100 | 1.0181 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
shibajustfor/729854bc-aaa2-4bbc-b7e9-63c50cc4722e
shibajustfor
2025-02-05T17:54:52Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B", "base_model:adapter:unsloth/Qwen2.5-0.5B", "license:apache-2.0", "region:us" ]
null
2025-02-05T17:51:16Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B tags: - axolotl - generated_from_trainer model-index: - name: 729854bc-aaa2-4bbc-b7e9-63c50cc4722e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 729854bc-aaa2-4bbc-b7e9-63c50cc4722e This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mradermacher/TinyAlpaca-i1-GGUF
mradermacher
2025-02-05T17:54:39Z
358
0
transformers
[ "transformers", "gguf", "en", "base_model:mlabonne/TinyAlpaca", "base_model:quantized:mlabonne/TinyAlpaca", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-02-05T17:02:40Z
--- base_model: mlabonne/TinyAlpaca language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/mlabonne/TinyAlpaca <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/TinyAlpaca-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-IQ2_S.gguf) | i1-IQ2_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-IQ2_M.gguf) | i1-IQ2_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-Q2_K.gguf) | i1-Q2_K | 0.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-IQ3_S.gguf) | i1-IQ3_S | 0.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-IQ3_M.gguf) | i1-IQ3_M | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.7 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-Q4_0.gguf) | i1-Q4_0 | 0.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.7 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-Q4_1.gguf) | i1-Q4_1 | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/TinyAlpaca-i1-GGUF/resolve/main/TinyAlpaca.i1-Q6_K.gguf) | i1-Q6_K | 1.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
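The GGUF card above points readers to TheBloke's READMEs for usage instructions but gives no example of its own. The following is a minimal sketch, not taken from the card, of downloading one of the listed quants and running it locally with llama-cpp-python; the repo id and filename come from the quant table, while the context size, prompt, and sampling settings are illustrative assumptions.

```python
# Hedged sketch: running one of the listed GGUF quants with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/TinyAlpaca-i1-GGUF",
    filename="TinyAlpaca.i1-Q4_K_M.gguf",  # the "fast, recommended" quant from the table
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Instruction: Write a haiku about llamas.\nResponse:", max_tokens=64)
print(out["choices"][0]["text"])
```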
daniel40/e41791f8-5135-4de5-9729-f4a6c04143df
daniel40
2025-02-05T17:54:11Z
22
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Capybara-7B-V1", "base_model:adapter:NousResearch/Nous-Capybara-7B-V1", "license:mit", "region:us" ]
null
2025-02-05T17:40:23Z
--- library_name: peft license: mit base_model: NousResearch/Nous-Capybara-7B-V1 tags: - axolotl - generated_from_trainer model-index: - name: e41791f8-5135-4de5-9729-f4a6c04143df results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # e41791f8-5135-4de5-9729-f4a6c04143df This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0446 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso/e39b3b25-94d3-4040-b0cd-069ca4afcb3c
lesso
2025-02-05T17:53:07Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B", "base_model:adapter:unsloth/Qwen2.5-0.5B", "license:apache-2.0", "region:us" ]
null
2025-02-05T17:46:03Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B tags: - axolotl - generated_from_trainer model-index: - name: e39b3b25-94d3-4040-b0cd-069ca4afcb3c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-0.5B bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 4a0f187d0b523501_train_data.json ds_type: json format: custom path: /workspace/input_data/4a0f187d0b523501_train_data.json type: field_input: title field_instruction: question field_output: answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/e39b3b25-94d3-4040-b0cd-069ca4afcb3c hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001013 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/4a0f187d0b523501_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ee9aace3-5889-4a15-94df-c5f659d02b95 wandb_project: new-13 wandb_run: your_name wandb_runid: ee9aace3-5889-4a15-94df-c5f659d02b95 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e39b3b25-94d3-4040-b0cd-069ca4afcb3c This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 3.0407 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001013 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.8475 | 0.0002 | 1 | 5.1437 | | 3.4066 | 0.0121 | 50 | 3.3781 | | 2.6165 | 0.0241 | 100 | 3.1314 | | 3.7548 | 0.0362 | 150 | 3.0649 | | 2.337 | 0.0483 | 200 | 3.0407 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
botenius/2c581266-f14f-4a62-b8df-ee5fce706110
botenius
2025-02-05T17:52:54Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "license:other", "8-bit", "bitsandbytes", "region:us" ]
null
2025-02-05T16:52:29Z
--- library_name: peft license: other base_model: Qwen/Qwen1.5-7B tags: - axolotl - generated_from_trainer model-index: - name: 2c581266-f14f-4a62-b8df-ee5fce706110 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen1.5-7B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - e5b56e23ba3d3819_train_data.json ds_type: json format: custom path: /workspace/input_data/e5b56e23ba3d3819_train_data.json type: field_input: '' field_instruction: repo_name field_output: target format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 2 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: true hub_model_id: botenius/2c581266-f14f-4a62-b8df-ee5fce706110 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 500 micro_batch_size: 2 mlflow_experiment_name: /tmp/e5b56e23ba3d3819_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: ee954201-708b-4a7e-a2f6-25ec08ffedb2 wandb_project: Gradients-On-13 wandb_run: your_name wandb_runid: ee954201-708b-4a7e-a2f6-25ec08ffedb2 warmup_steps: 5 weight_decay: 0.01 xformers_attention: null ``` </details><br> # 2c581266-f14f-4a62-b8df-ee5fce706110 This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0218 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.8928 | 0.8772 | 500 | 1.0218 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso/b0b54be9-2837-409f-9c19-abd09d2ca4b5
lesso
2025-02-05T17:52:52Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B", "base_model:adapter:unsloth/Qwen2.5-0.5B", "license:apache-2.0", "region:us" ]
null
2025-02-05T17:45:50Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B tags: - axolotl - generated_from_trainer model-index: - name: b0b54be9-2837-409f-9c19-abd09d2ca4b5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-0.5B bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 4a0f187d0b523501_train_data.json ds_type: json format: custom path: /workspace/input_data/4a0f187d0b523501_train_data.json type: field_input: title field_instruction: question field_output: answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: true hub_model_id: lesso/b0b54be9-2837-409f-9c19-abd09d2ca4b5 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001009 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: linear max_grad_norm: 1.0 max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/G.O.D/4a0f187d0b523501_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ee9aace3-5889-4a15-94df-c5f659d02b95 wandb_project: new-09 wandb_run: your_name wandb_runid: ee9aace3-5889-4a15-94df-c5f659d02b95 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # b0b54be9-2837-409f-9c19-abd09d2ca4b5 This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 3.0391

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001009
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8475        | 0.0002 | 1    | 5.1437          |
| 3.4738        | 0.0121 | 50   | 3.3735          |
| 2.5911        | 0.0241 | 100  | 3.1307          |
| 3.765         | 0.0362 | 150  | 3.0617          |
| 2.3597        | 0.0483 | 200  | 3.0391          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
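Because this checkpoint is a LoRA adapter stored with PEFT rather than a full model, inference requires attaching it to its base model. A minimal sketch, assuming the repo ids named in this card; the prompt is chosen purely for illustration.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-0.5B"                            # base model named in the card
adapter_id = "lesso/b0b54be9-2837-409f-9c19-abd09d2ca4b5"   # this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)         # attach the LoRA weights

# Illustrative prompt; the adapter was tuned on a question/answer-style dataset per the config above.
inputs = tokenizer("What is the capital of France?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```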
baby-dev/cdc3a99f-933e-4d62-83ce-2e5d9f17245e
baby-dev
2025-02-05T17:50:36Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B", "base_model:adapter:unsloth/Qwen2.5-0.5B", "license:apache-2.0", "region:us" ]
null
2025-02-05T17:46:23Z
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cdc3a99f-933e-4d62-83ce-2e5d9f17245e
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

# cdc3a99f-933e-4d62-83ce-2e5d9f17245e

This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1472

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
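When a single standalone checkpoint is preferred over the base-model-plus-adapter pair, the LoRA deltas can be folded back into the base weights. A sketch, assuming the ids from this card; the output directory is a hypothetical placeholder.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-0.5B")
model = PeftModel.from_pretrained(base, "baby-dev/cdc3a99f-933e-4d62-83ce-2e5d9f17245e")

merged = model.merge_and_unload()        # fold the LoRA deltas into the base weights
merged.save_pretrained("merged-model")   # hypothetical output directory
```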
havinash-ai/823e64e3-4bef-4b54-afd2-6eeb76ac5646
havinash-ai
2025-02-05T17:49:43Z
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
2025-02-05T17:48:01Z
---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 823e64e3-4bef-4b54-afd2-6eeb76ac5646
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

# 823e64e3-4bef-4b54-afd2-6eeb76ac5646

This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.2275

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
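The evaluation loss reported in these cards can be converted to perplexity by exponentiation, assuming it is the standard causal-LM loss (mean cross-entropy per token, in nats) that the Trainer normally reports. A tiny sketch using the value above:

```python
import math

eval_loss = 10.2275                  # value reported in the card above
perplexity = math.exp(eval_loss)     # perplexity = exp(cross-entropy in nats)
print(f"perplexity is approximately {perplexity:,.0f}")   # roughly 2.8e4, as expected for a tiny test model
```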
shibajustfor/5f33a8d5-b6f4-464c-b4b4-2cab1c169b62
shibajustfor
2025-02-05T17:49:33Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B", "base_model:adapter:unsloth/Qwen2.5-0.5B", "license:apache-2.0", "region:us" ]
null
2025-02-05T17:45:57Z
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5f33a8d5-b6f4-464c-b4b4-2cab1c169b62
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

# 5f33a8d5-b6f4-464c-b4b4-2cab1c169b62

This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1732

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
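PEFT also provides a one-call loader that reads the adapter config and pulls in the base model automatically, which is convenient when browsing many adapters like the ones in this dump. A sketch, assuming this card's repo id:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "shibajustfor/5f33a8d5-b6f4-464c-b4b4-2cab1c169b62"

# Reads adapter_config.json, downloads the referenced base model (unsloth/Qwen2.5-0.5B),
# and attaches the LoRA adapter in one step.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B")
```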