Dataset schema (column, type, observed min/max):

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-26 18:27:55 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (499 classes) | — | — |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | — | — |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-26 18:27:32 |
| card | string (length) | 11 | 1.01M |
modelId: Triangle104/DeepSeek-R1-Distill-Qwen-14B-uncensored-Q4_K_M-GGUF
author: Triangle104
last_modified: 2025-01-29T09:27:33Z
downloads: 2,777
likes: 1
library_name: transformers
tags: [ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:thirdeyeai/DeepSeek-R1-Distill-Qwen-14B-uncensored", "base_model:quantized:thirdeyeai/DeepSeek-R1-Distill-Qwen-14B-uncensored", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
pipeline_tag: null
createdAt: 2025-01-29T09:26:55Z
card:
---
library_name: transformers
license: mit
base_model: thirdeyeai/DeepSeek-R1-Distill-Qwen-14B-uncensored
tags:
- llama-cpp
- gguf-my-repo
---

# Triangle104/DeepSeek-R1-Distill-Qwen-14B-uncensored-Q4_K_M-GGUF

This model was converted to GGUF format from [`thirdeyeai/DeepSeek-R1-Distill-Qwen-14B-uncensored`](https://huggingface.co/thirdeyeai/DeepSeek-R1-Distill-Qwen-14B-uncensored) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/thirdeyeai/DeepSeek-R1-Distill-Qwen-14B-uncensored) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/DeepSeek-R1-Distill-Qwen-14B-uncensored-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-uncensored-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Triangle104/DeepSeek-R1-Distill-Qwen-14B-uncensored-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-uncensored-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Triangle104/DeepSeek-R1-Distill-Qwen-14B-uncensored-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-uncensored-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo Triangle104/DeepSeek-R1-Distill-Qwen-14B-uncensored-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-uncensored-q4_k_m.gguf -c 2048
```
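The same GGUF file can also be used from Python. A minimal sketch, assuming the llama-cpp-python bindings are installed (`pip install llama-cpp-python`) and that its `Llama.from_pretrained` helper is available in your version; repo and file names are taken from the card above:

```python
# Sketch: run the quantized GGUF with llama-cpp-python.
from llama_cpp import Llama

# Downloads the GGUF from the Hub on first use (same repo/file as the CLI example).
llm = Llama.from_pretrained(
    repo_id="Triangle104/DeepSeek-R1-Distill-Qwen-14B-uncensored-Q4_K_M-GGUF",
    filename="deepseek-r1-distill-qwen-14b-uncensored-q4_k_m.gguf",
    n_ctx=2048,  # context size, mirroring the llama-server example
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```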
modelId: roleplaiapp/Chocolatine-2-14B-Instruct-v2.0b2-i1-Q3_K_M-GGUF
author: roleplaiapp
last_modified: 2025-01-29T09:26:10Z
downloads: 13
likes: 0
library_name: transformers
tags: [ "transformers", "gguf", "14b", "3-bit", "Q3_K_M", "chocolatine", "instruct", "llama-cpp", "text-generation", "v20b2", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
pipeline_tag: text-generation
createdAt: 2025-01-29T09:25:41Z
card:
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 14b
- 3-bit
- Q3_K_M
- chocolatine
- gguf
- instruct
- llama-cpp
- text-generation
- v20b2
---

# roleplaiapp/Chocolatine-2-14B-Instruct-v2.0b2-i1-Q3_K_M-GGUF

**Repo:** `roleplaiapp/Chocolatine-2-14B-Instruct-v2.0b2-i1-Q3_K_M-GGUF`
**Original Model:** `Chocolatine-2-14B-Instruct-v2.0b2-i1`
**Quantized File:** `Chocolatine-2-14B-Instruct-v2.0b2.i1-Q3_K_M.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q3_K_M`

## Overview

This is a GGUF Q3_K_M quantized version of Chocolatine-2-14B-Instruct-v2.0b2-i1.

## Quantization By

I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)
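This card gives no usage instructions. A minimal sketch of fetching and running the quantized file locally, assuming llama.cpp is installed as in the previous card and `huggingface_hub[cli]` is available (the prompt is illustrative):

```bash
# Sketch: download the Q3_K_M file from the Hub and run it with llama.cpp.
huggingface-cli download roleplaiapp/Chocolatine-2-14B-Instruct-v2.0b2-i1-Q3_K_M-GGUF \
  Chocolatine-2-14B-Instruct-v2.0b2.i1-Q3_K_M.gguf --local-dir .

llama-cli -m Chocolatine-2-14B-Instruct-v2.0b2.i1-Q3_K_M.gguf -p "Bonjour," -n 64
```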
modelId: laquythang/aa8b8206-5ea3-43b2-9a05-409e12f7645a
author: laquythang
last_modified: 2025-01-29T09:24:21Z
downloads: 10
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-1.5B", "base_model:adapter:Qwen/Qwen2.5-1.5B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-29T09:06:51Z
card:
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aa8b8206-5ea3-43b2-9a05-409e12f7645a
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - d8bb17718bb8d883_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/d8bb17718bb8d883_train_data.json
  type:
    field_instruction: instruction
    field_output: output
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/aa8b8206-5ea3-43b2-9a05-409e12f7645a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d8bb17718bb8d883_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8669d303-67c9-4c29-bce8-03e81b1074bc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8669d303-67c9-4c29-bce8-03e81b1074bc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```

</details><br>

# aa8b8206-5ea3-43b2-9a05-409e12f7645a

This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.9308

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.805         | 0.0325 | 200  | 1.9308          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
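The card includes no inference snippet. A minimal sketch of loading this LoRA adapter onto its Qwen2.5-1.5B base, assuming `peft` and `transformers` are installed (the prompt and generation settings are illustrative):

```python
# Sketch: attach the LoRA adapter from this repo to its base model for inference.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-1.5B"
adapter_id = "laquythang/aa8b8206-5ea3-43b2-9a05-409e12f7645a"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # loads the adapter weights

inputs = tokenizer("Write a haiku about gradients.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```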
modelId: VitoCorleone72/MBB
author: VitoCorleone72
last_modified: 2025-01-29T09:22:19Z
downloads: 78
likes: 0
library_name: diffusers
tags: [ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
pipeline_tag: text-to-image
createdAt: 2025-01-29T09:22:09Z
card:
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
  output:
    url: images/ComfyUI_00044_.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---

# MBB

<Gallery />

## Download model

Weights for this model are available in Safetensors format.

[Download](/VitoCorleone72/MBB/tree/main) them in the Files & versions tab.
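The card only links the weights. A minimal text-to-image sketch with `diffusers`, assuming the LoRA loads onto its FLUX.1-dev base (FLUX.1-dev is a gated repo, so a Hub token with access is required; the prompt is illustrative):

```python
# Sketch: apply this LoRA to its FLUX.1-dev base model with diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("VitoCorleone72/MBB")  # LoRA weights from this repo

image = pipe("portrait photo, soft light", num_inference_steps=28).images[0]
image.save("mbb_sample.png")
```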
modelId: kk-aivio/f264cf5c-fda0-49fc-8665-a7f7d352b8a7
author: kk-aivio
last_modified: 2025-01-29T09:20:59Z
downloads: 10
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-7b-v1.3", "base_model:adapter:lmsys/vicuna-7b-v1.3", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-29T09:18:32Z
card:
---
library_name: peft
base_model: lmsys/vicuna-7b-v1.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f264cf5c-fda0-49fc-8665-a7f7d352b8a7
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
adapter: lora
base_model: lmsys/vicuna-7b-v1.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - f7da12f378f99980_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/f7da12f378f99980_train_data.json
  type:
    field_input: domain.suggestion
    field_instruction: source-text
    field_output: target-text
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/f264cf5c-fda0-49fc-8665-a7f7d352b8a7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/f7da12f378f99980_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1d200228-f16b-4061-914f-7f934da68e0f
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1d200228-f16b-4061-914f-7f934da68e0f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# f264cf5c-fda0-49fc-8665-a7f7d352b8a7

This model is a fine-tuned version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1704

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0044 | 1    | 1.3869          |
| 1.1191        | 0.0577 | 13   | 0.5260          |
| 0.4626        | 0.1154 | 26   | 0.2275          |
| 0.235         | 0.1731 | 39   | 0.1704          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
modelId: nttx/e90e230e-78bb-4a4d-9522-46d0065957d5
author: nttx
last_modified: 2025-01-29T09:20:15Z
downloads: 9
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-7b-v1.3", "base_model:adapter:lmsys/vicuna-7b-v1.3", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-29T09:12:45Z
card:
---
library_name: peft
base_model: lmsys/vicuna-7b-v1.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e90e230e-78bb-4a4d-9522-46d0065957d5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
adapter: lora
base_model: lmsys/vicuna-7b-v1.3
bf16: auto
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
  - f7da12f378f99980_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/f7da12f378f99980_train_data.json
  type:
    field_input: domain.suggestion
    field_instruction: source-text
    field_output: target-text
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/e90e230e-78bb-4a4d-9522-46d0065957d5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
  0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/f7da12f378f99980_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1d200228-f16b-4061-914f-7f934da68e0f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1d200228-f16b-4061-914f-7f934da68e0f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# e90e230e-78bb-4a4d-9522-46d0065957d5

This model is a fine-tuned version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0867

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 113

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0668        | 0.9933 | 112  | 0.0877          |
| 0.1373        | 1.0067 | 113  | 0.0867          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
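Since these cards embed the full axolotl config in the `<details>` block, the runs are in principle reproducible. A sketch of how such a config is typically launched with axolotl 0.4.1, assuming the YAML above is saved locally as `config.yaml` (a hypothetical filename) and the referenced training JSON exists:

```bash
# Sketch: reproduce an axolotl run from the saved config (axolotl 0.4.x CLI).
# Assumes the dataset file from the config is present under /workspace/input_data/.
accelerate launch -m axolotl.cli.train config.yaml
```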
modelId: haryoaw/cola_meta-llama-Llama-3.1-8B_2_0.70
author: haryoaw
last_modified: 2025-01-29T09:17:22Z
downloads: 6
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-01-29T09:11:38Z
card:
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, with a short stub on the training data and any pre-processing or filtering. -->

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- Throughput, start/end time, checkpoint size, etc. -->

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

<!-- What the evaluation disaggregates by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- The evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here. -->

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
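The "How to Get Started" section above is empty. A generic sketch for a text-generation checkpoint like this one, assuming it loads with stock `transformers` classes (untested for this particular repo; the prompt is illustrative):

```python
# Sketch: generic loading code for a transformers text-generation checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="haryoaw/cola_meta-llama-Llama-3.1-8B_2_0.70",
    device_map="auto",  # place the 8B model across available devices
)
print(generator("The CoLA task asks whether", max_new_tokens=30)[0]["generated_text"])
```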
modelId: robiual-awal/cef3dfe8-e4a6-439c-aa64-ed79e26c6da6
author: robiual-awal
last_modified: 2025-01-29T09:16:23Z
downloads: 9
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-7b-v1.3", "base_model:adapter:lmsys/vicuna-7b-v1.3", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-29T09:15:09Z
card:
---
library_name: peft
base_model: lmsys/vicuna-7b-v1.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cef3dfe8-e4a6-439c-aa64-ed79e26c6da6
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
adapter: lora
base_model: lmsys/vicuna-7b-v1.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - f7da12f378f99980_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/f7da12f378f99980_train_data.json
  type:
    field_input: domain.suggestion
    field_instruction: source-text
    field_output: target-text
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/cef3dfe8-e4a6-439c-aa64-ed79e26c6da6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/f7da12f378f99980_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1d200228-f16b-4061-914f-7f934da68e0f
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1d200228-f16b-4061-914f-7f934da68e0f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# cef3dfe8-e4a6-439c-aa64-ed79e26c6da6

This model is a fine-tuned version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1702

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0044 | 1    | 1.3869          |
| 1.1182        | 0.0577 | 13   | 0.5201          |
| 0.4584        | 0.1154 | 26   | 0.2276          |
| 0.2362        | 0.1731 | 39   | 0.1702          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
modelId: Best000/cfd6030b-977a-41c5-9d5d-749789ebea26
author: Best000
last_modified: 2025-01-29T09:16:22Z
downloads: 9
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-7b-v1.3", "base_model:adapter:lmsys/vicuna-7b-v1.3", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-29T09:15:10Z
card:
---
library_name: peft
base_model: lmsys/vicuna-7b-v1.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cfd6030b-977a-41c5-9d5d-749789ebea26
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
adapter: lora
base_model: lmsys/vicuna-7b-v1.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - f7da12f378f99980_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/f7da12f378f99980_train_data.json
  type:
    field_input: domain.suggestion
    field_instruction: source-text
    field_output: target-text
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/cfd6030b-977a-41c5-9d5d-749789ebea26
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/f7da12f378f99980_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1d200228-f16b-4061-914f-7f934da68e0f
wandb_project: Birthday-SN56-32-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1d200228-f16b-4061-914f-7f934da68e0f
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# cfd6030b-977a-41c5-9d5d-749789ebea26

This model is a fine-tuned version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2504

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0044 | 1    | 1.3869          |
| 1.2645        | 0.0577 | 13   | 1.3274          |
| 1.1873        | 0.1154 | 26   | 0.6595          |
| 0.6479        | 0.1731 | 39   | 0.2504          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
modelId: Sayan01/Phi2-CoT
author: Sayan01
last_modified: 2025-01-29T09:15:38Z
downloads: 15
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-01-29T09:12:21Z
card:
---
library_name: transformers
tags: []
---
(Auto-generated 🤗 transformers model-card template, verbatim identical to the one shown above for haryoaw/cola_meta-llama-Llama-3.1-8B_2_0.70; every field is "[More Information Needed]".)
modelId: robiulawaldev/93d5c78c-7d35-46a6-9f48-9a2707a6c657
author: robiulawaldev
last_modified: 2025-01-29T09:11:40Z
downloads: 9
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-1.5B", "base_model:adapter:Qwen/Qwen2.5-1.5B", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-29T09:06:54Z
card:
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 93d5c78c-7d35-46a6-9f48-9a2707a6c657
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - d8bb17718bb8d883_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/d8bb17718bb8d883_train_data.json
  type:
    field_instruction: instruction
    field_output: output
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiulawaldev/93d5c78c-7d35-46a6-9f48-9a2707a6c657
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/d8bb17718bb8d883_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8669d303-67c9-4c29-bce8-03e81b1074bc
wandb_project: Birthday-SN56-35-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8669d303-67c9-4c29-bce8-03e81b1074bc
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# 93d5c78c-7d35-46a6-9f48-9a2707a6c657

This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.9835

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0001 | 1    | 2.1405          |
| 2.0139        | 0.0011 | 13   | 2.0509          |
| 2.199         | 0.0021 | 26   | 2.0078          |
| 2.0619        | 0.0032 | 39   | 1.9835          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
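For deployment, an adapter like this can also be merged into the base weights so no `peft` dependency is needed at inference time. A minimal sketch, assuming `peft` and `transformers` are installed (the output directory name is hypothetical):

```python
# Sketch: merge the LoRA adapter into its Qwen2.5-1.5B base and save a standalone model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")
model = PeftModel.from_pretrained(base, "robiulawaldev/93d5c78c-7d35-46a6-9f48-9a2707a6c657")
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights

merged.save_pretrained("qwen2.5-1.5b-merged")  # hypothetical output directory
AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B").save_pretrained("qwen2.5-1.5b-merged")
```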
modelId: robiual-awal/e54becd8-0ac2-43c4-b75f-8b24f850d3f2
author: robiual-awal
last_modified: 2025-01-29T09:11:37Z
downloads: 7
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-1.5B", "base_model:adapter:Qwen/Qwen2.5-1.5B", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-29T09:06:57Z
card:
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e54becd8-0ac2-43c4-b75f-8b24f850d3f2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - d8bb17718bb8d883_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/d8bb17718bb8d883_train_data.json
  type:
    field_instruction: instruction
    field_output: output
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/e54becd8-0ac2-43c4-b75f-8b24f850d3f2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/d8bb17718bb8d883_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8669d303-67c9-4c29-bce8-03e81b1074bc
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8669d303-67c9-4c29-bce8-03e81b1074bc
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# e54becd8-0ac2-43c4-b75f-8b24f850d3f2

This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.9836

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0002 | 1    | 2.1451          |
| 2.1347        | 0.0021 | 13   | 2.0522          |
| 2.0046        | 0.0042 | 26   | 2.0003          |
| 1.8344        | 0.0063 | 39   | 1.9836          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
modelId: ajku2199/Llama-2-7b-hf_process_prob6_dataset2_n1000_seed42_epochs1_batch8_qlora
author: ajku2199
last_modified: 2025-01-29T09:11:30Z
downloads: 11
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-17T12:26:04Z
card:
---
library_name: peft
---

## Training procedure

### Framework versions

- PEFT 0.4.0
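The card names neither base model nor task; both are recorded in the adapter's own config and can be inspected before loading. A sketch (the repo name suggests a Llama-2-7b-hf QLoRA adapter, but the config is authoritative):

```python
# Sketch: inspect a PEFT adapter's config to find its base model and task type.
from peft import PeftConfig

cfg = PeftConfig.from_pretrained(
    "ajku2199/Llama-2-7b-hf_process_prob6_dataset2_n1000_seed42_epochs1_batch8_qlora"
)
print(cfg.base_model_name_or_path)  # base checkpoint the adapter was trained on
print(cfg.peft_type, cfg.task_type)  # e.g. LORA, CAUSAL_LM
```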
modelId: asaporta/speecht5_finetuned_voxpopuli_nl
author: asaporta
last_modified: 2025-01-29T09:09:28Z
downloads: 5
likes: 0
library_name: transformers
tags: [ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:facebook/voxpopuli", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
pipeline_tag: text-to-audio
createdAt: 2025-01-29T08:35:54Z
card:
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# speecht5_finetuned_voxpopuli_nl

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset. It achieves the following results on the evaluation set:
- Loss: 0.4859

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.1944        | 4.3098 | 1000 | 0.4859          |

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
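A minimal inference sketch for this fine-tune, following the usual SpeechT5 recipe from the transformers docs; it assumes a speaker x-vector taken from the CMU Arctic embeddings dataset (the index and Dutch sentence are illustrative):

```python
# Sketch: synthesize Dutch speech with the fine-tuned SpeechT5 checkpoint.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "asaporta/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Speaker embedding (512-dim x-vector); any entry from this dataset works.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Goedemorgen, dit is een test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```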
modelId: abaddon182/b6c91000-987d-4baa-ab95-97d06fff8b7c
author: abaddon182
last_modified: 2025-01-29T09:07:46Z
downloads: 9
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M", "base_model:adapter:unsloth/SmolLM2-360M", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-29T09:00:56Z
card:
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b6c91000-987d-4baa-ab95-97d06fff8b7c
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
  - 155f72bf61c52f9c_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/155f72bf61c52f9c_train_data.json
  type:
    field_input: title_main
    field_instruction: texte
    field_output: texteHtml
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: abaddon182/b6c91000-987d-4baa-ab95-97d06fff8b7c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
  0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/155f72bf61c52f9c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d46de064-6529-4c08-8755-e14ca536003f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d46de064-6529-4c08-8755-e14ca536003f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# b6c91000-987d-4baa-ab95-97d06fff8b7c

This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0798

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08, with optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3066        | 0.0071 | 1    | 0.6051          |
| 0.1238        | 0.3534 | 50   | 0.1174          |
| 0.0768        | 0.7067 | 100  | 0.0884          |
| 0.0754        | 1.0618 | 150  | 0.0832          |
| 0.0723        | 1.4152 | 200  | 0.0798          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
modelId: concept-unlearning/Meta-Llama-3-8B_ft_lora_all_novels_v4_ft_npo_gdr_lora_HICS_v4
author: concept-unlearning
last_modified: 2025-01-29T09:07:44Z
downloads: 14
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-01-29T09:03:01Z
card:
---
library_name: transformers
tags: []
---
(Auto-generated 🤗 transformers model-card template, verbatim identical to the one shown above for haryoaw/cola_meta-llama-Llama-3.1-8B_2_0.70; every field is "[More Information Needed]".)
modelId: VitoCorleone72/HellyR
author: VitoCorleone72
last_modified: 2025-01-29T09:06:43Z
downloads: 256
likes: 0
library_name: diffusers
tags: [ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
pipeline_tag: text-to-image
createdAt: 2025-01-29T09:06:40Z
card:
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
  output:
    url: images/00010-2711735610.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---

# HellyR

<Gallery />

## Download model

Weights for this model are available in Safetensors format.

[Download](/VitoCorleone72/HellyR/tree/main) them in the Files & versions tab.
modelId: kostiantynk-out/3e4ff2a0-7699-4ea2-9ccf-99cbd415d314
author: kostiantynk-out
last_modified: 2025-01-29T09:05:21Z
downloads: 9
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-13b-hf-flash", "base_model:adapter:NousResearch/CodeLlama-13b-hf-flash", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-29T08:59:40Z
card:
--- library_name: peft base_model: NousResearch/CodeLlama-13b-hf-flash tags: - axolotl - generated_from_trainer model-index: - name: 3e4ff2a0-7699-4ea2-9ccf-99cbd415d314 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-13b-hf-flash bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 3875808def965efa_train_data.json ds_type: json format: custom path: /workspace/input_data/3875808def965efa_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: kostiantynk-out/3e4ff2a0-7699-4ea2-9ccf-99cbd415d314 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/3875808def965efa_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 29f091f6-5131-4ec0-8ff6-d9601393bcfa wandb_project: Mine-SN56-1-Gradients-On-Demand wandb_run: your_name wandb_runid: 29f091f6-5131-4ec0-8ff6-d9601393bcfa warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 3e4ff2a0-7699-4ea2-9ccf-99cbd415d314 This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0416 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0002 | 1 | 1.7422 | | 3.2376 | 0.0021 | 13 | 1.3009 | | 2.6122 | 0.0041 | 26 | 1.1098 | | 2.2253 | 0.0062 | 39 | 1.0416 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
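## How to load the adapter (sketch)

A minimal sketch for attaching this LoRA adapter with PEFT, assuming the standard `PeftModel` workflow; the training config above set `trust_remote_code: true`, so the same flag is passed here:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model the adapter was trained against.
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/CodeLlama-13b-hf-flash", trust_remote_code=True
)
tok = AutoTokenizer.from_pretrained("NousResearch/CodeLlama-13b-hf-flash")

# Attach the LoRA weights from this repository.
model = PeftModel.from_pretrained(base, "kostiantynk-out/3e4ff2a0-7699-4ea2-9ccf-99cbd415d314")
```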
Dans-DiscountModels/12b-mn-dans-sakurakaze-RC
Dans-DiscountModels
2025-01-29T09:03:56Z
5
0
peft
[ "peft", "safetensors", "mistral", "arxiv:1910.09700", "base_model:PocketDoc/Dans-PersonalityEngine-V1.1.0-12b", "base_model:adapter:PocketDoc/Dans-PersonalityEngine-V1.1.0-12b", "region:us" ]
null
2025-01-29T06:13:22Z
--- base_model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
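## Loading sketch

A minimal, unverified sketch for attaching this PEFT adapter to the base model named in the card metadata:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "PocketDoc/Dans-PersonalityEngine-V1.1.0-12b"  # from the card metadata
base = AutoModelForCausalLM.from_pretrained(base_id)
tok = AutoTokenizer.from_pretrained(base_id)

# Apply the adapter weights from this repository.
model = PeftModel.from_pretrained(base, "Dans-DiscountModels/12b-mn-dans-sakurakaze-RC")
```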
ajku2199/Llama-2-7b-hf_process_prob6_dataset1_n1000_seed1_epochs1_batch8_qlora
ajku2199
2025-01-29T09:03:52Z
14
0
peft
[ "peft", "safetensors", "region:us" ]
null
2025-01-17T12:24:51Z
---
library_name: peft
---
## Training procedure

### Framework versions

- PEFT 0.4.0
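## Usage sketch

The card does not name the base checkpoint; the repository name suggests Llama-2-7b-hf trained with QLoRA, so the following sketch assumes `meta-llama/Llama-2-7b-hf` (verify against the adapter's `adapter_config.json` before use):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed base model
model = PeftModel.from_pretrained(
    base, "ajku2199/Llama-2-7b-hf_process_prob6_dataset1_n1000_seed1_epochs1_batch8_qlora"
)
```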
nttx/f64df5bd-d226-4aee-b97b-2f0a599aa61e
nttx
2025-01-29T09:02:42Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M", "base_model:adapter:unsloth/SmolLM2-360M", "license:apache-2.0", "region:us" ]
null
2025-01-29T08:58:58Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-360M tags: - axolotl - generated_from_trainer model-index: - name: f64df5bd-d226-4aee-b97b-2f0a599aa61e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM2-360M bf16: auto chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 155f72bf61c52f9c_train_data.json ds_type: json format: custom path: /workspace/input_data/155f72bf61c52f9c_train_data.json type: field_input: title_main field_instruction: texte field_output: texteHtml format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: nttx/f64df5bd-d226-4aee-b97b-2f0a599aa61e hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/155f72bf61c52f9c_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d46de064-6529-4c08-8755-e14ca536003f wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: d46de064-6529-4c08-8755-e14ca536003f warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f64df5bd-d226-4aee-b97b-2f0a599aa61e This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.1270 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.1346 | 0.7067 | 200 | 0.1270 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
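## Inference sketch

A minimal sketch for running the adapter, assuming the usual PEFT/transformers flow; the dataset fields above (`texte`, `texteHtml`, `title_main`) suggest the task rewrites French legal text as HTML, so the example prompt is only illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-360M")
tok = AutoTokenizer.from_pretrained("unsloth/SmolLM2-360M")
model = PeftModel.from_pretrained(base, "nttx/f64df5bd-d226-4aee-b97b-2f0a599aa61e")

inputs = tok("Article 1er", return_tensors="pt")  # illustrative prompt
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```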
Aleteian/Saiga-Unleashed-Q6_K-GGUF
Aleteian
2025-01-29T09:02:34Z
46
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:Aleteian/Saiga-Unleashed", "base_model:quantized:Aleteian/Saiga-Unleashed", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-29T09:01:50Z
---
base_model: Aleteian/Saiga-Unleashed
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# Aleteian/Saiga-Unleashed-Q6_K-GGUF
This model was converted to GGUF format from [`Aleteian/Saiga-Unleashed`](https://huggingface.co/Aleteian/Saiga-Unleashed) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Aleteian/Saiga-Unleashed) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Aleteian/Saiga-Unleashed-Q6_K-GGUF --hf-file saiga-unleashed-q6_k.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Aleteian/Saiga-Unleashed-Q6_K-GGUF --hf-file saiga-unleashed-q6_k.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Aleteian/Saiga-Unleashed-Q6_K-GGUF --hf-file saiga-unleashed-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Aleteian/Saiga-Unleashed-Q6_K-GGUF --hf-file saiga-unleashed-q6_k.gguf -c 2048
```
diaenra/0066a809-86ed-4161-9de4-ac0d76594f28
diaenra
2025-01-29T09:01:14Z
10
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-0.5B-Instruct", "base_model:adapter:Qwen/Qwen2-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-29T07:20:28Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 0066a809-86ed-4161-9de4-ac0d76594f28 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2-0.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 775410f20973b41e_train_data.json ds_type: json format: custom path: /workspace/input_data/775410f20973b41e_train_data.json type: field_input: rejected field_instruction: prompt field_output: chosen format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: diaenra/0066a809-86ed-4161-9de4-ac0d76594f28 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_memory: 0: 70GB micro_batch_size: 4 mlflow_experiment_name: /tmp/775410f20973b41e_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: diaenra-tao-miner wandb_mode: online wandb_name: 2a9c3890-5cf7-4888-91af-b81ebd4af89f wandb_project: tao wandb_run: diaenra wandb_runid: 2a9c3890-5cf7-4888-91af-b81ebd4af89f warmup_steps: 10 weight_decay: 0.0 xformers_attention: true ``` </details><br> # 0066a809-86ed-4161-9de4-ac0d76594f28 This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.0251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1411 | 0.9995 | 1181 | 2.0531 | | 2.069 | 1.9993 | 2362 | 2.0289 | | 1.9514 | 2.9990 | 3543 | 2.0251 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
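## Inference sketch

A minimal sketch, assuming the adapter is applied on top of the instruct base model; note that training used a llama3-style chat template, so the base tokenizer's own Qwen2 template may not match the training format exactly:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tok = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
model = PeftModel.from_pretrained(base, "diaenra/0066a809-86ed-4161-9de4-ac0d76594f28")

messages = [{"role": "user", "content": "Write a haiku about training loss."}]
prompt = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(prompt, max_new_tokens=64)
print(tok.decode(out[0][prompt.shape[-1]:], skip_special_tokens=True))
```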
gavrilstep/91599405-f6a0-4776-b22a-8d954758316b
gavrilstep
2025-01-29T09:00:59Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M", "base_model:adapter:unsloth/SmolLM2-360M", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T08:58:35Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-360M tags: - axolotl - generated_from_trainer model-index: - name: 91599405-f6a0-4776-b22a-8d954758316b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM2-360M bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 155f72bf61c52f9c_train_data.json ds_type: json format: custom path: /workspace/input_data/155f72bf61c52f9c_train_data.json type: field_input: title_main field_instruction: texte field_output: texteHtml format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device: cuda early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null evals_per_epoch: null flash_attention: false fp16: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: gavrilstep/91599405-f6a0-4776-b22a-8d954758316b hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_memory: 0: 75GiB max_steps: 39 micro_batch_size: 2 mlflow_experiment_name: /tmp/155f72bf61c52f9c_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 21 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: true trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d46de064-6529-4c08-8755-e14ca536003f wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: d46de064-6529-4c08-8755-e14ca536003f warmup_steps: 21 weight_decay: 0.02 xformers_attention: true ``` </details><br> # 91599405-f6a0-4776-b22a-8d954758316b This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 21 - training_steps: 39 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0018 | 1 | nan | | 0.0 | 0.0088 | 5 | nan | | 0.0 | 0.0177 | 10 | nan | | 0.0 | 0.0265 | 15 | nan | | 0.0 | 0.0354 | 20 | nan | | 0.0 | 0.0442 | 25 | nan | | 0.0 | 0.0530 | 30 | nan | | 0.0 | 0.0619 | 35 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF
mradermacher
2025-01-29T09:00:13Z
647
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:DoppelReflEx/MN-12B-Mimicore-GreenSnake", "base_model:quantized:DoppelReflEx/MN-12B-Mimicore-GreenSnake", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-01-29T03:23:42Z
---
base_model: DoppelReflEx/MN-12B-Mimicore-GreenSnake
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-GreenSnake

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 |  |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 |  |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 |  |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 |  |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 |  |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 |  |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 |  |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 |  |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 |  |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 |  |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF/resolve/main/MN-12B-Mimicore-GreenSnake.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
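## Usage sketch

Beyond the CLI workflows linked above, a minimal Python sketch with the `llama-cpp-python` bindings (an assumption; any GGUF-capable runtime works) might look like:

```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo via huggingface_hub.
llm = Llama.from_pretrained(
    repo_id="mradermacher/MN-12B-Mimicore-GreenSnake-i1-GGUF",
    filename="MN-12B-Mimicore-GreenSnake.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=2048,
)
print(llm("Once upon a time", max_tokens=32)["choices"][0]["text"])
```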
AMindToThink/gemma-2-2b_RMU_cyber-forget-corpus_s100_a100_layer3
AMindToThink
2025-01-29T08:56:46Z
6
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-29T08:24:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
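## Quick start sketch

The template above leaves the quick-start section empty; a minimal sketch for loading this checkpoint as an ordinary `transformers` causal LM (the repository name suggests a gemma-2-2b model unlearned with RMU on a cyber forget corpus):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "AMindToThink/gemma-2-2b_RMU_cyber-forget-corpus_s100_a100_layer3"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tok("The capital of France is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=16)
print(tok.decode(out[0], skip_special_tokens=True))
```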
ajku2199/Llama-2-7b-hf_process_prob6_dataset2_n1000_seed1_epochs1_batch8_qlora
ajku2199
2025-01-29T08:56:11Z
13
0
peft
[ "peft", "safetensors", "region:us" ]
null
2025-01-17T12:23:35Z
---
library_name: peft
---
## Training procedure

### Framework versions

- PEFT 0.4.0
great0001/737ddcdc-9119-4e0c-95f3-c6b6ef84eeac
great0001
2025-01-29T08:56:10Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-0.5B-Instruct", "base_model:adapter:Qwen/Qwen2-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-29T08:53:16Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 737ddcdc-9119-4e0c-95f3-c6b6ef84eeac results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2-0.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 775410f20973b41e_train_data.json ds_type: json format: custom path: /workspace/input_data/775410f20973b41e_train_data.json type: field_input: rejected field_instruction: prompt field_output: chosen format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: great0001/737ddcdc-9119-4e0c-95f3-c6b6ef84eeac hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/775410f20973b41e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 2a9c3890-5cf7-4888-91af-b81ebd4af89f wandb_project: Mine-SN56-20-Gradients-On-Demand wandb_run: your_name wandb_runid: 2a9c3890-5cf7-4888-91af-b81ebd4af89f warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 737ddcdc-9119-4e0c-95f3-c6b6ef84eeac This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.3296 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 2.7100 | | 2.6986 | 0.0014 | 13 | 2.4579 | | 2.5092 | 0.0028 | 26 | 2.3734 | | 2.4255 | 0.0041 | 39 | 2.3296 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
graceyun/dreambooth-sdxl-0.2
graceyun
2025-01-29T08:55:41Z
6
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-01-29T07:43:06Z
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a pixel art icon in the style of NES games, 16-bit graphics, on a transparent background
widget:
- text: a pixel art icon of a brown dog, NES style, 16-bit graphics, on a transparent background
  output:
    url: image_0.png
- text: a pixel art icon of a brown dog, NES style, 16-bit graphics, on a transparent background
  output:
    url: image_1.png
- text: a pixel art icon of a brown dog, NES style, 16-bit graphics, on a transparent background
  output:
    url: image_2.png
- text: a pixel art icon of a brown dog, NES style, 16-bit graphics, on a transparent background
  output:
    url: image_3.png
tags:
- text-to-image
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- diffusers
- lora
- template:sd-lora
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - graceyun/dreambooth-sdxl-0.2

<Gallery />

## Model description

These are graceyun/dreambooth-sdxl-0.2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: True.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `a pixel art icon in the style of NES games, 16-bit graphics, on a transparent background` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/graceyun/dreambooth-sdxl-0.2/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
# (a provisional sketch is appended at the end of this card)
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
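## Usage sketch

Until the TODO above is filled in, here is a minimal, unverified sketch that loads the LoRA on its SDXL base with the training-time VAE:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# fp16-safe VAE, named in the card as the VAE used for training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("graceyun/dreambooth-sdxl-0.2")

# Prompt taken from the card's widget examples.
prompt = "a pixel art icon of a brown dog, NES style, 16-bit graphics, on a transparent background"
pipe(prompt).images[0].save("icon.png")
```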
ethansolomon/bert-finetuned-squad
ethansolomon
2025-01-29T08:55:14Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2025-01-29T03:05:05Z
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-squad

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Tokenizers 0.21.0
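## Usage sketch

A minimal sketch, assuming the standard `question-answering` pipeline applies to this fine-tune:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="ethansolomon/bert-finetuned-squad")
result = qa(
    question="What was BERT fine-tuned on?",
    context="This BERT model was fine-tuned on a SQuAD-style question answering dataset.",
)
print(result["answer"], result["score"])
```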
duyphu/ba4c982e-9fec-404c-b0de-24e67acf7fa5
duyphu
2025-01-29T08:55:02Z
5
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Math-1.5B", "base_model:adapter:unsloth/Qwen2.5-Math-1.5B", "license:apache-2.0", "region:us" ]
null
2025-01-29T07:57:19Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Math-1.5B tags: - axolotl - generated_from_trainer model-index: - name: ba4c982e-9fec-404c-b0de-24e67acf7fa5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Math-1.5B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 3e822cd8df57cb11_train_data.json ds_type: json format: custom path: /workspace/input_data/3e822cd8df57cb11_train_data.json type: field_input: context field_instruction: question field_output: long_answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 5 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: duyphu/ba4c982e-9fec-404c-b0de-24e67acf7fa5 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 5 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/3e822cd8df57cb11_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c78acf73-ff92-4184-944e-ea8cd1f207da wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c78acf73-ff92-4184-944e-ea8cd1f207da warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # ba4c982e-9fec-404c-b0de-24e67acf7fa5 This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | nan | | 0.0 | 0.0004 | 10 | nan | | 0.0 | 0.0008 | 20 | nan | | 0.0 | 0.0012 | 30 | nan | | 0.0 | 0.0016 | 40 | nan | | 0.0 | 0.0020 | 50 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
prxy5604/767d570b-6fb9-412e-a0bb-613b3a65ea62
prxy5604
2025-01-29T08:54:30Z
8
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "license:apache-2.0", "region:us" ]
null
2025-01-29T07:51:21Z
--- library_name: peft license: apache-2.0 base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO tags: - axolotl - generated_from_trainer model-index: - name: 767d570b-6fb9-412e-a0bb-613b3a65ea62 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - f04259c91cb5f8b9_train_data.json ds_type: json format: custom path: /workspace/input_data/f04259c91cb5f8b9_train_data.json type: field_instruction: input field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: prxy5604/767d570b-6fb9-412e-a0bb-613b3a65ea62 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/f04259c91cb5f8b9_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: aac7786a-015b-44a1-9c8e-ad88dd9f945c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: aac7786a-015b-44a1-9c8e-ad88dd9f945c warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 767d570b-6fb9-412e-a0bb-613b3a65ea62 This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.2072 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.3285 | 0.0024 | 1 | 0.6434 | | 0.982 | 0.1206 | 50 | 0.2812 | | 0.7745 | 0.2413 | 100 | 0.2362 | | 1.2598 | 0.3619 | 150 | 0.2130 | | 0.7181 | 0.4825 | 200 | 0.2072 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
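## Merging sketch

A minimal sketch for folding the LoRA weights into the base model so it can be used standalone (an optional step; plain `PeftModel` inference works without it):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("NousResearch/Nous-Hermes-2-Mistral-7B-DPO")
model = PeftModel.from_pretrained(base, "prxy5604/767d570b-6fb9-412e-a0bb-613b3a65ea62")

merged = model.merge_and_unload()  # bake the adapter into the base weights
merged.save_pretrained("nous-hermes-2-mistral-7b-dpo-767d570b-merged")
```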
roleplaiapp/14B-Qwen2.5-Kunou-v1-Q8_0-GGUF
roleplaiapp
2025-01-29T08:54:04Z
5
0
transformers
[ "transformers", "gguf", "14b", "8-bit", "Q8_0", "kunou", "llama-cpp", "qwen25", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-29T08:53:06Z
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 14b
- 8-bit
- Q8_0
- gguf
- kunou
- llama-cpp
- qwen25
- text-generation
---

# roleplaiapp/14B-Qwen2.5-Kunou-v1-Q8_0-GGUF

**Repo:** `roleplaiapp/14B-Qwen2.5-Kunou-v1-Q8_0-GGUF`
**Original Model:** `14B-Qwen2.5-Kunou-v1`
**Quantized File:** `14B-Qwen2.5-Kunou-v1.Q8_0.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q8_0`

## Overview
This is a GGUF Q8_0 quantized version of 14B-Qwen2.5-Kunou-v1

## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).
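## Usage sketch

A minimal Python sketch with the `llama-cpp-python` bindings (an assumption; the quantized file also runs in any GGUF-capable runtime):

```python
from llama_cpp import Llama

# Pulls the quantized file listed above from this repository.
llm = Llama.from_pretrained(
    repo_id="roleplaiapp/14B-Qwen2.5-Kunou-v1-Q8_0-GGUF",
    filename="14B-Qwen2.5-Kunou-v1.Q8_0.gguf",
    n_ctx=2048,
)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```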
memevis/p13
memevis
2025-01-29T08:53:55Z
18
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-29T08:48:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
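## Quick start sketch

The template leaves the quick-start section empty; a minimal sketch, treating this as a plain `transformers` text-generation checkpoint (the tags indicate a llama-architecture model):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="memevis/p13")
print(generator("Hello, world.", max_new_tokens=32)[0]["generated_text"])
```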
jvelja/pythia-finetune-gpt2-NoBSGC-lr_0.0005-NoModularity-RAVEL_MIXEDFixCluster
jvelja
2025-01-29T08:53:39Z
297
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-29T08:53:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
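The quick-start code above is left as a placeholder. A minimal sketch, assuming a standard `transformers` text-generation checkpoint (the tags list `gpt2` and `text-generation`); the repo id below is a hypothetical stand-in, since the card does not name the repository:

```py
# Minimal quick-start sketch for a gpt2-style text-generation checkpoint.
# "your-username/your-model" is a hypothetical placeholder repo id.
from transformers import pipeline

generator = pipeline("text-generation", model="your-username/your-model")
print(generator("Once upon a time", max_new_tokens=32)[0]["generated_text"])
```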
great0001/7907ae57-7cab-4253-a2e6-0693de140695
great0001
2025-01-29T08:50:20Z
9
0
peft
[ "peft", "safetensors", "starcoder2", "axolotl", "generated_from_trainer", "base_model:bigcode/starcoder2-3b", "base_model:adapter:bigcode/starcoder2-3b", "license:bigcode-openrail-m", "region:us" ]
null
2025-01-29T08:45:02Z
--- library_name: peft license: bigcode-openrail-m base_model: bigcode/starcoder2-3b tags: - axolotl - generated_from_trainer model-index: - name: 7907ae57-7cab-4253-a2e6-0693de140695 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: bigcode/starcoder2-3b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - f65209fd2b79f576_train_data.json ds_type: json format: custom path: /workspace/input_data/f65209fd2b79f576_train_data.json type: field_instruction: text field_output: code format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: great0001/7907ae57-7cab-4253-a2e6-0693de140695 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/f65209fd2b79f576_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7fba0349-cbce-4a47-81c7-be27ce53fcc2 wandb_project: Mine-SN56-20-Gradients-On-Demand wandb_run: your_name wandb_runid: 7fba0349-cbce-4a47-81c7-be27ce53fcc2 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 7907ae57-7cab-4253-a2e6-0693de140695 This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3543 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 0.7522 | | 4.0906 | 0.0010 | 13 | 0.5578 | | 2.5267 | 0.0021 | 26 | 0.3773 | | 1.566 | 0.0031 | 39 | 0.3543 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
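The card omits a usage example. A minimal sketch, assuming the repo holds a standard PEFT LoRA adapter for the base model named in the axolotl config (the example instruction is made up):

```py
# Minimal sketch: load the StarCoder2 base model and attach this LoRA adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigcode/starcoder2-3b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "great0001/7907ae57-7cab-4253-a2e6-0693de140695")

# The config maps `text` -> instruction and `code` -> output with format '{instruction}',
# so a bare instruction is the expected prompt shape.
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```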
roleplaiapp/14B-Qwen2.5-Kunou-v1-Q2_K-GGUF
roleplaiapp
2025-01-29T08:50:02Z
5
0
transformers
[ "transformers", "gguf", "14b", "2-bit", "Q2_K", "kunou", "llama-cpp", "qwen25", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-29T08:49:38Z
--- library_name: transformers pipeline_tag: text-generation tags: - 14b - 2-bit - Q2_K - gguf - kunou - llama-cpp - qwen25 - text-generation --- # roleplaiapp/14B-Qwen2.5-Kunou-v1-Q2_K-GGUF **Repo:** `roleplaiapp/14B-Qwen2.5-Kunou-v1-Q2_K-GGUF` **Original Model:** `14B-Qwen2.5-Kunou-v1` **Quantized File:** `14B-Qwen2.5-Kunou-v1.Q2_K.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q2_K` ## Overview This is a GGUF Q2_K quantized version of 14B-Qwen2.5-Kunou-v1 ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
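No run instructions are included. A minimal sketch using the `llama-cpp-python` bindings (an assumption; any llama.cpp-compatible runtime that accepts GGUF files should also work):

```py
# Minimal sketch: fetch and run the quantized file via llama-cpp-python.
# Q2_K is an aggressive 2-bit quantization, so expect some quality loss.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="roleplaiapp/14B-Qwen2.5-Kunou-v1-Q2_K-GGUF",
    filename="14B-Qwen2.5-Kunou-v1.Q2_K.gguf",
)
print(llm("Hello, how are you?", max_tokens=64)["choices"][0]["text"])
```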
TareksLab/Soubrette-LLaMa-70B
TareksLab
2025-01-29T08:49:53Z
13
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2408.07990", "base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1", "base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1", "base_model:Sao10K/70B-L3.3-Cirrus-x1", "base_model:merge:Sao10K/70B-L3.3-Cirrus-x1", "base_model:Sao10K/L3.3-70B-Euryale-v2.3", "base_model:merge:Sao10K/L3.3-70B-Euryale-v2.3", "base_model:SicariusSicariiStuff/Negative_LLAMA_70B", "base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B", "base_model:Steelskull/L3.3-MS-Nevoria-70b", "base_model:merge:Steelskull/L3.3-MS-Nevoria-70b", "base_model:TheDrummer/Anubis-70B-v1", "base_model:merge:TheDrummer/Anubis-70B-v1", "license:llama3.3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-29T06:45:45Z
--- base_model: - EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1 - TheDrummer/Anubis-70B-v1 - Sao10K/L3.3-70B-Euryale-v2.3 - SicariusSicariiStuff/Negative_LLAMA_70B - Sao10K/70B-L3.3-Cirrus-x1 - Steelskull/L3.3-MS-Nevoria-70b library_name: transformers tags: - mergekit - merge license: llama3.3 --- This is a bit of an experiment: merging some good RP models together, which I will then combine with a separate merge focused on smart models. # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with [Steelskull/L3.3-MS-Nevoria-70b](https://huggingface.co/Steelskull/L3.3-MS-Nevoria-70b) as the base. ### Models Merged The following models were included in the merge: * [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1) * [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1) * [Sao10K/L3.3-70B-Euryale-v2.3](https://huggingface.co/Sao10K/L3.3-70B-Euryale-v2.3) * [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B) * [Sao10K/70B-L3.3-Cirrus-x1](https://huggingface.co/Sao10K/70B-L3.3-Cirrus-x1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: # Pivot model - model: SicariusSicariiStuff/Negative_LLAMA_70B # Target models - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1 - model: Sao10K/70B-L3.3-Cirrus-x1 - model: Sao10K/L3.3-70B-Euryale-v2.3 - model: TheDrummer/Anubis-70B-v1 merge_method: sce base_model: Steelskull/L3.3-MS-Nevoria-70b parameters: select_topk: 1.0 dtype: bfloat16 ```
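The card stops at the merge recipe. A minimal loading sketch, assuming the merged checkpoint behaves like any Llama 3.3 70B model (multi-GPU or CPU offloading via `accelerate` is assumed at this scale):

```py
# Minimal sketch: the SCE merge yields a standard Llama checkpoint, so it should
# load with plain transformers; device_map="auto" requires accelerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TareksLab/Soubrette-LLaMa-70B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)
```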
TweedleDeepLearnings/f142a79a-3072-4233-86ea-7b3bdb944110
TweedleDeepLearnings
2025-01-29T08:49:40Z
243
0
peft
[ "peft", "safetensors", "axolotl", "generated_from_trainer", "base_model:huggyllama/llama-7b", "base_model:adapter:huggyllama/llama-7b", "license:other", "region:us" ]
null
2025-01-29T06:24:46Z
--- library_name: peft license: other base_model: huggyllama/llama-7b tags: - axolotl - generated_from_trainer model-index: - name: c4b201cf-0eeb-4380-a91f-cd6329614a81 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora bf16: auto chat_template: llama3 dataset_prepared_path: null debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true gradient_clipping: 0.1 group_by_length: false hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-04 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: linear max_steps: 200 micro_batch_size: 128 mlflow_experiment_name: /tmp/aed51b8e2c089967_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 4096 special_tokens: pad_token: </PAD> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 6a8f76dd-7262-490a-905c-7b83c0f56891 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 6a8f76dd-7262-490a-905c-7b83c0f56891 warmup_steps: 5 weight_decay: 0.1 xformers_attention: true ``` </details><br> ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 128 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 2048 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
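The card gives no usage section. A minimal sketch for folding the adapter into the base weights, assuming the repo holds a standard PEFT LoRA adapter on `huggyllama/llama-7b`:

```py
# Minimal sketch: attach the LoRA adapter, then merge it into the base weights
# so the result can be saved and served as a plain model.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
model = PeftModel.from_pretrained(base, "TweedleDeepLearnings/f142a79a-3072-4233-86ea-7b3bdb944110")
merged = model.merge_and_unload()
merged.save_pretrained("./llama-7b-merged")
```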
aleegis12/2c847405-1be0-4a7a-a847-03a8d1e6da02
aleegis12
2025-01-29T08:42:40Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:openlm-research/open_llama_3b", "base_model:adapter:openlm-research/open_llama_3b", "license:apache-2.0", "region:us" ]
null
2025-01-29T08:25:21Z
--- library_name: peft license: apache-2.0 base_model: openlm-research/open_llama_3b tags: - axolotl - generated_from_trainer model-index: - name: 2c847405-1be0-4a7a-a847-03a8d1e6da02 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: openlm-research/open_llama_3b bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - 9cd0a27ec769d7cd_train_data.json ds_type: json format: custom path: /workspace/input_data/9cd0a27ec769d7cd_train_data.json type: field_input: input field_instruction: task field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: aleegis12/2c847405-1be0-4a7a-a847-03a8d1e6da02 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/9cd0a27ec769d7cd_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 6470a08d-ed3c-49de-9586-17f3c3506f49 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 6470a08d-ed3c-49de-9586-17f3c3506f49 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 2c847405-1be0-4a7a-a847-03a8d1e6da02 This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.6887 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08, and optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.3013 | 0.0032 | 1 | 3.9001 | | 2.0088 | 0.1581 | 50 | 1.4521 | | 1.1618 | 0.3162 | 100 | 0.9727 | | 0.4218 | 0.4743 | 150 | 0.7259 | | 0.6491 | 0.6324 | 200 | 0.6887 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
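As with the other auto-generated cards, no inference example is given. A minimal sketch that also reproduces the prompt template from the axolotl config (`'{instruction} {input}'`, with `task` as the instruction and `input` as the input; the example strings are hypothetical):

```py
# Minimal sketch: load the adapter and format the prompt the way training did.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "openlm-research/open_llama_3b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "aleegis12/2c847405-1be0-4a7a-a847-03a8d1e6da02")

task = "Summarize the following text."  # hypothetical `task` field
text = "Open LLaMA is a permissively licensed reproduction of LLaMA."  # hypothetical `input` field
inputs = tokenizer(f"{task} {text}", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```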
datlaaaaaaa/b975874a-37c6-405d-a888-b518c45138af
datlaaaaaaa
2025-01-29T08:42:24Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T07:42:57Z
--- library_name: peft license: apache-2.0 base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO tags: - axolotl - generated_from_trainer model-index: - name: b975874a-37c6-405d-a888-b518c45138af results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - f04259c91cb5f8b9_train_data.json ds_type: json format: custom path: /workspace/input_data/f04259c91cb5f8b9_train_data.json type: field_instruction: input field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: datlaaaaaaa/b975874a-37c6-405d-a888-b518c45138af hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/f04259c91cb5f8b9_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: aac7786a-015b-44a1-9c8e-ad88dd9f945c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: aac7786a-015b-44a1-9c8e-ad88dd9f945c warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # b975874a-37c6-405d-a888-b518c45138af This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3422 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.463 | 0.1206 | 200 | 0.3422 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
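No usage example is provided. A minimal sketch mirroring the config's 8-bit loading (`load_in_8bit: true`); `bitsandbytes` is assumed to be installed:

```py
# Minimal sketch: load the base in 8-bit, as during training, then attach the adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "datlaaaaaaa/b975874a-37c6-405d-a888-b518c45138af")
```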
mrferr3t/179d6587-7bfd-4a59-8e49-7aba5c074f6f
mrferr3t
2025-01-29T08:42:19Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-13b-hf-flash", "base_model:adapter:NousResearch/CodeLlama-13b-hf-flash", "region:us" ]
null
2025-01-29T08:23:04Z
--- library_name: peft base_model: NousResearch/CodeLlama-13b-hf-flash tags: - axolotl - generated_from_trainer model-index: - name: 179d6587-7bfd-4a59-8e49-7aba5c074f6f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-13b-hf-flash bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 3875808def965efa_train_data.json ds_type: json format: custom path: /workspace/input_data/3875808def965efa_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/179d6587-7bfd-4a59-8e49-7aba5c074f6f hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 21 micro_batch_size: 2 mlflow_experiment_name: /tmp/3875808def965efa_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 29f091f6-5131-4ec0-8ff6-d9601393bcfa wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 29f091f6-5131-4ec0-8ff6-d9601393bcfa warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 179d6587-7bfd-4a59-8e49-7aba5c074f6f This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2329 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 21 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 6.548 | 0.0003 | 1 | 1.7422 | | 6.4993 | 0.0019 | 6 | 1.7238 | | 5.7646 | 0.0038 | 12 | 1.3907 | | 5.2891 | 0.0057 | 18 | 1.2329 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
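Again no usage example. A minimal sketch; `trust_remote_code=True` mirrors the training config, and the repo is assumed to hold a standard PEFT LoRA adapter:

```py
# Minimal sketch: the training config sets trust_remote_code, so pass it here too.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/CodeLlama-13b-hf-flash"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, "mrferr3t/179d6587-7bfd-4a59-8e49-7aba5c074f6f")
```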
nbninh/048f949e-3e7a-44b6-bd74-1dc0e13a14d9
nbninh
2025-01-29T08:40:56Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-360M", "base_model:adapter:unsloth/SmolLM-360M", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T07:03:43Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-360M tags: - axolotl - generated_from_trainer model-index: - name: 048f949e-3e7a-44b6-bd74-1dc0e13a14d9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM-360M bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ac004a2a3ec8e832_train_data.json ds_type: json format: custom path: /workspace/input_data/ac004a2a3ec8e832_train_data.json type: field_input: title field_instruction: content field_output: summary1 format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nbninh/048f949e-3e7a-44b6-bd74-1dc0e13a14d9 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/ac004a2a3ec8e832_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 77344871-dc6c-43c2-89a7-28217f41b23c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 77344871-dc6c-43c2-89a7-28217f41b23c warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 048f949e-3e7a-44b6-bd74-1dc0e13a14d9 This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.9081 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.8754 | 0.0027 | 200 | 1.9081 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
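No usage example is given. A minimal summarization sketch following the axolotl config, which maps `content` to the instruction and `title` to the input (`'{instruction} {input}'`); the article and title below are hypothetical:

```py
# Minimal sketch: prompt shape is '{content} {title}', matching the training format.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/SmolLM-360M"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "nbninh/048f949e-3e7a-44b6-bd74-1dc0e13a14d9")

content = "The city council voted on Tuesday to expand the riverside park."  # hypothetical article
title = "Council expands park"  # hypothetical title
inputs = tokenizer(f"{content} {title}", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```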
great0001/988761e1-228c-4de0-9319-e7a40fcec2df
great0001
2025-01-29T08:39:41Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-1.5B-Instruct", "base_model:adapter:Qwen/Qwen2-1.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-29T08:37:16Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 988761e1-228c-4de0-9319-e7a40fcec2df results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2-1.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 226486ea217cc845_train_data.json ds_type: json format: custom path: /workspace/input_data/226486ea217cc845_train_data.json type: field_instruction: prompt field_output: caption format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: great0001/988761e1-228c-4de0-9319-e7a40fcec2df hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/226486ea217cc845_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d450f3db-bde7-42c0-80c7-58bdc98ab00b wandb_project: Mine-SN56-20-Gradients-On-Demand wandb_run: your_name wandb_runid: d450f3db-bde7-42c0-80c7-58bdc98ab00b warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 988761e1-228c-4de0-9319-e7a40fcec2df This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3982 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0002 | 1 | 2.2178 | | 1.9252 | 0.0025 | 13 | 1.5369 | | 1.5444 | 0.0049 | 26 | 1.4392 | | 1.4655 | 0.0074 | 39 | 1.3982 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
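No usage example is included. Per the config the adapter was trained `prompt` -> `caption` with format `'{instruction}'`, so a bare prompt is the expected input; a minimal sketch (the example prompt is hypothetical):

```py
# Minimal sketch: load the Qwen2 base and attach this caption-generation adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "great0001/988761e1-228c-4de0-9319-e7a40fcec2df")

inputs = tokenizer("A cat sleeping on a sunny windowsill", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0]))
```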
kartikgupta373/i3-as15776-e608215-pink-yarrow
kartikgupta373
2025-01-29T08:37:11Z
7
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-29T08:37:09Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # I3 As15776 E608215 Pink Yarrow <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('kartikgupta373/i3-as15776-e608215-pink-yarrow', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
slomkarafa/15-00
slomkarafa
2025-01-29T08:36:57Z
38
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2501.12948", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-28T20:21:06Z
--- license: mit library_name: transformers --- # DeepSeek-R1 <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> <p align="center"> <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a> </p> ## 1. Introduction We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning. With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors. However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models. 
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.** <p align="center"> <img width="80%" src="figures/benchmark.jpg"> </p> ## 2. Model Summary --- **Post-Training: Large-Scale Reinforcement Learning on the Base Model** - We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area. - We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities. We believe the pipeline will benefit the industry by creating better models. --- **Distillation: Smaller Models Can Be Powerful Too** - We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future. - Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community. ## 3. Model Downloads ### DeepSeek-R1 Models <div align="center"> | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** | | :------------: | :------------: | :------------: | :------------: | :------------: | | DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) | | DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) | </div> DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base. For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository. 
### DeepSeek-R1-Distill Models <div align="center"> | **Model** | **Base Model** | **Download** | | :------------: | :------------: | :------------: | | DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) | | DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) | | DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) | | DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) | |DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | | DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) | </div> DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1. We slightly change their configs and tokenizers. Please use our setting to run these models. ## 4. Evaluation Results ### DeepSeek-R1-Evaluation For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1. <div align="center"> | Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 | |----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------| | | Architecture | - | - | MoE | - | - | MoE | | | # Activated Params | - | - | 37B | - | - | 37B | | | # Total Params | - | - | 671B | - | - | 671B | | English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 | | | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** | | | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** | | | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** | | | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 | | | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 | | | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 | | | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** | | | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** | | | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** | | Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** | | | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 | | | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 | | | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 | | | Aider-Polyglot (Acc.) 
| 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 | | Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** | | | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** | | | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** | | Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** | | | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** | | | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 | </div> ### Distilled Model Evaluation <div align="center"> | Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating | |------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------| | GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 | | Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 | | o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** | | QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 | | DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 | | DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 | | DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 | | DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 | | DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 | | DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 | </div> ## 5. Chat Website & API Platform You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the button "DeepThink" We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/) ## 6. How to Run Locally ### DeepSeek-R1 Models Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally. **NOTE: Hugging Face's Transformers has not been directly supported yet.** ### DeepSeek-R1-Distill Models DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models. For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm): ```shell vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager ``` You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang) ```bash python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2 ``` ### Usage Recommendations **We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:** 1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs. 2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.** 3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}." 4. When evaluating model performance, it is recommended to conduct multiple tests and average the results. ## 7. License This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE). 
DeepSeek-R1 series support commercial use, allow for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that: - DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1. - DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE). - DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE). ## 8. Citation ``` @misc{deepseekai2025deepseekr1incentivizingreasoningcapability, title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning}, author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. 
Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang}, year={2025}, eprint={2501.12948}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2501.12948}, } ``` ## 9. Contact If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
lesso01/ed107b85-be2f-4133-99bd-6b7db78dfef3
lesso01
2025-01-29T08:36:56Z
7
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T08:28:11Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: ed107b85-be2f-4133-99bd-6b7db78dfef3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-0.5B-Instruct bf16: auto chat_template: llama3 datasets: - data_files: - ce7fcd2d05dffaef_train_data.json ds_type: json format: custom path: /workspace/input_data/ce7fcd2d05dffaef_train_data.json type: field_input: original_dataset field_instruction: original_question field_output: object_level_prompt format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso01/ed107b85-be2f-4133-99bd-6b7db78dfef3 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/ce7fcd2d05dffaef_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 54f52b18-f019-41eb-b70b-23aa1dcdada5 wandb_project: new-01-29 wandb_run: your_name wandb_runid: 54f52b18-f019-41eb-b70b-23aa1dcdada5 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # ed107b85-be2f-4133-99bd-6b7db78dfef3 This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0026 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0016 | 0.6467 | 200 | 0.0026 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
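No usage example is provided here either. A minimal sketch mirroring the config's 8-bit loading (`load_in_8bit: true`), assuming a standard PEFT LoRA adapter:

```py
# Minimal sketch: 8-bit base load plus the LoRA adapter from this repo.
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "lesso01/ed107b85-be2f-4133-99bd-6b7db78dfef3")
```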
Man-tas/Coloring-Book-Flux-LoRA
Man-tas
2025-01-29T08:36:52Z
22
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-01-29T08:25:05Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: 'Coloring Book, A black and white drawing of a truck parked in front of a house. The truck is facing towards the right side of the image. There is a large tree to the right of the truck. There are small bushes to the left of the house. A fence is behind the truck on the right. The house has a roof that is made up of wood. The sky above the house is filled with fluffy white clouds.' output: url: images/EB1.png - text: 'Coloring Book, A black and white pencil sketch of a fox standing on its hind legs. The foxs fur is a light brown color, and its ears are a darker brown. Its eyes are black, and the foxs mouth is slightly open, as if it is about to go into the water. The foxs ears are sticking up, and it has a black nose and black eyes. There is a tree trunk on the left side of the image, and there are clouds in the sky. There are plants on the right and left of the fox.' output: url: images/EB2.png - text: 'Coloring Book, An eye-level view of a white sports car, the car is facing towards the right. The car is positioned in front of a backdrop of a cityscape, with the city skyline in the background. It is a black and white monochromatic image, with black accents on the cars body and the hood. The front of the car has a large headlight in the center of the front, and a black bumper with a yellow emblem on the right side of the headlight. On the left side, there are two large black tires with silver rims on them.' output: url: images/EB3.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: Coloring Book license: creativeml-openrail-m --- # Coloring-Book-Flux-LoRA <Gallery /> **The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.** ## Model description **prithivMLmods/Coloring-Book-Flux-LoRA** Image Processing Parameters | Parameter | Value | Parameter | Value | |---------------------------|--------|---------------------------|--------| | LR Scheduler | constant | Noise Offset | 0.03 | | Optimizer | AdamW | Multires Noise Discount | 0.1 | | Network Dim | 64 | Multires Noise Iterations | 10 | | Network Alpha | 32 | Repeat & Steps | 20 & 2000 | | Epoch | 10 | Save Every N Epochs | 1 | Labeling: florence2-en (natural language & English) Total Images Used for Training: 10 [ Hi-RES ] ## Best Dimensions - 1024 x 1024 (Default) ## Setting Up ``` import torch from diffusers import DiffusionPipeline base_model = "black-forest-labs/FLUX.1-dev" pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16) lora_repo = "prithivMLmods/Coloring-Book-Flux-LoRA" trigger_word = "Coloring Book" pipe.load_lora_weights(lora_repo) device = torch.device("cuda") pipe.to(device) image = pipe(f"{trigger_word}, a truck parked in front of a house").images[0] image.save("coloring_book.png") ``` ## Data source - https://playground.com/ ## Trigger words You should use `Coloring Book` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/prithivMLmods/Coloring-Book-Flux-LoRA/tree/main) them in the Files & versions tab.
kk-aivio/5d2e7f18-004f-4e8b-9e9c-4701ff23dd14
kk-aivio
2025-01-29T08:36:46Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:trl-internal-testing/tiny-random-LlamaForCausalLM", "base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM", "region:us" ]
null
2025-01-29T08:36:14Z
--- library_name: peft base_model: trl-internal-testing/tiny-random-LlamaForCausalLM tags: - axolotl - generated_from_trainer model-index: - name: 5d2e7f18-004f-4e8b-9e9c-4701ff23dd14 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: trl-internal-testing/tiny-random-LlamaForCausalLM bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - a778387021162f56_train_data.json ds_type: json format: custom path: /workspace/input_data/a778387021162f56_train_data.json type: field_instruction: prompt field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: kk-aivio/5d2e7f18-004f-4e8b-9e9c-4701ff23dd14 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/a778387021162f56_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d8ce5f4e-1d08-4104-98b0-755a57abc1d7 wandb_project: Birthday-SN56-17-Gradients-On-Demand wandb_run: your_name wandb_runid: d8ce5f4e-1d08-4104-98b0-755a57abc1d7 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 5d2e7f18-004f-4e8b-9e9c-4701ff23dd14 This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 10.3783

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0004 | 1    | 10.3805         |
| 10.3805       | 0.0055 | 13   | 10.3797         |
| 10.3789       | 0.0111 | 26   | 10.3787         |
| 10.3782       | 0.0166 | 39   | 10.3783         |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
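Usage sketch (an editor's addition, not generated by the trainer): loading this LoRA adapter onto its base model with PEFT. The repo ids come from the card above; the prompt and generation settings are illustrative assumptions.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model and adapter ids are taken from the card above.
base_id = "trl-internal-testing/tiny-random-LlamaForCausalLM"
adapter_id = "kk-aivio/5d2e7f18-004f-4e8b-9e9c-4701ff23dd14"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)  # assumed generation settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```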
thirdeyeai/elevate360m
thirdeyeai
2025-01-29T08:36:45Z
42
0
null
[ "safetensors", "llama", "region:us" ]
null
2025-01-28T23:34:38Z
---
metrics:
- accuracy
- code_eval
---

# Model Card for Evaluate360M

## Model Details

### Model Description

Evaluate360M is a lightweight large language model optimized for reasoning tasks. It is designed to run efficiently on low-end commercial hardware, such as mobile phones, while maintaining strong performance in logical reasoning and general-purpose applications.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Transformer-based decoder model
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** `HuggingFaceTB/SmolLM2-360M-Instruct`

### Model Sources

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

Evaluate360M is intended for general-purpose reasoning tasks and can be used in applications that require lightweight LLMs, such as:
- Mobile-based AI assistants
- Low-power embedded systems
- Edge computing applications

### Downstream Use

It can be further fine-tuned for specific domains, including code generation, summarization, or dialogue systems.

### Out-of-Scope Use

- Not optimized for handling very large context windows
- Not designed for generating high-fidelity creative text, such as poetry or fiction

## Bias, Risks, and Limitations

### Limitations

- Struggles with handling large context windows.
- Not yet evaluated for potential biases.

### Recommendations

Users should be aware of the model's limitations in context length and should evaluate its performance for their specific use cases.

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "thirdeyeai/elevate360m"  # Hub repo id for this model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("What is the capital of France?", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```

## Training Details

### Training Data

- **Dataset:** `HuggingFaceH4/Bespoke-Stratos-17k`
- **Preprocessing:** Token packing enabled (`--packing`), sequence length up to 2048 tokens

### Training Procedure

- **Optimizer & Precision:**
  - `bf16` mixed precision
  - `gradient_accumulation_steps = 8`
  - Gradient checkpointing enabled
- **Hyperparameters:**
  - Learning rate: `2e-5`
  - Epochs: `3`
  - Batch size: `4` (per device, both training and evaluation)
- **Evaluation & Saving:**
  - Evaluation every `500` steps
  - Model checkpoint saved every `1000` steps, keeping a max of `2` checkpoints

### Compute Infrastructure

- **Hardware Used:** A100 GPU
- **Training Time:** 6 hours

## Evaluation

- **Benchmarks:** No evaluation conducted yet.
- **Metrics:** Not available yet.

## Environmental Impact

- **Hardware Type:** A100 GPU
- **Hours Used:** 6 hours
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications

### Model Architecture

- Similar to SmolLM2-360M
- Inspired by MobileLLM
- Uses **Grouped-Query Attention (GQA)**
- Prioritizes depth over width

## Citation [optional]

**BibTeX:** [More Information Needed]

**APA:** [More Information Needed]

## More Information

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
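The training procedure above is described only in prose; a rough sketch of how those settings could map onto a TRL `SFTTrainer` run is shown below. This is an assumption about tooling (the card never names TRL), several argument names vary across TRL/transformers versions, and the dataset id and eval split are taken from or inferred from the card.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

# Dataset and base model ids are taken from the card; using TRL is an assumption.
dataset = load_dataset("HuggingFaceH4/Bespoke-Stratos-17k", split="train")
splits = dataset.train_test_split(test_size=0.05, seed=42)  # eval split is assumed

base_id = "HuggingFaceTB/SmolLM2-360M-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

config = SFTConfig(
    output_dir="elevate360m-sft",
    learning_rate=2e-5,                 # from the card
    num_train_epochs=3,                 # from the card
    per_device_train_batch_size=4,      # from the card
    per_device_eval_batch_size=4,       # from the card
    gradient_accumulation_steps=8,      # from the card
    gradient_checkpointing=True,        # from the card
    bf16=True,                          # from the card
    eval_strategy="steps",              # "evaluation_strategy" in older transformers
    eval_steps=500,                     # from the card
    save_steps=1000,                    # from the card
    save_total_limit=2,                 # from the card
    packing=True,                       # token packing, per the card
    max_seq_length=2048,                # "max_length" in the newest TRL releases
)

trainer = SFTTrainer(
    model=model,
    args=config,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    tokenizer=tokenizer,  # "processing_class" in newer TRL versions
)
trainer.train()
```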
great0001/98aeeeed-f492-4f8e-8b06-ad48e703e4fc
great0001
2025-01-29T08:35:33Z
7
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-7b-it", "base_model:adapter:unsloth/gemma-7b-it", "license:apache-2.0", "region:us" ]
null
2025-01-29T07:34:26Z
--- library_name: peft license: apache-2.0 base_model: unsloth/gemma-7b-it tags: - axolotl - generated_from_trainer model-index: - name: 98aeeeed-f492-4f8e-8b06-ad48e703e4fc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-7b-it bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 5a1549a363bd92b9_train_data.json ds_type: json format: custom path: /workspace/input_data/5a1549a363bd92b9_train_data.json type: field_input: system_prompt field_instruction: question field_output: response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: great0001/98aeeeed-f492-4f8e-8b06-ad48e703e4fc hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/5a1549a363bd92b9_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 9d3bed81-78f2-4061-9ad2-a87e632c5343 wandb_project: Mine-SN56-20-Gradients-On-Demand wandb_run: your_name wandb_runid: 9d3bed81-78f2-4061-9ad2-a87e632c5343 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 98aeeeed-f492-4f8e-8b06-ad48e703e4fc This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 1.0491

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0000 | 1    | 3.3065          |
| 1.7599        | 0.0001 | 13   | 1.2474          |
| 1.3996        | 0.0002 | 26   | 1.1088          |
| 1.1721        | 0.0003 | 39   | 1.0491          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
kartikgupta373/e10-ad15580-705536-aqua-green
kartikgupta373
2025-01-29T08:35:23Z
6
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-29T08:35:21Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # E10 Ad15580 705536 Aqua Green <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('kartikgupta373/e10-ad15580-705536-aqua-green', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
kartikgupta373/e8-ad15653-705553-white
kartikgupta373
2025-01-29T08:35:13Z
6
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-29T08:35:12Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # E8 Ad15653 705553 White <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('kartikgupta373/e8-ad15653-705553-white', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
John6666/meichi-il-ight-mix-v1-meichiilustmixv1-sdxl
John6666
2025-01-29T08:34:49Z
41
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "woman", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-01-29T08:27:14Z
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- woman
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---

The original model is [here](https://civitai.com/models/1193368/meichi-il-ightmixv1?modelVersionId=1343666).
This model was created by [JuzuArupukato](https://civitai.com/user/JuzuArupukato).
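Usage example (an editor's sketch, not part of the original card): the repo tags indicate diffusers weights for `StableDiffusionXLPipeline`; the prompt and sampler settings below are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Repo id from this card; the tags indicate StableDiffusionXLPipeline weights.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/meichi-il-ight-mix-v1-meichiilustmixv1-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

# Illustrative anime-style prompt; the settings are assumptions, not from the card.
image = pipe(
    "1girl, masterpiece, best quality, detailed background",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sample.png")
```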
lesso08/2d6c8b30-2a28-46e6-a8af-dba39f33ad6d
lesso08
2025-01-29T08:34:06Z
6
0
peft
[ "peft", "safetensors", "gpt_neo", "axolotl", "generated_from_trainer", "base_model:EleutherAI/gpt-neo-1.3B", "base_model:adapter:EleutherAI/gpt-neo-1.3B", "license:mit", "region:us" ]
null
2025-01-29T08:32:09Z
--- library_name: peft license: mit base_model: EleutherAI/gpt-neo-1.3B tags: - axolotl - generated_from_trainer model-index: - name: 2d6c8b30-2a28-46e6-a8af-dba39f33ad6d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/gpt-neo-1.3B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ae14bfeb00663848_train_data.json ds_type: json format: custom path: /workspace/input_data/ae14bfeb00663848_train_data.json type: field_input: product_title field_instruction: text field_output: preds format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso08/2d6c8b30-2a28-46e6-a8af-dba39f33ad6d hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/ae14bfeb00663848_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 768b7d8f-e163-4eb7-94e2-6e5f62199e26 wandb_project: multi wandb_run: your_name wandb_runid: 768b7d8f-e163-4eb7-94e2-6e5f62199e26 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 2d6c8b30-2a28-46e6-a8af-dba39f33ad6d This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 1.4196

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 52

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.5501        | 0.9903 | 51   | 1.4190          |
| 5.5417        | 1.0097 | 52   | 1.4196          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
djphoenix/Qwen2.5-Coder-3B-Instruct-Q6-mlx
djphoenix
2025-01-29T08:33:37Z
15
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "code", "codeqwen", "chat", "qwen", "qwen-coder", "mlx", "mlx-my-repo", "conversational", "en", "base_model:Qwen/Qwen2.5-Coder-3B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-3B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "region:us" ]
text-generation
2025-01-29T08:33:11Z
--- license: other license_name: qwen-research license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE language: - en base_model: Qwen/Qwen2.5-Coder-3B-Instruct pipeline_tag: text-generation library_name: transformers tags: - code - codeqwen - chat - qwen - qwen-coder - mlx - mlx-my-repo --- # djphoenix/Qwen2.5-Coder-3B-Instruct-Q6-mlx The Model [djphoenix/Qwen2.5-Coder-3B-Instruct-Q6-mlx](https://huggingface.co/djphoenix/Qwen2.5-Coder-3B-Instruct-Q6-mlx) was converted to MLX format from [Qwen/Qwen2.5-Coder-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) using mlx-lm version **0.20.5**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("djphoenix/Qwen2.5-Coder-3B-Instruct-Q6-mlx") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
kartikgupta373/e11-ad15514-705404-pink
kartikgupta373
2025-01-29T08:33:32Z
8
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-29T08:33:29Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # E11 Ad15514 705404 Pink <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('kartikgupta373/e11-ad15514-705404-pink', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
lesso11/015006f4-5f6e-42f5-be03-eb13d6cf97b5
lesso11
2025-01-29T08:33:18Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-29T08:31:47Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 015006f4-5f6e-42f5-be03-eb13d6cf97b5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-0.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ce7fcd2d05dffaef_train_data.json ds_type: json format: custom path: /workspace/input_data/ce7fcd2d05dffaef_train_data.json type: field_input: original_dataset field_instruction: original_question field_output: object_level_prompt format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso11/015006f4-5f6e-42f5-be03-eb13d6cf97b5 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/ce7fcd2d05dffaef_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 54f52b18-f019-41eb-b70b-23aa1dcdada5 wandb_project: multi wandb_run: your_name wandb_runid: 54f52b18-f019-41eb-b70b-23aa1dcdada5 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 015006f4-5f6e-42f5-be03-eb13d6cf97b5 This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.2641

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 39

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2458        | 0.9806 | 38   | 0.2641          |
| 0.3732        | 1.0129 | 39   | 0.2641          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
Triangle104/MN-12B-Mimicore-WhiteSnake-Q8_0-GGUF
Triangle104
2025-01-29T08:32:55Z
22
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:DoppelReflEx/MN-12B-Mimicore-WhiteSnake", "base_model:quantized:DoppelReflEx/MN-12B-Mimicore-WhiteSnake", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2025-01-29T08:24:10Z
---
license: cc-by-nc-4.0
base_model: DoppelReflEx/MN-12B-Mimicore-WhiteSnake
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# Triangle104/MN-12B-Mimicore-WhiteSnake-Q8_0-GGUF
This model was converted to GGUF format from [`DoppelReflEx/MN-12B-Mimicore-WhiteSnake`](https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-WhiteSnake) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-WhiteSnake) for more details on the model.

---
Model details:

A better version of GreenSnake; its OpenLLM Leaderboard scores are not much different.

Merged with cgato/Nemo-12b-Humanize-KTO-Experimental-Latest so that this model can produce more human-sounding responses.

This merge is a gift for the Lunar New Year, haha. Enjoy it.

Good for RP, ERP, and storytelling.

PS: It does not have the cgato/Nemo-12b-Humanize-KTO-Experimental-Latest tokenization issue.

Update: it still has the cgato/Nemo-12b-Humanize-KTO-Experimental-Latest tokenization issue, but it occurs randomly and rarely. If you run into it, just press regenerate to reroll the message/response.

Chat format? ChatML, of course!

Models Merged

The following models were included in the merge:

- cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
- DoppelReflEx/MN-12B-Mimicore-GreenSnake

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
    parameters:
      density: 0.9
      weight: 1
  - model: DoppelReflEx/MN-12B-Mimicore-GreenSnake
    parameters:
      density: 0.6
      weight: 0.8
merge_method: dare_ties
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
tokenizer_source: base
```

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q8_0-GGUF --hf-file mn-12b-mimicore-whitesnake-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q8_0-GGUF --hf-file mn-12b-mimicore-whitesnake-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q8_0-GGUF --hf-file mn-12b-mimicore-whitesnake-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q8_0-GGUF --hf-file mn-12b-mimicore-whitesnake-q8_0.gguf -c 2048
```
lesso02/3badf6d3-b818-41fe-a1c0-0f17c46c6a8d
lesso02
2025-01-29T08:31:52Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-29T08:30:42Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 3badf6d3-b818-41fe-a1c0-0f17c46c6a8d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-0.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ce7fcd2d05dffaef_train_data.json ds_type: json format: custom path: /workspace/input_data/ce7fcd2d05dffaef_train_data.json type: field_input: original_dataset field_instruction: original_question field_output: object_level_prompt format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso02/3badf6d3-b818-41fe-a1c0-0f17c46c6a8d hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/ce7fcd2d05dffaef_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 54f52b18-f019-41eb-b70b-23aa1dcdada5 wandb_project: multi wandb_run: your_name wandb_runid: 54f52b18-f019-41eb-b70b-23aa1dcdada5 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 3badf6d3-b818-41fe-a1c0-0f17c46c6a8d This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.2577

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 39

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2398        | 0.9806 | 38   | 0.2581          |
| 0.3631        | 1.0129 | 39   | 0.2577          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
Triangle104/MN-12B-Mimicore-WhiteSnake-Q6_K-GGUF
Triangle104
2025-01-29T08:31:25Z
25
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:DoppelReflEx/MN-12B-Mimicore-WhiteSnake", "base_model:quantized:DoppelReflEx/MN-12B-Mimicore-WhiteSnake", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-29T08:20:11Z
---
license: cc-by-nc-4.0
base_model: DoppelReflEx/MN-12B-Mimicore-WhiteSnake
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# Triangle104/MN-12B-Mimicore-WhiteSnake-Q6_K-GGUF
This model was converted to GGUF format from [`DoppelReflEx/MN-12B-Mimicore-WhiteSnake`](https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-WhiteSnake) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-WhiteSnake) for more details on the model.

---
Model details:

A better version of GreenSnake; its OpenLLM Leaderboard scores are not much different.

Merged with cgato/Nemo-12b-Humanize-KTO-Experimental-Latest so that this model can produce more human-sounding responses.

This merge is a gift for the Lunar New Year, haha. Enjoy it.

Good for RP, ERP, and storytelling.

PS: It does not have the cgato/Nemo-12b-Humanize-KTO-Experimental-Latest tokenization issue.

Update: it still has the cgato/Nemo-12b-Humanize-KTO-Experimental-Latest tokenization issue, but it occurs randomly and rarely. If you run into it, just press regenerate to reroll the message/response.

Chat format? ChatML, of course!

Models Merged

The following models were included in the merge:

- cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
- DoppelReflEx/MN-12B-Mimicore-GreenSnake

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
    parameters:
      density: 0.9
      weight: 1
  - model: DoppelReflEx/MN-12B-Mimicore-GreenSnake
    parameters:
      density: 0.6
      weight: 0.8
merge_method: dare_ties
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
tokenizer_source: base
```

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q6_K-GGUF --hf-file mn-12b-mimicore-whitesnake-q6_k.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q6_K-GGUF --hf-file mn-12b-mimicore-whitesnake-q6_k.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q6_K-GGUF --hf-file mn-12b-mimicore-whitesnake-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q6_K-GGUF --hf-file mn-12b-mimicore-whitesnake-q6_k.gguf -c 2048
```
nhunglaaaaaaa/9853282e-7de5-49e3-a0e7-380ec4f19e19
nhunglaaaaaaa
2025-01-29T08:31:22Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-13b-v1.5", "base_model:adapter:lmsys/vicuna-13b-v1.5", "license:llama2", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T08:05:36Z
--- library_name: peft license: llama2 base_model: lmsys/vicuna-13b-v1.5 tags: - axolotl - generated_from_trainer model-index: - name: 9853282e-7de5-49e3-a0e7-380ec4f19e19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: lmsys/vicuna-13b-v1.5 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 050404ebdd7019b8_train_data.json ds_type: json format: custom path: /workspace/input_data/050404ebdd7019b8_train_data.json type: field_instruction: problem field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhunglaaaaaaa/9853282e-7de5-49e3-a0e7-380ec4f19e19 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/050404ebdd7019b8_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f98aeb00-1c68-46fd-a249-d65fd262ecb9 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: f98aeb00-1c68-46fd-a249-d65fd262ecb9 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 9853282e-7de5-49e3-a0e7-380ec4f19e19 This model is a fine-tuned version of [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.8065

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5398        | 0.1369 | 200  | 0.8065          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
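Usage sketch (an editor's addition, not generated by the trainer): loading this adapter on an 8-bit base, matching the `load_in_8bit: true` setting in the training config above. Repo ids come from the card; the prompt and generation settings are illustrative assumptions.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "lmsys/vicuna-13b-v1.5"
adapter_id = "nhunglaaaaaaa/9853282e-7de5-49e3-a0e7-380ec4f19e19"

# 8-bit loading mirrors the training config's load_in_8bit: true.
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Solve: 2x + 3 = 11", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)  # assumed settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```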
peterreyes22/pedro
peterreyes22
2025-01-29T08:31:13Z
40
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-29T08:00:33Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: pedro --- # Pedro <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `pedro` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('peterreyes22/pedro', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
Triangle104/MN-12B-Mimicore-WhiteSnake-Q5_K_M-GGUF
Triangle104
2025-01-29T08:30:08Z
371
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:DoppelReflEx/MN-12B-Mimicore-WhiteSnake", "base_model:quantized:DoppelReflEx/MN-12B-Mimicore-WhiteSnake", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-29T08:08:04Z
---
license: cc-by-nc-4.0
base_model: DoppelReflEx/MN-12B-Mimicore-WhiteSnake
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# Triangle104/MN-12B-Mimicore-WhiteSnake-Q5_K_M-GGUF
This model was converted to GGUF format from [`DoppelReflEx/MN-12B-Mimicore-WhiteSnake`](https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-WhiteSnake) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-WhiteSnake) for more details on the model.

---
Model details:

A better version of GreenSnake; its OpenLLM Leaderboard scores are not much different.

Merged with cgato/Nemo-12b-Humanize-KTO-Experimental-Latest so that this model can produce more human-sounding responses.

This merge is a gift for the Lunar New Year, haha. Enjoy it.

Good for RP, ERP, and storytelling.

PS: It does not have the cgato/Nemo-12b-Humanize-KTO-Experimental-Latest tokenization issue.

Update: it still has the cgato/Nemo-12b-Humanize-KTO-Experimental-Latest tokenization issue, but it occurs randomly and rarely. If you run into it, just press regenerate to reroll the message/response.

Chat format? ChatML, of course!

Models Merged

The following models were included in the merge:

- cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
- DoppelReflEx/MN-12B-Mimicore-GreenSnake

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
    parameters:
      density: 0.9
      weight: 1
  - model: DoppelReflEx/MN-12B-Mimicore-GreenSnake
    parameters:
      density: 0.6
      weight: 0.8
merge_method: dare_ties
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
tokenizer_source: base
```

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q5_K_M-GGUF --hf-file mn-12b-mimicore-whitesnake-q5_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q5_K_M-GGUF --hf-file mn-12b-mimicore-whitesnake-q5_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q5_K_M-GGUF --hf-file mn-12b-mimicore-whitesnake-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q5_K_M-GGUF --hf-file mn-12b-mimicore-whitesnake-q5_k_m.gguf -c 2048
```
Best000/678fc2c7-6740-4135-ac45-3ce1bc5fcd25
Best000
2025-01-29T08:30:06Z
7
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-29T08:29:14Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 678fc2c7-6740-4135-ac45-3ce1bc5fcd25 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-0.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ce7fcd2d05dffaef_train_data.json ds_type: json format: custom path: /workspace/input_data/ce7fcd2d05dffaef_train_data.json type: field_input: original_dataset field_instruction: original_question field_output: object_level_prompt format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: Best000/678fc2c7-6740-4135-ac45-3ce1bc5fcd25 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ce7fcd2d05dffaef_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 54f52b18-f019-41eb-b70b-23aa1dcdada5 wandb_project: Birthday-SN56-32-Gradients-On-Demand wandb_run: your_name wandb_runid: 54f52b18-f019-41eb-b70b-23aa1dcdada5 warmup_steps: 50 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 678fc2c7-6740-4135-ac45-3ce1bc5fcd25 This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.0369

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0032 | 1    | 1.3812          |
| 1.0269        | 0.0420 | 13   | 0.8900          |
| 0.6884        | 0.0841 | 26   | 0.2029          |
| 0.2652        | 0.1261 | 39   | 0.0369          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
AlfRjw/Confucius-o1-14B-Q3-mlx
AlfRjw
2025-01-29T08:29:46Z
7
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "mlx", "mlx-my-repo", "conversational", "en", "base_model:netease-youdao/Confucius-o1-14B", "base_model:quantized:netease-youdao/Confucius-o1-14B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "3-bit", "region:us" ]
text-generation
2025-01-29T08:12:13Z
--- license: apache-2.0 language: - en base_model: netease-youdao/Confucius-o1-14B tags: - chat - mlx - mlx-my-repo library_name: transformers --- # AlfRjw/Confucius-o1-14B-Q3-mlx **UNTESTED** The Model [AlfRjw/Confucius-o1-14B-Q3-mlx](https://huggingface.co/AlfRjw/Confucius-o1-14B-Q3-mlx) was converted to MLX format from [netease-youdao/Confucius-o1-14B](https://huggingface.co/netease-youdao/Confucius-o1-14B) using mlx-lm version **0.20.5**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("AlfRjw/Confucius-o1-14B-Q3-mlx") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
Romain-XV/270059d3-dc6f-4807-b707-a78859639687
Romain-XV
2025-01-29T08:29:19Z
7
0
peft
[ "peft", "safetensors", "gpt_neo", "axolotl", "generated_from_trainer", "base_model:EleutherAI/gpt-neo-1.3B", "base_model:adapter:EleutherAI/gpt-neo-1.3B", "license:mit", "region:us" ]
null
2025-01-29T08:21:34Z
--- library_name: peft license: mit base_model: EleutherAI/gpt-neo-1.3B tags: - axolotl - generated_from_trainer model-index: - name: 270059d3-dc6f-4807-b707-a78859639687 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/gpt-neo-1.3B bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ae14bfeb00663848_train_data.json ds_type: json format: custom path: /workspace/input_data/ae14bfeb00663848_train_data.json type: field_input: product_title field_instruction: text field_output: preds format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true group_by_length: false hub_model_id: Romain-XV/270059d3-dc6f-4807-b707-a78859639687 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_best_model_at_end: true load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: true lora_model_dir: null lora_r: 16 lora_target_linear: true lora_target_modules: - q_proj - k_proj lr_scheduler: cosine max_steps: 700 micro_batch_size: 4 mlflow_experiment_name: /tmp/ae14bfeb00663848_train_data.json model_type: AutoModelForCausalLM optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 768b7d8f-e163-4eb7-94e2-6e5f62199e26 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 768b7d8f-e163-4eb7-94e2-6e5f62199e26 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 270059d3-dc6f-4807-b707-a78859639687 This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.8566

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 52

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 31.4181       | 0.0195 | 1    | 1.9560          |
| 13.4716       | 0.9732 | 50   | 0.8566          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
roleplaiapp/oh-dcft-v3.1-claude-3-5-haiku-20241022-Q3_K_L-GGUF
roleplaiapp
2025-01-29T08:28:49Z
442
0
transformers
[ "transformers", "gguf", "3-bit", "Q3_K_L", "claude", "dcft", "haiku", "llama-cpp", "text-generation", "v31", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-29T08:28:29Z
--- library_name: transformers pipeline_tag: text-generation tags: - 3-bit - Q3_K_L - claude - dcft - gguf - haiku - llama-cpp - text-generation - v31 --- # roleplaiapp/oh-dcft-v3.1-claude-3-5-haiku-20241022-Q3_K_L-GGUF **Repo:** `roleplaiapp/oh-dcft-v3.1-claude-3-5-haiku-20241022-Q3_K_L-GGUF` **Original Model:** `oh-dcft-v3.1-claude-3-5-haiku-20241022` **Quantized File:** `oh-dcft-v3.1-claude-3-5-haiku-20241022.Q3_K_L.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q3_K_L` ## Overview This is a GGUF Q3_K_L quantized version of oh-dcft-v3.1-claude-3-5-haiku-20241022 ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
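Usage sketch (an editor's addition, not part of the original card): this quant can be run with llama.cpp the same way the other GGUF repos in this collection document. The repo and file names are taken from the card above; the prompt is illustrative.

```bash
# Sketch: run the Q3_K_L quant directly from the Hub with llama.cpp.
llama-cli --hf-repo roleplaiapp/oh-dcft-v3.1-claude-3-5-haiku-20241022-Q3_K_L-GGUF \
  --hf-file oh-dcft-v3.1-claude-3-5-haiku-20241022.Q3_K_L.gguf \
  -p "Write a haiku about distributed training."

# Or serve it over HTTP:
llama-server --hf-repo roleplaiapp/oh-dcft-v3.1-claude-3-5-haiku-20241022-Q3_K_L-GGUF \
  --hf-file oh-dcft-v3.1-claude-3-5-haiku-20241022.Q3_K_L.gguf -c 2048
```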
lesso02/5d5ab3bc-28f0-472c-8c2e-14fceb55844e
lesso02
2025-01-29T08:28:26Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-13b-v1.5", "base_model:adapter:lmsys/vicuna-13b-v1.5", "license:llama2", "region:us" ]
null
2025-01-29T08:06:23Z
--- library_name: peft license: llama2 base_model: lmsys/vicuna-13b-v1.5 tags: - axolotl - generated_from_trainer model-index: - name: 5d5ab3bc-28f0-472c-8c2e-14fceb55844e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: lmsys/vicuna-13b-v1.5 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 050404ebdd7019b8_train_data.json ds_type: json format: custom path: /workspace/input_data/050404ebdd7019b8_train_data.json type: field_instruction: problem field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso02/5d5ab3bc-28f0-472c-8c2e-14fceb55844e hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/050404ebdd7019b8_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f98aeb00-1c68-46fd-a249-d65fd262ecb9 wandb_project: multi wandb_run: your_name wandb_runid: f98aeb00-1c68-46fd-a249-d65fd262ecb9 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 5d5ab3bc-28f0-472c-8c2e-14fceb55844e This model is a fine-tuned version of [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7757 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 183 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.704 | 0.9959 | 182 | 0.7756 | | 1.2746 | 1.0041 | 183 | 0.7757 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Theros/Qwen2.5-ColdBrew-R1-test5-Q4_K_M-GGUF
Theros
2025-01-29T08:28:25Z
20
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:Theros/Qwen2.5-ColdBrew-R1-test5", "base_model:quantized:Theros/Qwen2.5-ColdBrew-R1-test5", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-29T08:28:03Z
--- base_model: Theros/Qwen2.5-ColdBrew-R1-test5 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # Theros/Qwen2.5-ColdBrew-R1-test5-Q4_K_M-GGUF This model was converted to GGUF format from [`Theros/Qwen2.5-ColdBrew-R1-test5`](https://huggingface.co/Theros/Qwen2.5-ColdBrew-R1-test5) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Theros/Qwen2.5-ColdBrew-R1-test5) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Theros/Qwen2.5-ColdBrew-R1-test5-Q4_K_M-GGUF --hf-file qwen2.5-coldbrew-r1-test5-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Theros/Qwen2.5-ColdBrew-R1-test5-Q4_K_M-GGUF --hf-file qwen2.5-coldbrew-r1-test5-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Theros/Qwen2.5-ColdBrew-R1-test5-Q4_K_M-GGUF --hf-file qwen2.5-coldbrew-r1-test5-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Theros/Qwen2.5-ColdBrew-R1-test5-Q4_K_M-GGUF --hf-file qwen2.5-coldbrew-r1-test5-q4_k_m.gguf -c 2048 ```
facu1321/jeclem
facu1321
2025-01-29T08:27:21Z
57
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-29T08:11:07Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: jeclem --- # Jeclem <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `jeclem` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('facu1321/jeclem', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
gavrilstep/f9b9d6ac-cb24-488b-85b3-046a596974e3
gavrilstep
2025-01-29T08:27:18Z
6
0
peft
[ "peft", "safetensors", "mixtral", "axolotl", "generated_from_trainer", "base_model:Eurdem/Defne_llama3_2x8B", "base_model:adapter:Eurdem/Defne_llama3_2x8B", "license:llama3", "4-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T08:19:45Z
--- library_name: peft license: llama3 base_model: Eurdem/Defne_llama3_2x8B tags: - axolotl - generated_from_trainer model-index: - name: f9b9d6ac-cb24-488b-85b3-046a596974e3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Eurdem/Defne_llama3_2x8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 28991e780ad8e25e_train_data.json ds_type: json format: custom path: /workspace/input_data/28991e780ad8e25e_train_data.json type: field_input: question field_instruction: prompt field_output: rejected format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device: cuda early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null evals_per_epoch: null flash_attention: false fp16: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: gavrilstep/f9b9d6ac-cb24-488b-85b3-046a596974e3 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_memory: 0: 75GiB max_steps: 30 micro_batch_size: 2 mlflow_experiment_name: /tmp/28991e780ad8e25e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: true trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 85a59a0d-822d-4003-8bf0-c43fdd5abff5 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 85a59a0d-822d-4003-8bf0-c43fdd5abff5 warmup_steps: 10 weight_decay: 0.01 xformers_attention: true ``` </details><br> # f9b9d6ac-cb24-488b-85b3-046a596974e3 This model is a fine-tuned version of [Eurdem/Defne_llama3_2x8B](https://huggingface.co/Eurdem/Defne_llama3_2x8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0013 | 1 | nan | | 0.0 | 0.0063 | 5 | nan | | 0.0 | 0.0126 | 10 | nan | | 0.0 | 0.0190 | 15 | nan | | 0.0 | 0.0253 | 20 | nan | | 0.0 | 0.0316 | 25 | nan | | 0.0 | 0.0379 | 30 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
John6666/jaim-just-another-illustrious-merge-v3-sdxl
John6666
2025-01-29T08:27:12Z
49
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "realistic", "2.5D", "illustrious", "en", "base_model:Laxhar/noobai-XL-1.1", "base_model:finetune:Laxhar/noobai-XL-1.1", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-01-29T08:19:50Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - realistic - 2.5D - illustrious base_model: Laxhar/noobai-XL-1.1 --- Original model is [here](https://civitai.com/models/1165105/jaim-just-another-illustrious-merge?modelVersionId=1344566). This model was created by [infamous__fish](https://civitai.com/user/infamous__fish).
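This card links to the original upload but gives no loading snippet. Since the repo tags advertise `diffusers:StableDiffusionXLPipeline`, a minimal sketch follows; the prompt is a placeholder, and fp16 on CUDA is an assumption about the available hardware.

```python
# Minimal sketch: load the SDXL merge with diffusers and render one image.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/jaim-just-another-illustrious-merge-v3-sdxl",
    torch_dtype=torch.float16,
).to("cuda")
image = pipe("1girl, anime style, detailed background").images[0]
image.save("sample.png")
```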
minhnguyennnnnn/89adb879-676c-4dd2-b417-dcd2a5888f00
minhnguyennnnnn
2025-01-29T08:26:44Z
7
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T07:43:13Z
--- library_name: peft license: apache-2.0 base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO tags: - axolotl - generated_from_trainer model-index: - name: 89adb879-676c-4dd2-b417-dcd2a5888f00 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - f04259c91cb5f8b9_train_data.json ds_type: json format: custom path: /workspace/input_data/f04259c91cb5f8b9_train_data.json type: field_instruction: input field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: minhnguyennnnnn/89adb879-676c-4dd2-b417-dcd2a5888f00 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/f04259c91cb5f8b9_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: aac7786a-015b-44a1-9c8e-ad88dd9f945c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: aac7786a-015b-44a1-9c8e-ad88dd9f945c warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 89adb879-676c-4dd2-b417-dcd2a5888f00 This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3434 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.4697 | 0.1206 | 200 | 0.3434 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
nathanialhunt/2ef0e3dc-b733-4ff5-b582-2a11508b4240
nathanialhunt
2025-01-29T08:26:31Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:trl-internal-testing/tiny-random-LlamaForCausalLM", "base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM", "region:us" ]
null
2025-01-29T08:26:03Z
--- library_name: peft base_model: trl-internal-testing/tiny-random-LlamaForCausalLM tags: - axolotl - generated_from_trainer model-index: - name: 2ef0e3dc-b733-4ff5-b582-2a11508b4240 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: trl-internal-testing/tiny-random-LlamaForCausalLM bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - a778387021162f56_train_data.json ds_type: json format: custom path: /workspace/input_data/a778387021162f56_train_data.json type: field_instruction: prompt field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: nathanialhunt/2ef0e3dc-b733-4ff5-b582-2a11508b4240 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/a778387021162f56_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d8ce5f4e-1d08-4104-98b0-755a57abc1d7 wandb_project: Birthday-SN56-24-Gradients-On-Demand wandb_run: your_name wandb_runid: d8ce5f4e-1d08-4104-98b0-755a57abc1d7 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 2ef0e3dc-b733-4ff5-b582-2a11508b4240 This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3781 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0004 | 1 | 10.3805 | | 10.3805 | 0.0055 | 13 | 10.3796 | | 10.3789 | 0.0111 | 26 | 10.3786 | | 10.3781 | 0.0166 | 39 | 10.3781 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
shibajustfor/01f140c9-4075-4473-b3ba-c50188677cdd
shibajustfor
2025-01-29T08:25:55Z
11
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:trl-internal-testing/tiny-random-LlamaForCausalLM", "base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM", "region:us" ]
null
2025-01-29T08:25:22Z
--- library_name: peft base_model: trl-internal-testing/tiny-random-LlamaForCausalLM tags: - axolotl - generated_from_trainer model-index: - name: 01f140c9-4075-4473-b3ba-c50188677cdd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: trl-internal-testing/tiny-random-LlamaForCausalLM bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - a778387021162f56_train_data.json ds_type: json format: custom path: /workspace/input_data/a778387021162f56_train_data.json type: field_instruction: prompt field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: shibajustfor/01f140c9-4075-4473-b3ba-c50188677cdd hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/a778387021162f56_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d8ce5f4e-1d08-4104-98b0-755a57abc1d7 wandb_project: Birthday-SN56-11-Gradients-On-Demand wandb_run: your_name wandb_runid: d8ce5f4e-1d08-4104-98b0-755a57abc1d7 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 01f140c9-4075-4473-b3ba-c50188677cdd This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0004 | 1 | 10.3805 | | 10.3805 | 0.0055 | 13 | 10.3795 | | 10.3788 | 0.0111 | 26 | 10.3785 | | 10.378 | 0.0166 | 39 | 10.3779 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mrferr3t/969ccec4-3edd-40b1-a042-49cb9497c635
mrferr3t
2025-01-29T08:25:30Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:trl-internal-testing/tiny-random-LlamaForCausalLM", "base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM", "region:us" ]
null
2025-01-29T08:25:00Z
--- library_name: peft base_model: trl-internal-testing/tiny-random-LlamaForCausalLM tags: - axolotl - generated_from_trainer model-index: - name: 969ccec4-3edd-40b1-a042-49cb9497c635 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: trl-internal-testing/tiny-random-LlamaForCausalLM bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - a778387021162f56_train_data.json ds_type: json format: custom path: /workspace/input_data/a778387021162f56_train_data.json type: field_instruction: prompt field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/969ccec4-3edd-40b1-a042-49cb9497c635 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 16 micro_batch_size: 2 mlflow_experiment_name: /tmp/a778387021162f56_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d8ce5f4e-1d08-4104-98b0-755a57abc1d7 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: d8ce5f4e-1d08-4104-98b0-755a57abc1d7 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 969ccec4-3edd-40b1-a042-49cb9497c635 This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3798 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.3781 | 0.0004 | 1 | 10.3805 | | 10.3826 | 0.0017 | 4 | 10.3804 | | 10.3824 | 0.0034 | 8 | 10.3802 | | 10.38 | 0.0051 | 12 | 10.3799 | | 10.3721 | 0.0068 | 16 | 10.3798 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
robiual-awal/bda72465-f137-4c28-a050-eddde9c35f31
robiual-awal
2025-01-29T08:25:07Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:trl-internal-testing/tiny-random-LlamaForCausalLM", "base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM", "region:us" ]
null
2025-01-29T08:24:39Z
--- library_name: peft base_model: trl-internal-testing/tiny-random-LlamaForCausalLM tags: - axolotl - generated_from_trainer model-index: - name: bda72465-f137-4c28-a050-eddde9c35f31 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: trl-internal-testing/tiny-random-LlamaForCausalLM bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - a778387021162f56_train_data.json ds_type: json format: custom path: /workspace/input_data/a778387021162f56_train_data.json type: field_instruction: prompt field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: robiual-awal/bda72465-f137-4c28-a050-eddde9c35f31 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/a778387021162f56_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d8ce5f4e-1d08-4104-98b0-755a57abc1d7 wandb_project: Birthday-SN56-29-Gradients-On-Demand wandb_run: your_name wandb_runid: d8ce5f4e-1d08-4104-98b0-755a57abc1d7 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # bda72465-f137-4c28-a050-eddde9c35f31 This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3781 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0004 | 1 | 10.3805 | | 10.3805 | 0.0055 | 13 | 10.3796 | | 10.3789 | 0.0111 | 26 | 10.3787 | | 10.3781 | 0.0166 | 39 | 10.3781 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
tarabukinivan/8cf9565a-2d11-46cc-b260-04f0c4f5a64d
tarabukinivan
2025-01-29T08:25:01Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:trl-internal-testing/tiny-random-LlamaForCausalLM", "base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM", "4-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T08:24:25Z
--- library_name: peft base_model: trl-internal-testing/tiny-random-LlamaForCausalLM tags: - axolotl - generated_from_trainer model-index: - name: 8cf9565a-2d11-46cc-b260-04f0c4f5a64d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: trl-internal-testing/tiny-random-LlamaForCausalLM bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - a778387021162f56_train_data.json ds_type: json format: custom path: /workspace/input_data/a778387021162f56_train_data.json type: field_instruction: prompt field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device: cuda early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null evals_per_epoch: null flash_attention: false fp16: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: tarabukinivan/8cf9565a-2d11-46cc-b260-04f0c4f5a64d hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 3 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_memory: 0: 75GiB max_steps: 30 micro_batch_size: 2 mlflow_experiment_name: /tmp/a778387021162f56_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 15 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: true trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d8ce5f4e-1d08-4104-98b0-755a57abc1d7 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: d8ce5f4e-1d08-4104-98b0-755a57abc1d7 warmup_steps: 15 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 8cf9565a-2d11-46cc-b260-04f0c4f5a64d This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3791 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 15 - training_steps: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0004 | 1 | 10.3817 | | 10.3807 | 0.0021 | 5 | 10.3816 | | 10.3823 | 0.0043 | 10 | 10.3812 | | 10.3794 | 0.0064 | 15 | 10.3806 | | 10.3784 | 0.0085 | 20 | 10.3798 | | 10.3776 | 0.0106 | 25 | 10.3792 | | 10.379 | 0.0128 | 30 | 10.3791 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Athspi/athspi-llm
Athspi
2025-01-29T08:24:02Z
75
1
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "storytelling", "fiction", "tiny-stories", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-21T09:45:55Z
--- license: apache-2.0 tags: - generated_from_trainer - storytelling - fiction - tiny-stories pipeline_tag: text-generation library_name: transformers --- # Athspi LLM 🧠 A small but capable language model for creative story generation, trained on the TinyStories dataset. ## Model Details ### Architecture - **Model Type**: Transformer-based language model - **Layers**: 4 - **Embedding Dim**: 384 - **Heads**: 6 - **Sequence Length**: 128 tokens - **Parameters**: ~28M ### Training Data - **Dataset**: [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) - **Training Coverage**: 5% of dataset (~100k samples) ## Usage ### Installation ```bash pip install torch transformers sentencepiece ```
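The card breaks off right after the install command, so a hedged generation sketch follows; it assumes the checkpoint at `Athspi/athspi-llm` loads through the standard `transformers` text-generation pipeline, which its `gpt2` architecture tag suggests.

```python
# Minimal sketch: generate a short story with the transformers pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="Athspi/athspi-llm")
story = generator("Once upon a time", max_new_tokens=100, do_sample=True)[0]["generated_text"]
print(story)
```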
trangtrannnnn/469ecbb5-810f-4a55-8290-cd005a7ce037
trangtrannnnn
2025-01-29T08:23:51Z
7
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Math-1.5B", "base_model:adapter:unsloth/Qwen2.5-Math-1.5B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T07:58:08Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Math-1.5B tags: - axolotl - generated_from_trainer model-index: - name: 469ecbb5-810f-4a55-8290-cd005a7ce037 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Math-1.5B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 3e822cd8df57cb11_train_data.json ds_type: json format: custom path: /workspace/input_data/3e822cd8df57cb11_train_data.json type: field_input: context field_instruction: question field_output: long_answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: trangtrannnnn/469ecbb5-810f-4a55-8290-cd005a7ce037 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/3e822cd8df57cb11_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c78acf73-ff92-4184-944e-ea8cd1f207da wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c78acf73-ff92-4184-944e-ea8cd1f207da warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 469ecbb5-810f-4a55-8290-cd005a7ce037 This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.9398 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0162 | 0.0080 | 200 | 1.9398 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
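This adapter was trained on a quantized base (`load_in_8bit: true` in the config above), so a matching inference sketch is below. `BitsAndBytesConfig` is standard `transformers` API, but loading this particular adapter repo this way is an assumption, not something the card documents.

```python
# Minimal sketch: load the Qwen2.5-Math base in 8-bit and attach the LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-Math-1.5B", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "trangtrannnnn/469ecbb5-810f-4a55-8290-cd005a7ce037"  # assumed adapter repo id
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-Math-1.5B")
```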
robiulawaldev/efdfed61-7c3a-44e1-a090-0bc50402dcfd
robiulawaldev
2025-01-29T08:23:42Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-13b-hf-flash", "base_model:adapter:NousResearch/CodeLlama-13b-hf-flash", "region:us" ]
null
2025-01-29T08:17:08Z
--- library_name: peft base_model: NousResearch/CodeLlama-13b-hf-flash tags: - axolotl - generated_from_trainer model-index: - name: efdfed61-7c3a-44e1-a090-0bc50402dcfd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-13b-hf-flash bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 3875808def965efa_train_data.json ds_type: json format: custom path: /workspace/input_data/3875808def965efa_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: robiulawaldev/efdfed61-7c3a-44e1-a090-0bc50402dcfd hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: constant max_steps: 55 micro_batch_size: 4 mlflow_experiment_name: /tmp/3875808def965efa_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 29f091f6-5131-4ec0-8ff6-d9601393bcfa wandb_project: Birthday-SN56-37-Gradients-On-Demand wandb_run: your_name wandb_runid: 29f091f6-5131-4ec0-8ff6-d9601393bcfa warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # efdfed61-7c3a-44e1-a090-0bc50402dcfd This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8420 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 5 - training_steps: 55 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 1.6760 | | 2.7273 | 0.0044 | 14 | 1.0290 | | 2.1148 | 0.0088 | 28 | 0.9007 | | 1.7686 | 0.0133 | 42 | 0.8420 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
great0001/7900eb84-fb09-4f07-a98e-f0b95036caee
great0001
2025-01-29T08:23:41Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-13b-hf-flash", "base_model:adapter:NousResearch/CodeLlama-13b-hf-flash", "region:us" ]
null
2025-01-29T08:17:19Z
--- library_name: peft base_model: NousResearch/CodeLlama-13b-hf-flash tags: - axolotl - generated_from_trainer model-index: - name: 7900eb84-fb09-4f07-a98e-f0b95036caee results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-13b-hf-flash bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 3875808def965efa_train_data.json ds_type: json format: custom path: /workspace/input_data/3875808def965efa_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: great0001/7900eb84-fb09-4f07-a98e-f0b95036caee hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/3875808def965efa_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 29f091f6-5131-4ec0-8ff6-d9601393bcfa wandb_project: Birthday-SN56-14-Gradients-On-Demand wandb_run: your_name wandb_runid: 29f091f6-5131-4ec0-8ff6-d9601393bcfa warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 7900eb84-fb09-4f07-a98e-f0b95036caee This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9916 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 6.5486 | 0.0003 | 1 | 1.7422 | | 5.3542 | 0.0041 | 13 | 1.3400 | | 3.8275 | 0.0082 | 26 | 1.0698 | | 4.5158 | 0.0123 | 39 | 0.9916 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
nhungphammmmm/899ebe09-2d35-42cc-b053-2292c6867e48
nhungphammmmm
2025-01-29T08:23:39Z
7
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Math-1.5B", "base_model:adapter:unsloth/Qwen2.5-Math-1.5B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T07:57:55Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Math-1.5B tags: - axolotl - generated_from_trainer model-index: - name: 899ebe09-2d35-42cc-b053-2292c6867e48 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Math-1.5B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 3e822cd8df57cb11_train_data.json ds_type: json format: custom path: /workspace/input_data/3e822cd8df57cb11_train_data.json type: field_input: context field_instruction: question field_output: long_answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhungphammmmm/899ebe09-2d35-42cc-b053-2292c6867e48 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/3e822cd8df57cb11_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c78acf73-ff92-4184-944e-ea8cd1f207da wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c78acf73-ff92-4184-944e-ea8cd1f207da warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 899ebe09-2d35-42cc-b053-2292c6867e48 This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.9402 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.014 | 0.0080 | 200 | 1.9402 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Best000/94440577-7831-40c4-874b-89112f466816
Best000
2025-01-29T08:23:20Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-13b-hf-flash", "base_model:adapter:NousResearch/CodeLlama-13b-hf-flash", "region:us" ]
null
2025-01-29T08:17:04Z
--- library_name: peft base_model: NousResearch/CodeLlama-13b-hf-flash tags: - axolotl - generated_from_trainer model-index: - name: 94440577-7831-40c4-874b-89112f466816 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-13b-hf-flash bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 3875808def965efa_train_data.json ds_type: json format: custom path: /workspace/input_data/3875808def965efa_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: Best000/94440577-7831-40c4-874b-89112f466816 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/3875808def965efa_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 29f091f6-5131-4ec0-8ff6-d9601393bcfa wandb_project: Birthday-SN56-32-Gradients-On-Demand wandb_run: your_name wandb_runid: 29f091f6-5131-4ec0-8ff6-d9601393bcfa warmup_steps: 50 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 94440577-7831-40c4-874b-89112f466816 This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 1.0949

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0003 | 1    | 1.7422          |
| 6.8527        | 0.0041 | 13   | 1.7196          |
| 6.8357        | 0.0082 | 26   | 1.3318          |
| 5.4547        | 0.0123 | 39   | 1.0949          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
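Cards like this one publish only a LoRA adapter, not full model weights. A minimal sketch of loading the adapter on top of its base model with PEFT, assuming a recent `peft`/`transformers` install; the dtype and `device_map` choices are mine, not from the card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/CodeLlama-13b-hf-flash"
adapter_id = "Best000/94440577-7831-40c4-874b-89112f466816"

# trust_remote_code mirrors the `trust_remote_code: true` flag in the config above.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
```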
shibajustfor/3e961005-1b03-401e-bdf9-22af3665d041
shibajustfor
2025-01-29T08:23:08Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-13b-hf-flash", "base_model:adapter:NousResearch/CodeLlama-13b-hf-flash", "region:us" ]
null
2025-01-29T08:16:43Z
--- library_name: peft base_model: NousResearch/CodeLlama-13b-hf-flash tags: - axolotl - generated_from_trainer model-index: - name: 3e961005-1b03-401e-bdf9-22af3665d041 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-13b-hf-flash bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 3875808def965efa_train_data.json ds_type: json format: custom path: /workspace/input_data/3875808def965efa_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: shibajustfor/3e961005-1b03-401e-bdf9-22af3665d041 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/3875808def965efa_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 29f091f6-5131-4ec0-8ff6-d9601393bcfa wandb_project: Birthday-SN56-39-Gradients-On-Demand wandb_run: your_name wandb_runid: 29f091f6-5131-4ec0-8ff6-d9601393bcfa warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 3e961005-1b03-401e-bdf9-22af3665d041 This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.9882

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0003 | 1    | 1.7422          |
| 6.5173        | 0.0041 | 13   | 1.2674          |
| 4.9838        | 0.0082 | 26   | 1.0513          |
| 4.1562        | 0.0123 | 39   | 0.9882          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
adammandic87/e2cb7f23-cec2-4cbb-ac88-6985dd8c7233
adammandic87
2025-01-29T08:23:04Z
8
0
peft
[ "peft", "safetensors", "gpt_neo", "axolotl", "generated_from_trainer", "base_model:EleutherAI/gpt-neo-1.3B", "base_model:adapter:EleutherAI/gpt-neo-1.3B", "license:mit", "region:us" ]
null
2025-01-29T08:21:52Z
--- library_name: peft license: mit base_model: EleutherAI/gpt-neo-1.3B tags: - axolotl - generated_from_trainer model-index: - name: e2cb7f23-cec2-4cbb-ac88-6985dd8c7233 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/gpt-neo-1.3B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ae14bfeb00663848_train_data.json ds_type: json format: custom path: /workspace/input_data/ae14bfeb00663848_train_data.json type: field_input: product_title field_instruction: text field_output: preds format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: adammandic87/e2cb7f23-cec2-4cbb-ac88-6985dd8c7233 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ae14bfeb00663848_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 768b7d8f-e163-4eb7-94e2-6e5f62199e26 wandb_project: Birthday-SN56-34-Gradients-On-Demand wandb_run: your_name wandb_runid: 768b7d8f-e163-4eb7-94e2-6e5f62199e26 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e2cb7f23-cec2-4cbb-ac88-6985dd8c7233 This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 2.7137

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0024 | 1    | 3.6290          |
| 14.4078       | 0.0316 | 13   | 3.3811          |
| 13.3049       | 0.0633 | 26   | 2.8901          |
| 12.0821       | 0.0949 | 39   | 2.7137          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
Primeness/primeh4v10c2
Primeness
2025-01-29T08:18:09Z
26
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-29T07:45:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
clarxus/a08688d7-bc3e-47fd-a64e-a4070a7fe2b2
clarxus
2025-01-29T08:17:32Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Hermes-3-Llama-3.1-8B", "base_model:adapter:NousResearch/Hermes-3-Llama-3.1-8B", "license:llama3", "region:us" ]
null
2025-01-29T04:55:50Z
--- library_name: peft license: llama3 base_model: NousResearch/Hermes-3-Llama-3.1-8B tags: - axolotl - generated_from_trainer model-index: - name: a08688d7-bc3e-47fd-a64e-a4070a7fe2b2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Hermes-3-Llama-3.1-8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 30529ea285fff6e5_train_data.json ds_type: json format: custom path: /workspace/input_data/30529ea285fff6e5_train_data.json type: field_input: article field_instruction: input field_output: clean_input format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: clarxus/a08688d7-bc3e-47fd-a64e-a4070a7fe2b2 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/30529ea285fff6e5_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: 558bab3b-4762-449f-9904-9dc48b2dd138 wandb_project: Gradients-On-Seven wandb_run: your_name wandb_runid: 558bab3b-4762-449f-9904-9dc48b2dd138 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # a08688d7-bc3e-47fd-a64e-a4070a7fe2b2 This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.3856

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0003 | 1    | 1.1763          |
| 1.2539        | 0.0028 | 9    | 1.0743          |
| 0.9671        | 0.0056 | 18   | 0.8218          |
| 0.8351        | 0.0085 | 27   | 0.6565          |
| 0.5796        | 0.0113 | 36   | 0.5506          |
| 0.5993        | 0.0141 | 45   | 0.4815          |
| 0.5831        | 0.0169 | 54   | 0.4386          |
| 0.4598        | 0.0197 | 63   | 0.4132          |
| 0.3683        | 0.0225 | 72   | 0.3974          |
| 0.2927        | 0.0254 | 81   | 0.3894          |
| 0.4786        | 0.0282 | 90   | 0.3862          |
| 0.3964        | 0.0310 | 99   | 0.3856          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
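Generation with one of these adapters then follows the ordinary `generate` API. A short sketch that continues the loading pattern shown earlier (`tokenizer` and `model` as in the PEFT loading example, but with this card's base and adapter ids); the prompt layout mirrors the config's `format: '{instruction} {input}'`, and the sampling settings are assumptions, not values from the card:

```python
# Prompt built the way the axolotl config formats instruction + input.
prompt = "Clean the following article excerpt.\nSome noisy article text..."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
completion = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion)
```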
hongngo/588c2454-ae2b-4172-bda2-8b53ec4e28a0
hongngo
2025-01-29T08:14:23Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-360M", "base_model:adapter:unsloth/SmolLM-360M", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T07:03:37Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-360M tags: - axolotl - generated_from_trainer model-index: - name: 588c2454-ae2b-4172-bda2-8b53ec4e28a0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM-360M bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ac004a2a3ec8e832_train_data.json ds_type: json format: custom path: /workspace/input_data/ac004a2a3ec8e832_train_data.json type: field_input: title field_instruction: content field_output: summary1 format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: hongngo/588c2454-ae2b-4172-bda2-8b53ec4e28a0 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/ac004a2a3ec8e832_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 77344871-dc6c-43c2-89a7-28217f41b23c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 77344871-dc6c-43c2-89a7-28217f41b23c warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 588c2454-ae2b-4172-bda2-8b53ec4e28a0 This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 1.9084

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8767        | 0.0027 | 200  | 1.9084          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
bunnycore/Phi-4-ReasoningRP
bunnycore
2025-01-29T08:13:53Z
116
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:bunnycore/Phi-4-14B-1M-RRP-v1-lora", "base_model:merge:bunnycore/Phi-4-14B-1M-RRP-v1-lora", "base_model:bunnycore/Phi-4-Model-Stock-v4", "base_model:merge:bunnycore/Phi-4-Model-Stock-v4", "license:mit", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-28T12:46:38Z
--- license: mit library_name: transformers tags: - mergekit - merge base_model: - bunnycore/Phi-4-Model-Stock-v4 - bunnycore/Phi-4-14B-1M-RRP-v1-lora model-index: - name: Phi-4-ReasoningRP results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 67.36 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-4-ReasoningRP name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 55.88 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-4-ReasoningRP name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 44.34 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-4-ReasoningRP name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 12.53 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-4-ReasoningRP name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 15.14 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-4-ReasoningRP name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 49.12 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-4-ReasoningRP name: Open LLM Leaderboard --- This model is Phi-4 with a reasoning fine-tuned LoRA applied. While it can follow a reasoning format, it's important to understand that its "thinking" isn't the same as more advanced reasoning models (like R1 or O1). Think of it as Phi-4 with a helpful reasoning boost. ## What can it do? This model is designed for roleplay and other reasoning-related tasks. It's not intended to be a replacement for specialized reasoning models; it has its own strengths and limitations. To activate the reasoning format, use the <think> tag within the system prompt. This will encourage the model to structure its response in a step-by-step or explanatory manner. ### Chat Template: ``` <|im_start|>system<|im_sep|>{system_prompt}<|im_end|> <|im_start|>user<|im_sep|>{user}<|im_end|> <|im_start|>assistant<|im_sep|> ``` ### Example System Prompt (with reasoning): You are a helpful assistant. ```<think>``` Let's break this down step by step. First, we need to consider... Then, we can look at... Finally, we arrive at the answer. ```</think>``` Strengths: - Capable of roleplay. - Can follow a reasoning format when prompted. - Based on the Phi-4 architecture. 
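Since the card gives the template as literal text, a prompt can be assembled with plain string formatting. A minimal sketch; the line breaks between turns and the example contents are assumptions on my part:

```python
# Phi-4-style chat prompt using the template quoted in the card, with a <think>
# hint embedded in the system prompt to trigger the reasoning format.
system_prompt = (
    "You are a helpful assistant. <think> Let's break this down step by step. "
    "First, we need to consider... Then, we can look at... "
    "Finally, we arrive at the answer. </think>"
)
user = "Why does the moon look larger near the horizon?"

prompt = (
    f"<|im_start|>system<|im_sep|>{system_prompt}<|im_end|>\n"
    f"<|im_start|>user<|im_sep|>{user}<|im_end|>\n"
    f"<|im_start|>assistant<|im_sep|>"
)
```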
### Benchmark:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63284f86cbc744f197050300/U9TcKlvryI9Xx_5uaUL4m.png)

## Merge Details

### Merge Method

This model was merged using the Passthrough merge method with [bunnycore/Phi-4-Model-Stock-v4](https://huggingface.co/bunnycore/Phi-4-Model-Stock-v4) + [bunnycore/Phi-4-14B-1M-RRP-v1-lora](https://huggingface.co/bunnycore/Phi-4-14B-1M-RRP-v1-lora) as a base.

### Models Merged

No additional models were included beyond the base-plus-LoRA pair listed above.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: bunnycore/Phi-4-Model-Stock-v4+bunnycore/Phi-4-14B-1M-RRP-v1-lora
dtype: bfloat16
merge_method: passthrough
models:
  - model: bunnycore/Phi-4-Model-Stock-v4+bunnycore/Phi-4-14B-1M-RRP-v1-lora
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/bunnycore__Phi-4-ReasoningRP-details)

| Metric              | Value |
|---------------------|------:|
| Avg.                | 40.73 |
| IFEval (0-Shot)     | 67.36 |
| BBH (3-Shot)        | 55.88 |
| MATH Lvl 5 (4-Shot) | 44.34 |
| GPQA (0-shot)       | 12.53 |
| MuSR (0-shot)       | 15.14 |
| MMLU-PRO (5-shot)   | 49.12 |
robiulawaldev/ec3a841e-b02d-4983-a26c-e16f6324bacf
robiulawaldev
2025-01-29T08:13:05Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-13b-v1.5", "base_model:adapter:lmsys/vicuna-13b-v1.5", "license:llama2", "region:us" ]
null
2025-01-29T08:08:02Z
--- library_name: peft license: llama2 base_model: lmsys/vicuna-13b-v1.5 tags: - axolotl - generated_from_trainer model-index: - name: ec3a841e-b02d-4983-a26c-e16f6324bacf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: lmsys/vicuna-13b-v1.5 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 050404ebdd7019b8_train_data.json ds_type: json format: custom path: /workspace/input_data/050404ebdd7019b8_train_data.json type: field_instruction: problem field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: robiulawaldev/ec3a841e-b02d-4983-a26c-e16f6324bacf hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: constant max_steps: 55 micro_batch_size: 4 mlflow_experiment_name: /tmp/050404ebdd7019b8_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f98aeb00-1c68-46fd-a249-d65fd262ecb9 wandb_project: Birthday-SN56-37-Gradients-On-Demand wandb_run: your_name wandb_runid: f98aeb00-1c68-46fd-a249-d65fd262ecb9 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # ec3a841e-b02d-4983-a26c-e16f6324bacf This model is a fine-tuned version of [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.8055

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 55

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0007 | 1    | 1.0210          |
| 0.9643        | 0.0096 | 14   | 0.8458          |
| 0.8184        | 0.0192 | 28   | 0.8218          |
| 0.8612        | 0.0287 | 42   | 0.8055          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
thaffggg/dffbb83c-6ae9-48ba-b4ff-875a2e92be59
thaffggg
2025-01-29T08:12:31Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-360M", "base_model:adapter:unsloth/SmolLM-360M", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T07:03:21Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-360M tags: - axolotl - generated_from_trainer model-index: - name: dffbb83c-6ae9-48ba-b4ff-875a2e92be59 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM-360M bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ac004a2a3ec8e832_train_data.json ds_type: json format: custom path: /workspace/input_data/ac004a2a3ec8e832_train_data.json type: field_input: title field_instruction: content field_output: summary1 format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: thaffggg/dffbb83c-6ae9-48ba-b4ff-875a2e92be59 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/ac004a2a3ec8e832_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 77344871-dc6c-43c2-89a7-28217f41b23c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 77344871-dc6c-43c2-89a7-28217f41b23c warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # dffbb83c-6ae9-48ba-b4ff-875a2e92be59 This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 1.9079

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8774        | 0.0027 | 200  | 1.9079          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
Best000/09fcb6e2-58a2-4f20-b9d1-109a20bde4c0
Best000
2025-01-29T08:12:10Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-13b-v1.5", "base_model:adapter:lmsys/vicuna-13b-v1.5", "license:llama2", "region:us" ]
null
2025-01-29T08:08:03Z
--- library_name: peft license: llama2 base_model: lmsys/vicuna-13b-v1.5 tags: - axolotl - generated_from_trainer model-index: - name: 09fcb6e2-58a2-4f20-b9d1-109a20bde4c0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: lmsys/vicuna-13b-v1.5 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 050404ebdd7019b8_train_data.json ds_type: json format: custom path: /workspace/input_data/050404ebdd7019b8_train_data.json type: field_instruction: problem field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: Best000/09fcb6e2-58a2-4f20-b9d1-109a20bde4c0 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/050404ebdd7019b8_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f98aeb00-1c68-46fd-a249-d65fd262ecb9 wandb_project: Birthday-SN56-16-Gradients-On-Demand wandb_run: your_name wandb_runid: f98aeb00-1c68-46fd-a249-d65fd262ecb9 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 09fcb6e2-58a2-4f20-b9d1-109a20bde4c0 This model is a fine-tuned version of [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.8405

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0007 | 1    | 1.0677          |
| 1.0569        | 0.0089 | 13   | 0.9251          |
| 0.8927        | 0.0178 | 26   | 0.8607          |
| 0.876         | 0.0267 | 39   | 0.8405          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
nathanialhunt/a7621f00-2a04-4aaf-888e-dfdbe043d9f9
nathanialhunt
2025-01-29T08:12:04Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-13b-v1.5", "base_model:adapter:lmsys/vicuna-13b-v1.5", "license:llama2", "region:us" ]
null
2025-01-29T08:08:02Z
--- library_name: peft license: llama2 base_model: lmsys/vicuna-13b-v1.5 tags: - axolotl - generated_from_trainer model-index: - name: a7621f00-2a04-4aaf-888e-dfdbe043d9f9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: lmsys/vicuna-13b-v1.5 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 050404ebdd7019b8_train_data.json ds_type: json format: custom path: /workspace/input_data/050404ebdd7019b8_train_data.json type: field_instruction: problem field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: nathanialhunt/a7621f00-2a04-4aaf-888e-dfdbe043d9f9 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/050404ebdd7019b8_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f98aeb00-1c68-46fd-a249-d65fd262ecb9 wandb_project: Birthday-SN56-5-Gradients-On-Demand wandb_run: your_name wandb_runid: f98aeb00-1c68-46fd-a249-d65fd262ecb9 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # a7621f00-2a04-4aaf-888e-dfdbe043d9f9 This model is a fine-tuned version of [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.8391

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0007 | 1    | 1.0677          |
| 1.0369        | 0.0089 | 13   | 0.9203          |
| 0.8723        | 0.0178 | 26   | 0.8556          |
| 0.8706        | 0.0267 | 39   | 0.8391          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
kostiantynk1205/d45e62a5-5647-4e6b-9e5c-afc7a53583e9
kostiantynk1205
2025-01-29T08:12:01Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-13b-v1.5", "base_model:adapter:lmsys/vicuna-13b-v1.5", "license:llama2", "region:us" ]
null
2025-01-29T08:07:59Z
--- library_name: peft license: llama2 base_model: lmsys/vicuna-13b-v1.5 tags: - axolotl - generated_from_trainer model-index: - name: d45e62a5-5647-4e6b-9e5c-afc7a53583e9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: lmsys/vicuna-13b-v1.5 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 050404ebdd7019b8_train_data.json ds_type: json format: custom path: /workspace/input_data/050404ebdd7019b8_train_data.json type: field_instruction: problem field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: kostiantynk1205/d45e62a5-5647-4e6b-9e5c-afc7a53583e9 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/050404ebdd7019b8_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f98aeb00-1c68-46fd-a249-d65fd262ecb9 wandb_project: Birthday-SN56-23-Gradients-On-Demand wandb_run: your_name wandb_runid: f98aeb00-1c68-46fd-a249-d65fd262ecb9 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # d45e62a5-5647-4e6b-9e5c-afc7a53583e9 This model is a fine-tuned version of [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.8397

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0007 | 1    | 1.0677          |
| 1.0371        | 0.0089 | 13   | 0.9208          |
| 0.8723        | 0.0178 | 26   | 0.8562          |
| 0.8715        | 0.0267 | 39   | 0.8397          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
mebook/models
mebook
2025-01-29T08:11:45Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-08T19:29:35Z
---
license: apache-2.0
---
lesso13/dec01dc3-b8d1-4621-a332-9cdfe89cd205
lesso13
2025-01-29T08:10:36Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-7b-hf-flash", "base_model:adapter:NousResearch/CodeLlama-7b-hf-flash", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T07:32:04Z
--- library_name: peft base_model: NousResearch/CodeLlama-7b-hf-flash tags: - axolotl - generated_from_trainer model-index: - name: dec01dc3-b8d1-4621-a332-9cdfe89cd205 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-7b-hf-flash bf16: auto chat_template: llama3 datasets: - data_files: - 682a834cc2a59bd6_train_data.json ds_type: json format: custom path: /workspace/input_data/682a834cc2a59bd6_train_data.json type: field_input: context field_instruction: question field_output: cleaned_atom format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso13/dec01dc3-b8d1-4621-a332-9cdfe89cd205 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/682a834cc2a59bd6_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 313417c2-c5dc-47a4-9b02-d2be42090d8e wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 313417c2-c5dc-47a4-9b02-d2be42090d8e warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # dec01dc3-b8d1-4621-a332-9cdfe89cd205 This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-7b-hf-flash) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 0.2728

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9305        | 0.0513 | 200  | 0.2728          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
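This run loaded the base model in 8-bit (`load_in_8bit: true` in the config, hence the `bitsandbytes` tag). A sketch of reproducing that at inference time via `BitsAndBytesConfig`, assuming a CUDA machine with `bitsandbytes` installed; everything other than the model ids and the 8-bit flag is my choice:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # mirrors the training-time flag
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/CodeLlama-7b-hf-flash",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "lesso13/dec01dc3-b8d1-4621-a332-9cdfe89cd205")
```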
nhoxinh/a15ad0fe-907a-4ef1-9df1-91b6208261d6
nhoxinh
2025-01-29T08:10:09Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-360M", "base_model:adapter:unsloth/SmolLM-360M", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T07:03:29Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-360M tags: - axolotl - generated_from_trainer model-index: - name: a15ad0fe-907a-4ef1-9df1-91b6208261d6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM-360M bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ac004a2a3ec8e832_train_data.json ds_type: json format: custom path: /workspace/input_data/ac004a2a3ec8e832_train_data.json type: field_input: title field_instruction: content field_output: summary1 format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhoxinh/a15ad0fe-907a-4ef1-9df1-91b6208261d6 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/ac004a2a3ec8e832_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 77344871-dc6c-43c2-89a7-28217f41b23c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 77344871-dc6c-43c2-89a7-28217f41b23c warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # a15ad0fe-907a-4ef1-9df1-91b6208261d6 This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 1.9085

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8835        | 0.0027 | 200  | 1.9085          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
minhtrannnn/f26943de-a6f5-4a70-9039-bf86aa5157aa
minhtrannnn
2025-01-29T08:01:53Z
7
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Coder-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-29T07:16:29Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: f26943de-a6f5-4a70-9039-bf86aa5157aa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 425476553ab111b0_train_data.json ds_type: json format: custom path: /workspace/input_data/425476553ab111b0_train_data.json type: field_input: Content field_instruction: Title field_output: Summary format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: minhtrannnn/f26943de-a6f5-4a70-9039-bf86aa5157aa hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/425476553ab111b0_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 6972c938-4c63-447c-ab05-b15cf2af5926 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 6972c938-4c63-447c-ab05-b15cf2af5926 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # f26943de-a6f5-4a70-9039-bf86aa5157aa This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set:
- Loss: 1.6909

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9902        | 0.0233 | 200  | 1.6909          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
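The data section of this config shows how axolotl turns a record into a training pair: `Title` supplies the instruction, `Content` the input, and `Summary` the target. A small illustration of that mapping; the exact separator in the flattened `format` string is an assumption:

```python
# How this card's dataset config maps one JSON record to (prompt, target).
record = {
    "Title": "Example headline",
    "Content": "Body text of the article...",
    "Summary": "One-sentence summary.",
}

prompt = f"{record['Title']}\n{record['Content']}"  # format: '{instruction}\n{input}'
target = record["Summary"]                          # field_output: Summary
```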