Dataset columns:

| Column | Type | Range / distinct values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-23 18:27:52 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 492 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-23 18:25:26 |
| card | string | length 11 to 1.01M |
modelId: shibajustfor/0b8828f0-1359-48f5-92e7-5887ef998e05
author: shibajustfor
last_modified: 2025-01-31T08:01:44Z
downloads: 5
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/codegemma-7b", "base_model:adapter:unsloth/codegemma-7b", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T07:54:01Z
card:
--- library_name: peft license: apache-2.0 base_model: unsloth/codegemma-7b tags: - axolotl - generated_from_trainer model-index: - name: 0b8828f0-1359-48f5-92e7-5887ef998e05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/codegemma-7b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - df637254d2930ff2_train_data.json ds_type: json format: custom path: /workspace/input_data/df637254d2930ff2_train_data.json type: field_input: '' field_instruction: prompt field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: shibajustfor/0b8828f0-1359-48f5-92e7-5887ef998e05 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/df637254d2930ff2_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ae731b77-90f6-489c-a8d2-69167bce2830 wandb_project: Birthday-SN56-11-Gradients-On-Demand wandb_run: your_name wandb_runid: ae731b77-90f6-489c-a8d2-69167bce2830 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 0b8828f0-1359-48f5-92e7-5887ef998e05 This model is a fine-tuned version of [unsloth/codegemma-7b](https://huggingface.co/unsloth/codegemma-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9498 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 1.1307 | | 1.0824 | 0.0040 | 13 | 1.0394 | | 0.9829 | 0.0080 | 26 | 0.9763 | | 1.0237 | 0.0120 | 39 | 0.9498 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
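The card above records the LoRA training configuration but gives no inference snippet. As a hedged sketch (the prompt string and generation settings are illustrative, not from the card), such a PEFT adapter is typically attached to its base model like this:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/codegemma-7b"
adapter_id = "shibajustfor/0b8828f0-1359-48f5-92e7-5887ef998e05"

# Load the base model, then layer the LoRA adapter weights on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Illustrative prompt; the card does not specify an intended prompt format.
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```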
modelId: Jellon/Mistral-Small-24B-Instruct-2501-exl2-6bpw
author: Jellon
last_modified: 2025-01-31T08:01:37Z
downloads: 19
likes: 0
library_name: vllm
tags: [ "vllm", "safetensors", "mistral", "text-generation", "transformers", "conversational", "en", "fr", "de", "es", "it", "pt", "zh", "ja", "ru", "ko", "base_model:mistralai/Mistral-Small-24B-Instruct-2501", "base_model:quantized:mistralai/Mistral-Small-24B-Instruct-2501", "license:apache-2.0", "text-generation-inference", "6-bit", "exl2", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-01-31T06:57:45Z
card:
--- language: - en - fr - de - es - it - pt - zh - ja - ru - ko license: apache-2.0 library_name: vllm inference: false base_model: mistralai/Mistral-Small-24B-Instruct-2501 extra_gated_description: >- If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. tags: - transformers --- 6bpw exl2 quant of: https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501 --- # Model Card for Mistral-Small-24B-Instruct-2501 Mistral Small 3 (2501) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models! This model is an instruction-fine-tuned version of the base model: [Mistral-Small-24B-Base-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501). Mistral Small can be deployed locally and is exceptionally "knowledge-dense", fitting in a single RTX 4090 or a 32GB RAM MacBook once quantized. Perfect for: - Fast response conversational agents. - Low latency function calling. - Subject matter experts via fine-tuning. - Local inference for hobbyists and organizations handling sensitive data. For enterprises that need specialized capabilities (increased context, particular modalities, domain specific knowledge, etc.), we will be releasing commercial models beyond what Mistral AI contributes to the community. This release demonstrates our commitment to open source, serving as a strong base model. Learn more about Mistral Small in our [blog post](https://mistral.ai/news/mistral-small-3/). Model developer: Mistral AI Team ## Key Features - **Multilingual:** Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish. - **Agent-Centric:** Offers best-in-class agentic capabilities with native function calling and JSON outputting. - **Advanced Reasoning:** State-of-the-art conversational and reasoning capabilities. - **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes. - **Context Window:** A 32k context window. - **System Prompt:** Maintains strong adherence and support for system prompts. - **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size. ## Benchmark results ### Human evaluated benchmarks | Category | Gemma-2-27B | Qwen-2.5-32B | Llama-3.3-70B | Gpt4o-mini | |----------|-------------|--------------|---------------|------------| | Mistral is better | 0.536 | 0.496 | 0.192 | 0.200 | | Mistral is slightly better | 0.196 | 0.184 | 0.164 | 0.204 | | Ties | 0.052 | 0.060 | 0.236 | 0.160 | | Other is slightly better | 0.060 | 0.088 | 0.112 | 0.124 | | Other is better | 0.156 | 0.172 | 0.296 | 0.312 | **Note**: - We conducted side by side evaluations with an external third-party vendor, on a set of over 1k proprietary coding and generalist prompts. - Evaluators were tasked with selecting their preferred model response from anonymized generations produced by Mistral Small 3 vs another model. - We are aware that in some cases the benchmarks on human judgement starkly differ from publicly available benchmarks, but have taken extra caution in verifying a fair evaluation. We are confident that the above benchmarks are valid. 
### Publicly accessible benchmarks **Reasoning & Knowledge** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | mmlu_pro_5shot_cot_instruct | 0.663 | 0.536 | 0.666 | 0.683 | 0.617 | | gpqa_main_cot_5shot_instruct | 0.453 | 0.344 | 0.531 | 0.404 | 0.377 | **Math & Coding** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | humaneval_instruct_pass@1 | 0.848 | 0.732 | 0.854 | 0.909 | 0.890 | | math_instruct | 0.706 | 0.535 | 0.743 | 0.819 | 0.761 | **Instruction following** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | mtbench_dev | 8.35 | 7.86 | 7.96 | 8.26 | 8.33 | | wildbench | 52.27 | 48.21 | 50.04 | 52.73 | 56.13 | | arena_hard | 0.873 | 0.788 | 0.840 | 0.860 | 0.897 | | ifeval | 0.829 | 0.8065 | 0.8835 | 0.8401 | 0.8499 | **Note**: - Performance accuracy on all benchmarks was obtained through the same internal evaluation pipeline - as such, numbers may vary slightly from previously reported performance ([Qwen2.5-32B-Instruct](https://qwenlm.github.io/blog/qwen2.5/), [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), [Gemma-2-27B-IT](https://huggingface.co/google/gemma-2-27b-it)). - Judge based evals such as Wildbench, Arena hard and MTBench were based on gpt-4o-2024-05-13. ### Basic Instruct Template (V7-Tekken) ``` <s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST] ``` *`<system_prompt>`, `<user message>` and `<assistant response>` are placeholders.* ***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth*** ## Usage The model can be used with the following frameworks: - [`vllm`](https://github.com/vllm-project/vllm): See [here](#vLLM) - [`transformers`](https://github.com/huggingface/transformers): See [here](#Transformers) ### vLLM We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm) to implement production-ready inference pipelines. **Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`. **Note 2**: Make sure to add a system prompt to the model to best tailor it for your needs. If you want to use the model as a general assistant, we recommend the following system prompt: ``` system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris. Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30. When you're not sure about some information, you say that you don't have the information and don't make up anything. If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. 
\"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")""" ``` **_Installation_** Make sure you install [`vLLM >= 0.6.4`](https://github.com/vllm-project/vllm/releases/tag/v0.6.4): ``` pip install --upgrade vllm ``` Also make sure you have [`mistral_common >= 1.5.2`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.2) installed: ``` pip install --upgrade mistral_common ``` You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or on the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39). #### Server We recommand that you use Mistral-Small-24B-Instruct-2501 in a server/client setting. 1. Spin up a server: ``` vllm serve mistralai/Mistral-Small-24B-Instruct-2501 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice ``` **Note:** Running Mistral-Small-24B-Instruct-2501 on GPU requires ~55 GB of GPU RAM in bf16 or fp16. 2. To ping the client you can use a simple Python snippet. ```py import requests import json from datetime import datetime, timedelta url = "http://<your-server>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Mistral-Small-24B-Instruct-2501" messages = [ { "role": "system", "content": "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat." }, { "role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French." }, ] data = {"model": model, "messages": messages} response = requests.post(url, headers=headers, data=json.dumps(data)) print(response.json()["choices"][0]["message"]["content"]) # Sure, here are five non-formal ways to say "See you later" in French: # # 1. À plus tard # 2. À plus # 3. Salut # 4. À toute # 5. Bisous # # ``` # /\_/\ # ( o.o ) # > ^ < # ``` ``` ### Function calling Mistral-Small-24-Instruct-2501 is excellent at function / tool calling tasks via vLLM. *E.g.:* <details> <summary>Example</summary> ```py import requests import json from huggingface_hub import hf_hub_download from datetime import datetime, timedelta url = "http://<your-url>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Mistral-Small-24B-Instruct-2501" def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() today = datetime.today().strftime("%Y-%m-%d") yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d") model_name = repo_id.split("/")[-1] return system_prompt.format(name=model_name, today=today, yesterday=yesterday) SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt") tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "city": { "type": "string", "description": "The city to find the weather for, e.g. 'San Francisco'", }, "state": { "type": "string", "description": "The state abbreviation, e.g. 
'CA' for California", }, "unit": { "type": "string", "description": "The unit for temperature", "enum": ["celsius", "fahrenheit"], }, }, "required": ["city", "state", "unit"], }, }, }, { "type": "function", "function": { "name": "rewrite", "description": "Rewrite a given text for improved clarity", "parameters": { "type": "object", "properties": { "text": { "type": "string", "description": "The input text to rewrite", } }, }, }, }, ] messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.", }, { "role": "assistant", "content": "", "tool_calls": [ { "id": "bbc5b7ede", "type": "function", "function": { "name": "rewrite", "arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}', }, } ], }, { "role": "tool", "content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}', "tool_call_id": "bbc5b7ede", "name": "rewrite", }, { "role": "assistant", "content": "---\n\nOpenAI is a FOR-profit company.", }, { "role": "user", "content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?", }, ] data = {"model": model, "messages": messages, "tools": tools} response = requests.post(url, headers=headers, data=json.dumps(data)) import ipdb; ipdb.set_trace() print(response.json()["choices"][0]["message"]["tool_calls"]) # [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}] ``` </details> #### Offline ```py from vllm import LLM from vllm.sampling_params import SamplingParams from datetime import datetime, timedelta SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat." user_prompt = "Give me 5 non-formal ways to say 'See you later' in French." messages = [ { "role": "system", "content": SYSTEM_PROMPT }, { "role": "user", "content": user_prompt }, ] # note that running this model on GPU requires over 60 GB of GPU RAM llm = LLM(model=model_name, tokenizer_mode="mistral", tensor_parallel_size=8) sampling_params = SamplingParams(max_tokens=512, temperature=0.15) outputs = llm.chat(messages, sampling_params=sampling_params) print(outputs[0].outputs[0].text) # Sure, here are five non-formal ways to say "See you later" in French: # # 1. À plus tard # 2. À plus # 3. Salut # 4. À toute # 5. Bisous # # ``` # /\_/\ # ( o.o ) # > ^ < # ``` ``` ### Transformers If you want to use Hugging Face transformers to generate text, you can do something like this. ```py from transformers import pipeline import torch messages = [ {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."}, ] chatbot = pipeline("text-generation", model="mistralai/Mistral-Small-24B-Instruct-2501", max_new_tokens=256, torch_dtype=torch.bfloat16) chatbot(messages) ``` ### Ollama [Ollama](https://github.com/ollama/ollama) can run this model locally on MacOS, Windows and Linux. 
``` ollama run mistral-small ``` 4-bit quantization (aliased to default): ``` ollama run mistral-small:24b-instruct-2501-q4_K_M ``` 8-bit quantization: ``` ollama run mistral-small:24b-instruct-2501-q8_0 ``` FP16: ``` ollama run mistral-small:24b-instruct-2501-fp16 ```
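Beyond the CLI commands above, the pulled tags can also be called from Python. A minimal sketch assuming the `ollama` Python client and a locally running Ollama server with the 4-bit tag already pulled; the dictionary-style access to the response is an assumption about the client's return format:

```py
import ollama

# Assumes `ollama pull mistral-small:24b-instruct-2501-q4_K_M` has been run locally.
response = ollama.chat(
    model="mistral-small:24b-instruct-2501-q4_K_M",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."},
    ],
)
print(response["message"]["content"])
```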
modelId: daniel40/3ebc1623-8736-436e-94db-12882bab5d4a
author: daniel40
last_modified: 2025-01-31T08:01:09Z
downloads: 10
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/codegemma-7b", "base_model:adapter:unsloth/codegemma-7b", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T07:53:32Z
card:
--- library_name: peft license: apache-2.0 base_model: unsloth/codegemma-7b tags: - axolotl - generated_from_trainer model-index: - name: 3ebc1623-8736-436e-94db-12882bab5d4a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/codegemma-7b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - df637254d2930ff2_train_data.json ds_type: json format: custom path: /workspace/input_data/df637254d2930ff2_train_data.json type: field_input: '' field_instruction: prompt field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: daniel40/3ebc1623-8736-436e-94db-12882bab5d4a hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/df637254d2930ff2_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ae731b77-90f6-489c-a8d2-69167bce2830 wandb_project: Birthday-SN56-27-Gradients-On-Demand wandb_run: your_name wandb_runid: ae731b77-90f6-489c-a8d2-69167bce2830 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 3ebc1623-8736-436e-94db-12882bab5d4a This model is a fine-tuned version of [unsloth/codegemma-7b](https://huggingface.co/unsloth/codegemma-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9486 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 1.1307 | | 1.09 | 0.0040 | 13 | 1.0473 | | 0.9909 | 0.0080 | 26 | 0.9784 | | 1.0252 | 0.0120 | 39 | 0.9486 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: great0001/e1e9d437-97fa-4ede-99f0-8d2002c08b86
author: great0001
last_modified: 2025-01-31T08:00:29Z
downloads: 7
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "license:mit", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T07:43:29Z
card:
--- library_name: peft license: mit base_model: HuggingFaceH4/zephyr-7b-beta tags: - axolotl - generated_from_trainer model-index: - name: e1e9d437-97fa-4ede-99f0-8d2002c08b86 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: HuggingFaceH4/zephyr-7b-beta bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ecd7cec85692169d_train_data.json ds_type: json format: custom path: /workspace/input_data/ecd7cec85692169d_train_data.json type: field_instruction: input_persona field_output: prompt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: great0001/e1e9d437-97fa-4ede-99f0-8d2002c08b86 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ecd7cec85692169d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 wandb_project: Birthday-SN56-14-Gradients-On-Demand wandb_run: your_name wandb_runid: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e1e9d437-97fa-4ede-99f0-8d2002c08b86 This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0001 | 1 | nan | | 0.0 | 0.0007 | 13 | nan | | 0.0 | 0.0015 | 26 | nan | | 0.0 | 0.0022 | 39 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q3_K_S-GGUF
author: roleplaiapp
last_modified: 2025-01-31T08:00:26Z
downloads: 22
likes: 0
library_name: transformers
tags: [ "transformers", "gguf", "3-bit", "70b", "Q3_K_S", "deepseek", "distill", "llama", "llama-cpp", "text-generation", "uncensored", "endpoints_compatible", "region:us", "conversational" ]
pipeline_tag: text-generation
createdAt: 2025-01-31T07:58:38Z
card:
--- library_name: transformers pipeline_tag: text-generation tags: - 3-bit - 70b - Q3_K_S - deepseek - distill - gguf - llama - llama-cpp - text-generation - uncensored --- # roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q3_K_S-GGUF **Repo:** `roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q3_K_S-GGUF` **Original Model:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2` **Quantized File:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q3_K_S.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q3_K_S` ## Overview This is a GGUF Q3_K_S quantized version of DeepSeek-R1-Distill-Llama-70B-Uncensored-v2 ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
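Since the card names the exact quantized file, here is a hedged sketch of loading it with llama-cpp-python; the context size and GPU offload values are illustrative, and a 70B Q3_K_S still needs substantial RAM/VRAM:

```py
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the GGUF file named in the card, then load it.
gguf_path = hf_hub_download(
    repo_id="roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q3_K_S-GGUF",
    filename="DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q3_K_S.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)  # -1 offloads all layers if memory allows

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one paragraph, what does Q3_K_S quantization trade off?"}]
)
print(out["choices"][0]["message"]["content"])
```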
modelId: nomadrp/tq-llama-binary-20each-ws-all-langs-2epochs
author: nomadrp
last_modified: 2025-01-31T07:59:59Z
downloads: 18
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T06:39:22Z
card:
--- library_name: peft license: llama3.1 base_model: meta-llama/Meta-Llama-3.1-8B-Instruct tags: - trl - dpo - generated_from_trainer model-index: - name: tq-llama-binary-20each-ws-all-langs-2epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tq-llama-binary-20each-ws-all-langs-2epochs This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.14.0 - Transformers 4.45.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.20.3
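The card above describes a DPO-trained LoRA adapter. A hedged sketch of how such an adapter can be merged into its base model for standalone deployment (the output directory name is illustrative, not from the card):

```py
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_id = "nomadrp/tq-llama-binary-20each-ws-all-langs-2epochs"

# Attach the DPO-trained adapter, then fold its LoRA deltas into the base weights.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
merged = model.merge_and_unload()

# The merged checkpoint loads as a plain Transformers model; peft is not needed at inference time.
merged.save_pretrained("tq-llama-dpo-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("tq-llama-dpo-merged")
```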
modelId: daniel40/ad5d4445-c351-4cb7-9215-273691ec4f23
author: daniel40
last_modified: 2025-01-31T07:58:28Z
downloads: 7
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-7B-Instruct", "base_model:adapter:Qwen/Qwen2-7B-Instruct", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T07:50:18Z
card:
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: ad5d4445-c351-4cb7-9215-273691ec4f23 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 4dcb711299282333_train_data.json ds_type: json format: custom path: /workspace/input_data/4dcb711299282333_train_data.json type: field_input: phonemes field_instruction: text_description field_output: text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: daniel40/ad5d4445-c351-4cb7-9215-273691ec4f23 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: constant max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/4dcb711299282333_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ab649ea5-2df5-460b-bb5c-9011a949e67b wandb_project: Birthday-SN56-31-Gradients-On-Demand wandb_run: your_name wandb_runid: ab649ea5-2df5-460b-bb5c-9011a949e67b warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # ad5d4445-c351-4cb7-9215-273691ec4f23 This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0271 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0002 | 1 | 0.9830 | | 0.0618 | 0.0101 | 50 | 0.0682 | | 0.0418 | 0.0203 | 100 | 0.0421 | | 0.0313 | 0.0304 | 150 | 0.0315 | | 0.0237 | 0.0406 | 200 | 0.0271 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: Legalaz/03_llamboch2_02_55
author: Legalaz
last_modified: 2025-01-31T07:58:22Z
downloads: 13
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2203.05482", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-01-31T07:56:10Z
card:
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # top This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * /root/top2 * /root/top1 ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /root/top2 parameters: weight: 0.8969 - model: /root/top1 parameters: weight: 0.0628 merge_method: linear dtype: bfloat16 ```
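For readers unfamiliar with the linear merge method referenced above, here is a conceptual sketch of what a weighted linear merge computes. This is an illustration, not mergekit's actual implementation, and the weight-normalization step is an assumption about its default behavior:

```py
import torch

def linear_merge(state_dicts, weights, normalize=True):
    """Weighted average of parameter tensors across models with identical architectures."""
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]  # e.g. 0.8969 and 0.0628 rescaled to sum to 1
    merged = {}
    for key in state_dicts[0]:
        acc = sum(w * sd[key].to(torch.float32) for w, sd in zip(weights, state_dicts))
        merged[key] = acc.to(torch.bfloat16)  # matches `dtype: bfloat16` in the config above
    return merged
```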
modelId: baby-dev/bc1bdc36-6283-4163-ab2e-c5253a0af888
author: baby-dev
last_modified: 2025-01-31T07:58:12Z
downloads: 7
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-7B-Instruct", "base_model:adapter:Qwen/Qwen2-7B-Instruct", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T07:50:17Z
card:
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: bc1bdc36-6283-4163-ab2e-c5253a0af888 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 4dcb711299282333_train_data.json ds_type: json format: custom path: /workspace/input_data/4dcb711299282333_train_data.json type: field_input: phonemes field_instruction: text_description field_output: text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: baby-dev/bc1bdc36-6283-4163-ab2e-c5253a0af888 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: constant max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/4dcb711299282333_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ab649ea5-2df5-460b-bb5c-9011a949e67b wandb_project: SN56-43 wandb_run: your_name wandb_runid: ab649ea5-2df5-460b-bb5c-9011a949e67b warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # bc1bdc36-6283-4163-ab2e-c5253a0af888 This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0255 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0002 | 1 | 0.9844 | | 0.0599 | 0.0101 | 50 | 0.0665 | | 0.0431 | 0.0203 | 100 | 0.0418 | | 0.0329 | 0.0304 | 150 | 0.0323 | | 0.0238 | 0.0406 | 200 | 0.0255 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: lesso15/6a185ea0-8544-4a87-8f48-3be4cdceb051
author: lesso15
last_modified: 2025-01-31T07:58:02Z
downloads: 6
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B", "base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B", "license:llama3", "8-bit", "bitsandbytes", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T07:03:11Z
card:
--- library_name: peft license: llama3 base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B tags: - axolotl - generated_from_trainer model-index: - name: 6a185ea0-8544-4a87-8f48-3be4cdceb051 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B bf16: auto chat_template: llama3 datasets: - data_files: - 423760bfd2fbfffa_train_data.json ds_type: json format: custom path: /workspace/input_data/423760bfd2fbfffa_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso15/6a185ea0-8544-4a87-8f48-3be4cdceb051 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/423760bfd2fbfffa_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 84585b20-d892-48c7-a995-1238079422b0 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 84585b20-d892-48c7-a995-1238079422b0 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 6a185ea0-8544-4a87-8f48-3be4cdceb051 This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.6431 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3248 | 0.0205 | 200 | 1.6431 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
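Because the config above loads the base model with bitsandbytes 8-bit quantization, inference can mirror that setup. A hedged sketch, with the generation settings and the Korean prompt purely illustrative:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "MLP-KTLim/llama-3-Korean-Bllossom-8B"
adapter_id = "lesso15/6a185ea0-8544-4a87-8f48-3be4cdceb051"

# Load the base in 8-bit, as during training, then attach the LoRA adapter.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("간단히 자기소개를 해주세요.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```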
modelId: roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q3_K_M-GGUF
author: roleplaiapp
last_modified: 2025-01-31T07:57:53Z
downloads: 85
likes: 0
library_name: transformers
tags: [ "transformers", "gguf", "3-bit", "70b", "Q3_K_M", "deepseek", "distill", "llama", "llama-cpp", "text-generation", "uncensored", "endpoints_compatible", "region:us", "conversational" ]
pipeline_tag: text-generation
createdAt: 2025-01-31T07:55:45Z
card:
--- library_name: transformers pipeline_tag: text-generation tags: - 3-bit - 70b - Q3_K_M - deepseek - distill - gguf - llama - llama-cpp - text-generation - uncensored --- # roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q3_K_M-GGUF **Repo:** `roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q3_K_M-GGUF` **Original Model:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2` **Quantized File:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q3_K_M.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q3_K_M` ## Overview This is a GGUF Q3_K_M quantized version of DeepSeek-R1-Distill-Llama-70B-Uncensored-v2 ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
modelId: sniperfix/2b9a12c6-0326-4f47-ab13-75742dfbd91f
author: sniperfix
last_modified: 2025-01-31T07:57:04Z
downloads: 12
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:TinyLlama/TinyLlama_v1.1", "base_model:adapter:TinyLlama/TinyLlama_v1.1", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T07:19:00Z
card:
--- library_name: peft license: apache-2.0 base_model: TinyLlama/TinyLlama_v1.1 tags: - axolotl - generated_from_trainer model-index: - name: 2b9a12c6-0326-4f47-ab13-75742dfbd91f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: TinyLlama/TinyLlama_v1.1 bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - f6627dfddf7998ee_train_data.json ds_type: json format: custom path: /workspace/input_data/f6627dfddf7998ee_train_data.json type: field_input: traj_0_response field_instruction: prompt field_output: traj_0_solution_0 format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 256 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 32 gradient_checkpointing: true group_by_length: false hub_model_id: sniperfix/2b9a12c6-0326-4f47-ab13-75742dfbd91f hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 3 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lora_target_modules: - q_proj - k_proj - v_proj - o_proj - gate_proj - down_proj - up_proj lr_scheduler: cosine max_grad_norm: 2 max_steps: 90 micro_batch_size: 2 mlflow_experiment_name: /tmp/f6627dfddf7998ee_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1.0e-05 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 2048 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: indexjupri-sniper-country wandb_mode: online wandb_name: 41e012f9-ee25-49ae-abe0-b64021ea6e9d wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 41e012f9-ee25-49ae-abe0-b64021ea6e9d warmup_steps: 20 weight_decay: 0.02 xformers_attention: false ``` </details><br> # 2b9a12c6-0326-4f47-ab13-75742dfbd91f This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8671 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 20 - training_steps: 90 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0011 | 1 | 1.4610 | | 1.5429 | 0.0087 | 8 | 1.3601 | | 1.1108 | 0.0175 | 16 | 1.1245 | | 1.2916 | 0.0262 | 24 | 1.0249 | | 1.143 | 0.0350 | 32 | 0.9758 | | 1.009 | 0.0437 | 40 | 0.9339 | | 0.8677 | 0.0525 | 48 | 0.9071 | | 0.9548 | 0.0612 | 56 | 0.8886 | | 0.9609 | 0.0700 | 64 | 0.8789 | | 0.8574 | 0.0787 | 72 | 0.8704 | | 0.9691 | 0.0875 | 80 | 0.8683 | | 0.7984 | 0.0962 | 88 | 0.8671 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
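For reference, the LoRA hyperparameters in the config above map roughly onto the following peft LoraConfig. This is a hedged reconstruction for anyone reproducing the adapter outside axolotl, not code taken from the card:

```py
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                 # lora_r in the axolotl config
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    task_type="CAUSAL_LM",
)
```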
modelId: kostiantynk-out/d72aae4a-2d1c-456e-b06d-85972f1a68f9
author: kostiantynk-out
last_modified: 2025-01-31T07:55:31Z
downloads: 7
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-7B-Instruct", "base_model:adapter:Qwen/Qwen2-7B-Instruct", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T07:50:16Z
card:
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: d72aae4a-2d1c-456e-b06d-85972f1a68f9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 4dcb711299282333_train_data.json ds_type: json format: custom path: /workspace/input_data/4dcb711299282333_train_data.json type: field_input: phonemes field_instruction: text_description field_output: text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: kostiantynk-out/d72aae4a-2d1c-456e-b06d-85972f1a68f9 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/4dcb711299282333_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ab649ea5-2df5-460b-bb5c-9011a949e67b wandb_project: Birthday-SN56-10-Gradients-On-Demand wandb_run: your_name wandb_runid: ab649ea5-2df5-460b-bb5c-9011a949e67b warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # d72aae4a-2d1c-456e-b06d-85972f1a68f9 This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.1137 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0002 | 1 | 1.0426 | | 0.7688 | 0.0026 | 13 | 0.3022 | | 0.2579 | 0.0053 | 26 | 0.1521 | | 0.1574 | 0.0079 | 39 | 0.1137 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
modelId: zzunyang/KLQD_law_gemma
author: zzunyang
last_modified: 2025-01-31T07:55:23Z
downloads: 27
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "arxiv:1910.09700", "base_model:architectyou/law-gemma-2-ko-9b-it", "base_model:adapter:architectyou/law-gemma-2-ko-9b-it", "region:us" ]
pipeline_tag: null
createdAt: 2025-01-31T02:02:05Z
card:
--- base_model: architectyou/law-gemma-2-ko-9b-it library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
brixeus/186b9937-680f-4d12-a6b9-698e7371df41
brixeus
2025-01-31T07:47:17Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:adapter:DeepMount00/Llama-3-8b-Ita", "license:llama3", "region:us" ]
null
2025-01-31T07:36:59Z
--- library_name: peft license: llama3 base_model: DeepMount00/Llama-3-8b-Ita tags: - axolotl - generated_from_trainer model-index: - name: 186b9937-680f-4d12-a6b9-698e7371df41 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: DeepMount00/Llama-3-8b-Ita bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ff701e66869152c5_train_data.json ds_type: json format: custom path: /workspace/input_data/ff701e66869152c5_train_data.json type: field_instruction: src field_output: tgt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: brixeus/186b9937-680f-4d12-a6b9-698e7371df41 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/ff701e66869152c5_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 special_tokens: pad_token: <|eot_id|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: 37e884fe-9938-432e-9e6b-d663af3f92e4 wandb_project: Gradients-On-Three wandb_run: your_name wandb_runid: 37e884fe-9938-432e-9e6b-d663af3f92e4 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 186b9937-680f-4d12-a6b9-698e7371df41 This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2052 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 73 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0412 | 1 | 2.0562 | | 1.9707 | 0.2887 | 7 | 1.8532 | | 1.5902 | 0.5773 | 14 | 1.4597 | | 1.228 | 0.8660 | 21 | 1.3228 | | 1.4281 | 1.1546 | 28 | 1.2710 | | 1.0993 | 1.4433 | 35 | 1.2520 | | 1.0009 | 1.7320 | 42 | 1.2434 | | 1.0141 | 2.0206 | 49 | 1.2145 | | 0.8322 | 2.3093 | 56 | 1.2048 | | 0.8458 | 2.5979 | 63 | 1.2047 | | 0.8266 | 2.8866 | 70 | 1.2052 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
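The auto-generated card above documents only the training run, not inference. As a minimal, hedged sketch (not part of the original card), the published LoRA adapter could be attached to the listed base model with `peft` roughly as follows; the prompt and generation settings are illustrative assumptions.

```python
# Minimal sketch, assuming the LoRA adapter weights in this repo are compatible
# with the listed base model; prompt and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "DeepMount00/Llama-3-8b-Ita"
adapter_id = "brixeus/186b9937-680f-4d12-a6b9-698e7371df41"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter

prompt = "Buongiorno, come stai?"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```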
gvo1112/task-3-microsoft-Phi-3.5-mini-instruct-1738309621
gvo1112
2025-01-31T07:47:05Z
60
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:adapter:microsoft/Phi-3.5-mini-instruct", "region:us" ]
null
2025-01-31T07:47:01Z
--- base_model: microsoft/Phi-3.5-mini-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
mrferr3t/93948e04-434d-41a0-a6ea-5a1a1d5280f5
mrferr3t
2025-01-31T07:45:58Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M-Instruct", "base_model:adapter:unsloth/SmolLM2-360M-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-31T07:31:29Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-360M-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 93948e04-434d-41a0-a6ea-5a1a1d5280f5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM2-360M-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ed31b7df3268d6c5_train_data.json ds_type: json format: custom path: /workspace/input_data/ed31b7df3268d6c5_train_data.json type: field_input: '' field_instruction: input field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 50 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/93948e04-434d-41a0-a6ea-5a1a1d5280f5 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0005 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 99 micro_batch_size: 2 mlflow_experiment_name: /tmp/ed31b7df3268d6c5_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 300 saves_per_epoch: 0 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 2ccd3dbf-7834-4a29-bd07-6df17c1f1f49 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 2ccd3dbf-7834-4a29-bd07-6df17c1f1f49 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 93948e04-434d-41a0-a6ea-5a1a1d5280f5 This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 99 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.1095 | 0.0000 | 1 | 1.0542 | | 0.7826 | 0.0012 | 50 | 0.7808 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
nttx/a8e824be-72e3-41d8-9e1c-33fda2c3e56d
nttx
2025-01-31T07:44:51Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:adapter:DeepMount00/Llama-3-8b-Ita", "license:llama3", "region:us" ]
null
2025-01-31T07:41:21Z
--- library_name: peft license: llama3 base_model: DeepMount00/Llama-3-8b-Ita tags: - axolotl - generated_from_trainer model-index: - name: a8e824be-72e3-41d8-9e1c-33fda2c3e56d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: DeepMount00/Llama-3-8b-Ita bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ff701e66869152c5_train_data.json ds_type: json format: custom path: /workspace/input_data/ff701e66869152c5_train_data.json type: field_instruction: src field_output: tgt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: nttx/a8e824be-72e3-41d8-9e1c-33fda2c3e56d hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/ff701e66869152c5_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: <|eot_id|> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 37e884fe-9938-432e-9e6b-d663af3f92e4 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 37e884fe-9938-432e-9e6b-d663af3f92e4 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # a8e824be-72e3-41d8-9e1c-33fda2c3e56d This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3129 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 49 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3069 | 0.9948 | 48 | 1.3136 | | 2.5664 | 1.0155 | 49 | 1.3129 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
EpistemeAI/Reasoning-Llama-3.1-CoT-RE1-NMT
EpistemeAI
2025-01-31T07:44:17Z
109
1
null
[ "safetensors", "llama", "dataset:AI-MO/NuminaMath-TIR", "dataset:bespokelabs/Bespoke-Stratos-17k", "license:apache-2.0", "region:us" ]
null
2025-01-29T05:51:48Z
---
datasets:
- AI-MO/NuminaMath-TIR
- bespokelabs/Bespoke-Stratos-17k
license: apache-2.0
---

Upgraded version: [EpistemeAI/Reasoning-Llama-3.1-CoT-RE1-NMT-V2](https://huggingface.co/EpistemeAI/Reasoning-Llama-3.1-CoT-RE1-NMT-V2)

## Introduction

Introducing Reasoning Llama 3.1: the next evolution in conversational AI.

We are thrilled to unveil Reasoning Llama 3.1, the latest advancement in our suite of AI models. Building upon the robust foundation of the Llama series, Reasoning Llama 3.1 introduces Chain of Thought (CoT) capabilities that elevate its reasoning to a new level.

## Key Features of Reasoning Llama 3.1

**Enhanced Chain of Thought reasoning:** At the core of Reasoning Llama 3.1 lies its CoT framework, which enables multi-step reasoning with greater accuracy and coherence. This yields more reliable and contextually appropriate responses, especially for complex queries that require logical progression.

**Conversational excellence:** Designed with interactivity in mind, Reasoning Llama 3.1 excels at maintaining engaging, fluid conversations. Whether the exchange is casual dialogue or in-depth discussion, the model adapts to different conversational styles and provides a natural, intuitive interaction experience.

**Instruction-supervised fine-tuning:** Reasoning Llama 3.1 was fine-tuned on diverse instructional data, which improves its ability to understand and execute user instructions with precision and makes it useful across a wide range of applications.

**Unsloth integration:** Incorporating Unsloth, Reasoning Llama 3.1 benefits from continuous learning capabilities. This integration allows the model to adapt and improve over time, keeping it up to date with evolving language patterns and user needs without constant manual intervention.

## Why Choose Reasoning Llama 3.1?

Reasoning Llama 3.1 stands out as a versatile and powerful AI solution for both developers and end users. Its combination of advanced reasoning, conversational intelligence, and adaptive learning makes it well suited for applications ranging from customer support and virtual assistants to educational tools and creative content generation.

As we continue to push the boundaries of artificial intelligence, Reasoning Llama 3.1 exemplifies our commitment to delivering state-of-the-art models that give users intelligent, reliable, and user-friendly technology. Experience the future of conversational AI with Reasoning Llama 3.1 and unlock new possibilities in human-machine interaction.

## How to use

With `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or the Auto classes with the `generate()` function. Make sure your installation is current via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline

model_id = "EpistemeAI/Reasoning-Llama-3.1-CoT-RE1-NMT"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a powerful AI math assistant"},
    # Raw string so the LaTeX backslashes (e.g. \frac) are not treated as escapes.
    {"role": "user", "content": r"Given the quadratic function $f(x)=ax^{2}+bx+c$ with its derivative $f′(x)$, where $f′(0) > 0$, and $f(x)\geqslant 0$ for any real number $x$, find the minimum value of $\frac{f(1)}{f′(0)}$."},
]
outputs = pipe(
    messages,
    max_new_tokens=2048,
)
print(outputs[0]["generated_text"][-1])
```

# Uploaded model

- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Finetuned from model:** EpistemeAI/Reasoning-Llama-3.1-CoT-RE1

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

## Citation

```
@misc{EpistemeAI2025,
  title  = {EpistemeAI},
  author = {Thomas Yiu},
  year   = {2025},
}

@misc{bespoke_stratos,
  author       = {Bespoke Labs},
  title        = {Bespoke-Stratos: The unreasonable effectiveness of reasoning distillation},
  howpublished = {https://www.bespokelabs.ai/blog/bespoke-stratos-the-unreasonable-effectiveness-of-reasoning-distillation},
  note         = {Accessed: 2025-01-22},
  year         = {2025}
}

@misc{numina_math_datasets,
  author       = {Jia LI, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Costa Huang, Kashif Rasul, Longhui Yu, Albert Jiang, Ziju Shen, Zihan Qin, Bin Dong, Li Zhou, Yann Fleureau, Guillaume Lample, and Stanislas Polu},
  title        = {NuminaMath TIR},
  year         = {2024},
  publisher    = {Numina},
  journal      = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/AI-MO/NuminaMath-TIR}},
  note         = {Report: https://github.com/project-numina/aimo-progress-prize/blob/main/report/numina_dataset.pdf}
}
```

## Contact

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).

# Reference/Inspired

[Open-R1: a fully open reproduction of DeepSeek-R1](https://huggingface.co/blog/open-r1)
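The "How to use" section above mentions the Auto classes with `generate()` but only shows the `pipeline` path. As a hedged supplement (not part of the original card), a minimal sketch of that second path could look like the following; the prompt and generation settings are illustrative assumptions.

```python
# Hedged sketch of the Auto-classes path; assumes the checkpoint ships a chat
# template. The prompt and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/Reasoning-Llama-3.1-CoT-RE1-NMT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a powerful AI math assistant"},
    {"role": "user", "content": "What is the minimum value of x**2 - 4*x + 7?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```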
kostiantynk1205/c78d53f3-f1d9-459c-9563-6d0fbe300637
kostiantynk1205
2025-01-31T07:43:55Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:adapter:DeepMount00/Llama-3-8b-Ita", "license:llama3", "region:us" ]
null
2025-01-31T07:42:49Z
--- library_name: peft license: llama3 base_model: DeepMount00/Llama-3-8b-Ita tags: - axolotl - generated_from_trainer model-index: - name: c78d53f3-f1d9-459c-9563-6d0fbe300637 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: DeepMount00/Llama-3-8b-Ita bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ff701e66869152c5_train_data.json ds_type: json format: custom path: /workspace/input_data/ff701e66869152c5_train_data.json type: field_instruction: src field_output: tgt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: kostiantynk1205/c78d53f3-f1d9-459c-9563-6d0fbe300637 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ff701e66869152c5_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|eot_id|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 37e884fe-9938-432e-9e6b-d663af3f92e4 wandb_project: Birthday-SN56-23-Gradients-On-Demand wandb_run: your_name wandb_runid: 37e884fe-9938-432e-9e6b-d663af3f92e4 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # c78d53f3-f1d9-459c-9563-6d0fbe300637 This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2763 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0104 | 1 | 2.0874 | | 1.8192 | 0.1347 | 13 | 1.4435 | | 1.422 | 0.2694 | 26 | 1.3171 | | 1.2723 | 0.4041 | 39 | 1.2763 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q2_K-GGUF
roleplaiapp
2025-01-31T07:43:36Z
326
0
transformers
[ "transformers", "gguf", "2-bit", "70b", "Q2_K", "deepseek", "distill", "llama", "llama-cpp", "text-generation", "uncensored", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T07:42:07Z
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 2-bit
- 70b
- Q2_K
- deepseek
- distill
- gguf
- llama
- llama-cpp
- text-generation
- uncensored
---

# roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q2_K-GGUF

**Repo:** `roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q2_K-GGUF`
**Original Model:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2`
**Quantized File:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q2_K.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q2_K`

## Overview

This is a GGUF Q2_K quantized version of DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.

## Quantized by

I often have idle GPUs while building and testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)
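Since the card lists `llama-cpp` among its tags but includes no usage snippet, here is a hedged sketch (not from the original card) of running the quantized file locally with the `llama-cpp-python` bindings; the local path, context size, and GPU offload setting are illustrative assumptions.

```python
# Hedged sketch: run the downloaded Q2_K GGUF with llama-cpp-python.
# The local path, context size, and GPU offload setting are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q2_K.gguf",
    n_ctx=4096,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to GPU if the build supports it
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a Q2_K quantization trades off."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```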
great0001/f2698803-aa6f-4d0f-ae24-6d5d709d5bd6
great0001
2025-01-31T07:39:59Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:adapter:DeepMount00/Llama-3-8b-Ita", "license:llama3", "region:us" ]
null
2025-01-31T07:37:38Z
--- library_name: peft license: llama3 base_model: DeepMount00/Llama-3-8b-Ita tags: - axolotl - generated_from_trainer model-index: - name: f2698803-aa6f-4d0f-ae24-6d5d709d5bd6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: DeepMount00/Llama-3-8b-Ita bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ff701e66869152c5_train_data.json ds_type: json format: custom path: /workspace/input_data/ff701e66869152c5_train_data.json type: field_instruction: src field_output: tgt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: great0001/f2698803-aa6f-4d0f-ae24-6d5d709d5bd6 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ff701e66869152c5_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|eot_id|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 37e884fe-9938-432e-9e6b-d663af3f92e4 wandb_project: Mine-SN56-20-Gradients-On-Demand wandb_run: your_name wandb_runid: 37e884fe-9938-432e-9e6b-d663af3f92e4 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f2698803-aa6f-4d0f-ae24-6d5d709d5bd6 This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3476 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0052 | 1 | 2.0874 | | 1.74 | 0.0674 | 13 | 1.4070 | | 1.3448 | 0.1347 | 26 | 1.3911 | | 1.3826 | 0.2021 | 39 | 1.3476 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
cilooor/046b85c9-23cf-42fa-ad72-faea29e54f78
cilooor
2025-01-31T07:39:05Z
15
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:TinyLlama/TinyLlama_v1.1", "base_model:adapter:TinyLlama/TinyLlama_v1.1", "license:apache-2.0", "region:us" ]
null
2025-01-31T07:18:44Z
--- library_name: peft license: apache-2.0 base_model: TinyLlama/TinyLlama_v1.1 tags: - axolotl - generated_from_trainer model-index: - name: 046b85c9-23cf-42fa-ad72-faea29e54f78 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: TinyLlama/TinyLlama_v1.1 bf16: true chat_template: llama3 data_processes: 24 dataset_prepared_path: null datasets: - data_files: - f6627dfddf7998ee_train_data.json ds_type: json format: custom path: /workspace/input_data/f6627dfddf7998ee_train_data.json type: field_input: traj_0_response field_instruction: prompt field_output: traj_0_solution_0 format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto early_stopping_patience: 4 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: cilooor/046b85c9-23cf-42fa-ad72-faea29e54f78 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 7.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.07 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine lr_scheduler_warmup_steps: 50 max_grad_norm: 0.3 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/f6627dfddf7998ee_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.999 adam_epsilon: 1e-8 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null seed: 17333 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer total_train_batch_size: 32 train_batch_size: 8 train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 41e012f9-ee25-49ae-abe0-b64021ea6e9d wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 41e012f9-ee25-49ae-abe0-b64021ea6e9d warmup_steps: 30 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 046b85c9-23cf-42fa-ad72-faea29e54f78 This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8387 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 17333 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.999,adam_epsilon=1e-8 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 30 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.7648 | 0.0005 | 1 | 1.3696 | | 1.1307 | 0.0273 | 50 | 0.9475 | | 1.0357 | 0.0547 | 100 | 0.8693 | | 0.9074 | 0.0820 | 150 | 0.8440 | | 0.9893 | 0.1093 | 200 | 0.8387 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
kostiantynk/18f6a8e3-9c5b-4acf-9f82-5d4b91ac9b8c
kostiantynk
2025-01-31T07:39:02Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:adapter:DeepMount00/Llama-3-8b-Ita", "license:llama3", "region:us" ]
null
2025-01-31T07:37:48Z
--- library_name: peft license: llama3 base_model: DeepMount00/Llama-3-8b-Ita tags: - axolotl - generated_from_trainer model-index: - name: 18f6a8e3-9c5b-4acf-9f82-5d4b91ac9b8c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: DeepMount00/Llama-3-8b-Ita bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ff701e66869152c5_train_data.json ds_type: json format: custom path: /workspace/input_data/ff701e66869152c5_train_data.json type: field_instruction: src field_output: tgt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: kostiantynk/18f6a8e3-9c5b-4acf-9f82-5d4b91ac9b8c hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ff701e66869152c5_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|eot_id|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 37e884fe-9938-432e-9e6b-d663af3f92e4 wandb_project: Birthday-SN56-7-Gradients-On-Demand wandb_run: your_name wandb_runid: 37e884fe-9938-432e-9e6b-d663af3f92e4 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 18f6a8e3-9c5b-4acf-9f82-5d4b91ac9b8c This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2755 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0104 | 1 | 2.0874 | | 1.8209 | 0.1347 | 13 | 1.4453 | | 1.4265 | 0.2694 | 26 | 1.3157 | | 1.2728 | 0.4041 | 39 | 1.2755 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
adammandic87/bc1558dc-b7da-4aad-bc5e-ea57281facde
adammandic87
2025-01-31T07:36:21Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "license:mit", "region:us" ]
null
2025-01-31T07:19:09Z
--- library_name: peft license: mit base_model: HuggingFaceH4/zephyr-7b-beta tags: - axolotl - generated_from_trainer model-index: - name: bc1558dc-b7da-4aad-bc5e-ea57281facde results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: HuggingFaceH4/zephyr-7b-beta bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ecd7cec85692169d_train_data.json ds_type: json format: custom path: /workspace/input_data/ecd7cec85692169d_train_data.json type: field_instruction: input_persona field_output: prompt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: adammandic87/bc1558dc-b7da-4aad-bc5e-ea57281facde hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ecd7cec85692169d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 wandb_project: Birthday-SN56-13-Gradients-On-Demand wandb_run: your_name wandb_runid: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # bc1558dc-b7da-4aad-bc5e-ea57281facde This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0001 | 1 | nan | | 0.0 | 0.0007 | 13 | nan | | 0.0 | 0.0015 | 26 | nan | | 0.0 | 0.0022 | 39 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
adammandic87/f62fa779-f2a3-4e37-ade5-d772103b1717
adammandic87
2025-01-31T07:35:29Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "license:mit", "region:us" ]
null
2025-01-31T07:18:45Z
--- library_name: peft license: mit base_model: HuggingFaceH4/zephyr-7b-beta tags: - axolotl - generated_from_trainer model-index: - name: f62fa779-f2a3-4e37-ade5-d772103b1717 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: HuggingFaceH4/zephyr-7b-beta bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ecd7cec85692169d_train_data.json ds_type: json format: custom path: /workspace/input_data/ecd7cec85692169d_train_data.json type: field_instruction: input_persona field_output: prompt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: adammandic87/f62fa779-f2a3-4e37-ade5-d772103b1717 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ecd7cec85692169d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 wandb_project: Birthday-SN56-34-Gradients-On-Demand wandb_run: your_name wandb_runid: 7bdc132e-e198-4b8f-bee8-34caa4c4cbb2 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f62fa779-f2a3-4e37-ade5-d772103b1717 This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | nan | | 0.2605 | 0.0007 | 13 | nan | | 0.0 | 0.0015 | 26 | nan | | 2.3517 | 0.0022 | 39 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
beast33/902a5079-22c8-4d77-a4f7-edade50bdf6d
beast33
2025-01-31T07:33:10Z
7
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-2b-it", "base_model:adapter:unsloth/gemma-2b-it", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T07:31:30Z
--- library_name: peft license: apache-2.0 base_model: unsloth/gemma-2b-it tags: - axolotl - generated_from_trainer model-index: - name: 902a5079-22c8-4d77-a4f7-edade50bdf6d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-2b-it bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 938e7b961a3fae54_train_data.json ds_type: json format: custom path: /workspace/input_data/938e7b961a3fae54_train_data.json type: field_input: choices field_instruction: full_prompt field_output: example format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: beast33/902a5079-22c8-4d77-a4f7-edade50bdf6d hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/938e7b961a3fae54_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 264a9c6b-5cbc-436b-8c95-a81e899b2353 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 264a9c6b-5cbc-436b-8c95-a81e899b2353 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 902a5079-22c8-4d77-a4f7-edade50bdf6d This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0005 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 21 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0007 | 1.0 | 21 | 0.0005 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
YMEA/Pathe-asr-LenaData-V0
YMEA
2025-01-31T07:32:38Z
25
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "bam", "dataset:YMEA/lena_audio", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-01-31T03:17:15Z
--- library_name: transformers language: - bam license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer datasets: - YMEA/lena_audio model-index: - name: Whisper Bambara-Bambara results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Bambara-Bambara This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the BambaraAsr dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
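The Whisper card above stops at the training hyperparameters. As a hedged illustration (not part of the original card), transcription with the fine-tuned checkpoint could go through the standard `transformers` ASR pipeline; the audio file name, device choice, and chunking settings are assumptions.

```python
# Hedged sketch: transcribe a Bambara audio file with the fine-tuned checkpoint.
# The file name, device choice, and chunk length are illustrative assumptions.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="YMEA/Pathe-asr-LenaData-V0",
    device=0 if torch.cuda.is_available() else -1,
)

result = asr("sample_bambara.wav", chunk_length_s=30)  # long-form chunking
print(result["text"])
```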
DINGOLANI/distilbert-ner-v2
DINGOLANI
2025-01-31T07:28:49Z
45
0
transformers
[ "transformers", "safetensors", "distilbert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-01-31T07:28:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
beast33/07ddc2fe-b25d-4f40-b00e-877485e5cad1
beast33
2025-01-31T07:28:47Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-135M-Instruct", "base_model:adapter:unsloth/SmolLM-135M-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T07:18:03Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-135M-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 07ddc2fe-b25d-4f40-b00e-877485e5cad1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM-135M-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ee4f88b0cc4f0b38_train_data.json ds_type: json format: custom path: /workspace/input_data/ee4f88b0cc4f0b38_train_data.json type: field_instruction: instruction field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: beast33/07ddc2fe-b25d-4f40-b00e-877485e5cad1 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/ee4f88b0cc4f0b38_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c4bd7646-0b33-4f1c-9b9b-c3c00a111dab wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c4bd7646-0b33-4f1c-9b9b-c3c00a111dab warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 07ddc2fe-b25d-4f40-b00e-877485e5cad1 This model is a fine-tuned version of [unsloth/SmolLM-135M-Instruct](https://huggingface.co/unsloth/SmolLM-135M-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 3.2338 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 5.4684 | 0.0655 | 200 | 3.2338 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
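Since this repository stores a LoRA adapter (PEFT) rather than full weights, inference requires attaching it to the base model named in the config above. Below is a minimal, untested sketch using the `peft` and `transformers` APIs; it assumes the adapter files in this repo are compatible with the installed `peft` version.

```python
# Minimal sketch: load the base model and attach this LoRA adapter with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/SmolLM-135M-Instruct"          # base model from the axolotl config
adapter_id = "beast33/07ddc2fe-b25d-4f40-b00e-877485e5cad1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # wraps the base with the adapter

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```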
shibajustfor/39166851-a1e5-424c-aa59-17f916585b99
shibajustfor
2025-01-31T07:28:33Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-7b-hf-flash", "base_model:adapter:NousResearch/CodeLlama-7b-hf-flash", "region:us" ]
null
2025-01-31T07:27:18Z
--- library_name: peft base_model: NousResearch/CodeLlama-7b-hf-flash tags: - axolotl - generated_from_trainer model-index: - name: 39166851-a1e5-424c-aa59-17f916585b99 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/CodeLlama-7b-hf-flash bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ef066a96964aba8a_train_data.json ds_type: json format: custom path: /workspace/input_data/ef066a96964aba8a_train_data.json type: field_instruction: title field_output: description format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: shibajustfor/39166851-a1e5-424c-aa59-17f916585b99 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: constant max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/ef066a96964aba8a_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7cf2646b-3084-4458-ab3f-4af8618983fd wandb_project: Birthday-SN56-38-Gradients-On-Demand wandb_run: your_name wandb_runid: 7cf2646b-3084-4458-ab3f-4af8618983fd warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 39166851-a1e5-424c-aa59-17f916585b99 This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-7b-hf-flash) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3818 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0040 | 1 | 2.4032 | | 8.1777 | 0.0519 | 13 | 1.7587 | | 6.5788 | 0.1038 | 26 | 1.4792 | | 5.7405 | 0.1557 | 39 | 1.3818 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
qingy2024/Qwen2.5-Coder-Draft-1.5B-Instruct
qingy2024
2025-01-31T07:27:53Z
17
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T05:56:33Z
--- library_name: transformers base_model: - Qwen/Qwen2.5-Coder-1.5B-Instruct --- # Qwen2.5-Coder-Draft-1.5B-Instruct A draft model suitable for speculative decoding with [Qwen2.5 Coder 32B Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct). It uses a vocabulary size of 152064, the same as Qwen2.5 Coder 32B Instruct, so it can be used in vLLM directly without any workaround.
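As a rough illustration of the intended use, the sketch below pairs this repo with the 32B model for speculative decoding in vLLM. The argument names follow older vLLM releases (`speculative_model`, `num_speculative_tokens`); newer releases expect a `speculative_config` dict instead, so treat this as an assumption to adapt to the installed version.

```python
# Hypothetical sketch: use this 1.5B model as the draft for speculative decoding.
# Engine-argument names vary across vLLM versions; adjust for your installation.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",
    speculative_model="qingy2024/Qwen2.5-Coder-Draft-1.5B-Instruct",
    num_speculative_tokens=5,
)
params = SamplingParams(temperature=0.2, max_tokens=256)
outputs = llm.generate(
    ["Write a Python function that checks if a string is a palindrome."], params
)
print(outputs[0].outputs[0].text)
```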
tensorwa/dp_mg_h1_01
tensorwa
2025-01-31T07:27:53Z
24
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:Peacoc/chatml_2test43", "base_model:finetune:Peacoc/chatml_2test43", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T07:25:23Z
--- base_model: - itorgov/model-1738289983 - Peacoc/chatml_2test43 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method. ### Models Merged The following models were included in the merge: * [itorgov/model-1738289983](https://huggingface.co/itorgov/model-1738289983) * [Peacoc/chatml_2test43](https://huggingface.co/Peacoc/chatml_2test43) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: itorgov/model-1738289983 layer_range: [0, 32] - model: Peacoc/chatml_2test43 layer_range: [0, 32] merge_method: slerp base_model: itorgov/model-1738289983 parameters: t: - filter: self_attn value: 0.98 - filter: mlp value: 0.99 - value: 1 dtype: bfloat16 ```
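For intuition only, the snippet below is a toy NumPy illustration of what spherical linear interpolation does to a pair of weight tensors; it is not the mergekit implementation, and the fallback-to-linear threshold is an arbitrary choice.

```python
# Toy illustration of SLERP between two weight tensors (not mergekit's code).
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherically interpolate between v0 and v1 with factor t in [0, 1]."""
    a, b = v0.ravel(), v1.ravel()
    cos_omega = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    cos_omega = np.clip(cos_omega, -1.0, 1.0)
    omega = np.arccos(cos_omega)
    if np.sin(omega) < 1e-6:          # nearly colinear: fall back to plain lerp
        return (1.0 - t) * v0 + t * v1
    s0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return s0 * v0 + s1 * v1

w_a = np.random.randn(4, 4)     # stand-ins for a layer from each source model
w_b = np.random.randn(4, 4)
merged = slerp(0.98, w_a, w_b)  # t=0.98 mirrors the self_attn filter value above
print(merged.shape)
```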
ancient41/19d65686-912c-4288-a5c8-82174fb2d56c
ancient41
2025-01-31T07:26:12Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-31T07:25:36Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 19d65686-912c-4288-a5c8-82174fb2d56c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-0.5B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 09bdae8113c1b1e3_train_data.json ds_type: json format: custom path: /workspace/input_data/09bdae8113c1b1e3_train_data.json type: field_instruction: inputs field_output: targets format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: ancient41/19d65686-912c-4288-a5c8-82174fb2d56c hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/09bdae8113c1b1e3_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b1e9a00c-aacb-4b8d-8b7b-ef64c7ac8d32 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b1e9a00c-aacb-4b8d-8b7b-ef64c7ac8d32 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 19d65686-912c-4288-a5c8-82174fb2d56c This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1501 | 0.4 | 1 | 0.9725 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Razvan1974/Jimi
Razvan1974
2025-01-31T07:25:08Z
22
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-31T07:04:43Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Jimi --- # Jimi <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Jimi` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Razvan1974/Jimi', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
ancient41/f6fd0cd1-e0a0-4ad8-bdd0-b39e0ac89ff6
ancient41
2025-01-31T07:24:23Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:dltjdgh0928/test_instruction", "base_model:adapter:dltjdgh0928/test_instruction", "license:apache-2.0", "region:us" ]
null
2025-01-31T05:15:50Z
--- library_name: peft license: apache-2.0 base_model: dltjdgh0928/test_instruction tags: - axolotl - generated_from_trainer model-index: - name: f6fd0cd1-e0a0-4ad8-bdd0-b39e0ac89ff6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: dltjdgh0928/test_instruction bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 445036244439be21_train_data.json ds_type: json format: custom path: /workspace/input_data/445036244439be21_train_data.json type: field_input: new_response field_instruction: prompt field_output: org_response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: ancient41/f6fd0cd1-e0a0-4ad8-bdd0-b39e0ac89ff6 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/445036244439be21_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 8d4144fc-9ff0-40f6-938c-971bb0af2635 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 8d4144fc-9ff0-40f6-938c-971bb0af2635 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f6fd0cd1-e0a0-4ad8-bdd0-b39e0ac89ff6 This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.6707 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.1204 | 0.0001 | 1 | 1.1771 | | 3.5574 | 0.0056 | 50 | 0.7825 | | 3.665 | 0.0112 | 100 | 0.7170 | | 3.6566 | 0.0169 | 150 | 0.6775 | | 3.6301 | 0.0225 | 200 | 0.6707 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
InsultedByMathematics/alpha_1e-2-beta_1e-2
InsultedByMathematics
2025-01-31T07:21:59Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T07:17:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
abaddon182/cdedae3a-3953-41ed-acb9-287e5ba6a04c
abaddon182
2025-01-31T07:21:42Z
8
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:jhflow/mistral7b-lora-multi-turn-v2", "base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2", "region:us" ]
null
2025-01-31T06:54:16Z
--- library_name: peft base_model: jhflow/mistral7b-lora-multi-turn-v2 tags: - axolotl - generated_from_trainer model-index: - name: cdedae3a-3953-41ed-acb9-287e5ba6a04c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: jhflow/mistral7b-lora-multi-turn-v2 bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - bd759e5c8d2b027f_train_data.json ds_type: json format: custom path: /workspace/input_data/bd759e5c8d2b027f_train_data.json type: field_input: answers field_instruction: topic field_output: text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: abaddon182/cdedae3a-3953-41ed-acb9-287e5ba6a04c hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/bd759e5c8d2b027f_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 3217968f-95e4-42f6-ab2b-878e655e1370 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 3217968f-95e4-42f6-ab2b-878e655e1370 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # cdedae3a-3953-41ed-acb9-287e5ba6a04c This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1080 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 5.9483 | 0.0108 | 1 | 2.2484 | | 5.1298 | 0.5420 | 50 | 1.2160 | | 2.4199 | 1.0840 | 100 | 1.1514 | | 2.3623 | 1.6260 | 150 | 1.1195 | | 1.2455 | 2.1680 | 200 | 1.1080 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
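If a standalone checkpoint is preferred over a base-plus-adapter setup, the LoRA weights can usually be folded into the base model. The sketch below uses `peft`'s `merge_and_unload`; the output directory name is hypothetical.

```python
# Sketch: fold this LoRA adapter into the base model and save a standalone copy.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "jhflow/mistral7b-lora-multi-turn-v2"   # base model from the axolotl config
adapter_id = "abaddon182/cdedae3a-3953-41ed-acb9-287e5ba6a04c"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)
merged = model.merge_and_unload()                  # bake the LoRA deltas into the weights

merged.save_pretrained("./merged-cdedae3a")        # hypothetical output path
AutoTokenizer.from_pretrained(base_id).save_pretrained("./merged-cdedae3a")
```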
InsultedByMathematics/alpha_1e-3-beta_1e-2
InsultedByMathematics
2025-01-31T07:21:04Z
13
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T07:16:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
beast33/c7d68f13-7fb1-4ded-a461-ea16244e38e8
beast33
2025-01-31T07:17:13Z
7
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:jhflow/mistral7b-lora-multi-turn-v2", "base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T06:46:18Z
--- library_name: peft base_model: jhflow/mistral7b-lora-multi-turn-v2 tags: - axolotl - generated_from_trainer model-index: - name: c7d68f13-7fb1-4ded-a461-ea16244e38e8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: jhflow/mistral7b-lora-multi-turn-v2 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - bd759e5c8d2b027f_train_data.json ds_type: json format: custom path: /workspace/input_data/bd759e5c8d2b027f_train_data.json type: field_input: answers field_instruction: topic field_output: text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: beast33/c7d68f13-7fb1-4ded-a461-ea16244e38e8 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/bd759e5c8d2b027f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 3217968f-95e4-42f6-ab2b-878e655e1370 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 3217968f-95e4-42f6-ab2b-878e655e1370 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # c7d68f13-7fb1-4ded-a461-ea16244e38e8 This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1200 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 185 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.0022 | 0.9986 | 184 | 1.1373 | | 4.9826 | 1.0041 | 185 | 1.1200 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
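The config above trains against an 8-bit base (`load_in_8bit: true`), so one memory-saving way to run the adapter is to quantize the base at load time as well. A minimal sketch, assuming `bitsandbytes`, `accelerate`, and a CUDA device are available:

```python
# Sketch: load the base model in 8-bit and attach this adapter on top.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "jhflow/mistral7b-lora-multi-turn-v2"
adapter_id = "beast33/c7d68f13-7fb1-4ded-a461-ea16244e38e8"

bnb_config = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

tokenizer = AutoTokenizer.from_pretrained(base_id)
prompt = "Summarize the topic of renewable energy in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```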
havinash-ai/bbe1101f-5c1b-444f-8b48-67bfd058899b
havinash-ai
2025-01-31T07:11:29Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B", "base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B", "license:llama3", "region:us" ]
null
2025-01-31T07:01:55Z
--- library_name: peft license: llama3 base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B tags: - axolotl - generated_from_trainer model-index: - name: bbe1101f-5c1b-444f-8b48-67bfd058899b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 423760bfd2fbfffa_train_data.json ds_type: json format: custom path: /workspace/input_data/423760bfd2fbfffa_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: havinash-ai/bbe1101f-5c1b-444f-8b48-67bfd058899b hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/423760bfd2fbfffa_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 84585b20-d892-48c7-a995-1238079422b0 wandb_project: Mine-SN56-2-Gradients-On-Demand wandb_run: your_name wandb_runid: 84585b20-d892-48c7-a995-1238079422b0 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # bbe1101f-5c1b-444f-8b48-67bfd058899b This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.7416 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 2.2211 | | 2.1075 | 0.0007 | 13 | 1.8652 | | 2.0234 | 0.0013 | 26 | 1.7669 | | 1.9285 | 0.0020 | 39 | 1.7416 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso01/f6c2b613-3b40-4dc1-8332-b21dbc57874f
lesso01
2025-01-31T07:08:38Z
7
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:berkeley-nest/Starling-LM-7B-alpha", "base_model:adapter:berkeley-nest/Starling-LM-7B-alpha", "license:apache-2.0", "region:us" ]
null
2025-01-31T06:18:34Z
--- library_name: peft license: apache-2.0 base_model: berkeley-nest/Starling-LM-7B-alpha tags: - axolotl - generated_from_trainer model-index: - name: f6c2b613-3b40-4dc1-8332-b21dbc57874f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: berkeley-nest/Starling-LM-7B-alpha bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - dffa8fc58ce66dc6_train_data.json ds_type: json format: custom path: /workspace/input_data/dffa8fc58ce66dc6_train_data.json type: field_instruction: title field_output: text format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso01/f6c2b613-3b40-4dc1-8332-b21dbc57874f hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/dffa8fc58ce66dc6_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a wandb_project: new-01-29 wandb_run: your_name wandb_runid: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # f6c2b613-3b40-4dc1-8332-b21dbc57874f This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0972 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso01/8deacef0-d351-4833-996a-a52abe45292d
lesso01
2025-01-31T07:07:39Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:dltjdgh0928/test_instruction", "base_model:adapter:dltjdgh0928/test_instruction", "license:apache-2.0", "region:us" ]
null
2025-01-31T05:14:42Z
--- library_name: peft license: apache-2.0 base_model: dltjdgh0928/test_instruction tags: - axolotl - generated_from_trainer model-index: - name: 8deacef0-d351-4833-996a-a52abe45292d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: dltjdgh0928/test_instruction bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 445036244439be21_train_data.json ds_type: json format: custom path: /workspace/input_data/445036244439be21_train_data.json type: field_input: new_response field_instruction: prompt field_output: org_response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso01/8deacef0-d351-4833-996a-a52abe45292d hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/445036244439be21_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 8d4144fc-9ff0-40f6-938c-971bb0af2635 wandb_project: new-01-29 wandb_run: your_name wandb_runid: 8d4144fc-9ff0-40f6-938c-971bb0af2635 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 8deacef0-d351-4833-996a-a52abe45292d This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0056 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Sourjayon/DeepSeek-R1-ForumNXT
Sourjayon
2025-01-31T07:04:36Z
34
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T06:59:18Z
--- base_model: unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Sourjayon - **License:** apache-2.0 - **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mrcuddle/Dark-Hermes3-Llama3.2-3B
mrcuddle
2025-01-31T07:03:58Z
515
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "Llama-3", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "roleplaying", "chat", "conversational", "en", "base_model:NousResearch/Hermes-3-Llama-3.2-3B", "base_model:finetune:NousResearch/Hermes-3-Llama-3.2-3B", "license:llama3", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-08T12:19:09Z
--- language: - en license: llama3 tags: - Llama-3 - instruct - finetune - chatml - gpt4 - synthetic data - distillation - function calling - json mode - axolotl - roleplaying - chat base_model: NousResearch/Hermes-3-Llama-3.2-3B widget: - example_title: Hermes 3 messages: - role: system content: >- You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: >- Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. model-index: - name: mrcuddle/Dark-Hermes3-Llama3.2-3B results: - task: type: text-generation name: Text Generation dataset: type: lambada_openai name: LAMBADA OpenAI config: default split: test metrics: - type: accuracy value: 0.6837 name: Accuracy config: none args: n-shot: 0 stderr: 0.0065 - type: perplexity value: 3.7577 name: Perplexity config: none args: n-shot: 0 stderr: 0.0933 library_name: transformers --- # Model Card "Dark-Hermes3-Llama3.2-3B" is a fine-tuned version of NousResearch's Hermes-3-Llama-3.2-3B. ## Training Details Base Model: - Hermes-3-Llama-3.2-3B by NousResearch Fine-Tuning Datasets: - Synthetic-Dark-RP - Luminous_Opus - Synthetic-RP Tools Used: - AutoTrain - Axolotl
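Hermes-style models are typically prompted through their chat template (ChatML in this family). A minimal, untested sketch with `transformers`, reusing the system/user roles shown in the widget example above; `device_map="auto"` assumes `accelerate` is installed.

```python
# Sketch: chat-style generation via the tokenizer's chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrcuddle/Dark-Hermes3-Llama3.2-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about gradient descent."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```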
nhung03/11bc8626-8b9a-4ebf-af18-ecf7e1aa88d9
nhung03
2025-01-31T07:00:38Z
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0", "base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0", "license:llama3", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T06:21:19Z
--- library_name: peft license: llama3 base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0 tags: - axolotl - generated_from_trainer model-index: - name: 11bc8626-8b9a-4ebf-af18-ecf7e1aa88d9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 594acf1a1ccb4752_train_data.json ds_type: json format: custom path: /workspace/input_data/594acf1a1ccb4752_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhung03/11bc8626-8b9a-4ebf-af18-ecf7e1aa88d9 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/594acf1a1ccb4752_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: aabd8aec-07d3-4064-82eb-acdd95e34794 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: aabd8aec-07d3-4064-82eb-acdd95e34794 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 11bc8626-8b9a-4ebf-af18-ecf7e1aa88d9 This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3367 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.3598 | 0.4673 | 200 | 0.3367 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
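This repository publishes a LoRA adapter, not full model weights, so inference requires attaching it to its base model. A minimal sketch, assuming the `peft` and `transformers` libraries; the repo ids come from the card, while the dtype and device settings are illustrative.

```python
# Hedged sketch (not from the card): attach this LoRA adapter to its base model for inference.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0"
adapter_id = "nhung03/11bc8626-8b9a-4ebf-af18-ecf7e1aa88d9"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"  # illustrative settings
)
model = PeftModel.from_pretrained(base, adapter_id)  # loads the LoRA weights on top of the base

inputs = tokenizer("Explain what a LoRA adapter is in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```

The same pattern applies to the other axolotl/PEFT adapter cards in this section; only `base_id` and `adapter_id` change.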
mradermacher/DPO_normalchosen_afterSFT_qwen-GGUF
mradermacher
2025-01-31T07:00:16Z
236
1
transformers
[ "transformers", "gguf", "en", "base_model:Nisk36/DPO_normalchosen_afterSFT_qwen", "base_model:quantized:Nisk36/DPO_normalchosen_afterSFT_qwen", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-31T04:53:39Z
--- base_model: Nisk36/DPO_normalchosen_afterSFT_qwen language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/Nisk36/DPO_normalchosen_afterSFT_qwen <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/DPO_normalchosen_afterSFT_qwen-GGUF/resolve/main/DPO_normalchosen_afterSFT_qwen.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/DPO_normalchosen_afterSFT_qwen-GGUF/resolve/main/DPO_normalchosen_afterSFT_qwen.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/DPO_normalchosen_afterSFT_qwen-GGUF/resolve/main/DPO_normalchosen_afterSFT_qwen.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/DPO_normalchosen_afterSFT_qwen-GGUF/resolve/main/DPO_normalchosen_afterSFT_qwen.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/DPO_normalchosen_afterSFT_qwen-GGUF/resolve/main/DPO_normalchosen_afterSFT_qwen.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/DPO_normalchosen_afterSFT_qwen-GGUF/resolve/main/DPO_normalchosen_afterSFT_qwen.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/DPO_normalchosen_afterSFT_qwen-GGUF/resolve/main/DPO_normalchosen_afterSFT_qwen.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/DPO_normalchosen_afterSFT_qwen-GGUF/resolve/main/DPO_normalchosen_afterSFT_qwen.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/DPO_normalchosen_afterSFT_qwen-GGUF/resolve/main/DPO_normalchosen_afterSFT_qwen.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/DPO_normalchosen_afterSFT_qwen-GGUF/resolve/main/DPO_normalchosen_afterSFT_qwen.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/DPO_normalchosen_afterSFT_qwen-GGUF/resolve/main/DPO_normalchosen_afterSFT_qwen.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/DPO_normalchosen_afterSFT_qwen-GGUF/resolve/main/DPO_normalchosen_afterSFT_qwen.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
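## Example Usage (sketch)

The Usage section above points to TheBloke's READMEs for general GGUF handling; as a concrete starting point, here is a hedged sketch using `huggingface_hub` and `llama-cpp-python`. The filename is taken from the quant table above; the context size and prompt are illustrative.

```python
# Hedged sketch (assumes llama-cpp-python and huggingface_hub are installed):
# download one of the quant files listed in the table above and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/DPO_normalchosen_afterSFT_qwen-GGUF",
    filename="DPO_normalchosen_afterSFT_qwen.Q4_K_M.gguf",  # "fast, recommended" per the table
)
llm = Llama(model_path=path, n_ctx=4096)  # context size is an illustrative choice
out = llm("Q: What is a GGUF file?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```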
lesso13/bfc0776a-c2df-4534-8c6e-7b2a808b5e2c
lesso13
2025-01-31T06:58:41Z
6
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "base_model:adapter:EleutherAI/pythia-1b", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T06:47:59Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-1b tags: - axolotl - generated_from_trainer model-index: - name: bfc0776a-c2df-4534-8c6e-7b2a808b5e2c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-1b bf16: auto chat_template: llama3 datasets: - data_files: - 0344751d9f880319_train_data.json ds_type: json format: custom path: /workspace/input_data/0344751d9f880319_train_data.json type: field_input: phonemes field_instruction: text field_output: text_description format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso13/bfc0776a-c2df-4534-8c6e-7b2a808b5e2c hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/0344751d9f880319_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f2cede56-40e6-4279-be11-96fdf946d3ea wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: f2cede56-40e6-4279-be11-96fdf946d3ea warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # bfc0776a-c2df-4534-8c6e-7b2a808b5e2c This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0621 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.9268 | 0.1427 | 200 | 1.0621 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Anna567/clf-v13
Anna567
2025-01-31T06:58:31Z
160
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-12-11T17:22:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
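Since the "How to Get Started" section above is still a placeholder, a hedged starting point based only on the repository metadata (a BERT checkpoint tagged for text classification) is sketched below; the label set, preprocessing, and intended inputs are not documented in the card.

```python
# Hedged sketch based only on the repo metadata (BERT, text-classification);
# the label names and intended inputs are unknown, so treat the output as illustrative.
from transformers import pipeline

clf = pipeline("text-classification", model="Anna567/clf-v13")
print(clf("This is an example sentence."))  # e.g. [{'label': ..., 'score': ...}]
```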
roleplaiapp/Qwen2.5-7B-olm-v1.4-i1-Q3_K_M-GGUF
roleplaiapp
2025-01-31T06:58:22Z
5
0
transformers
[ "transformers", "gguf", "3-bit", "Q3_K_M", "llama-cpp", "olm", "qwen25", "text-generation", "v14", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-01-31T06:58:05Z
--- library_name: transformers pipeline_tag: text-generation tags: - 3-bit - Q3_K_M - gguf - llama-cpp - olm - qwen25 - text-generation - v14 --- # roleplaiapp/Qwen2.5-7B-olm-v1.4-i1-Q3_K_M-GGUF **Repo:** `roleplaiapp/Qwen2.5-7B-olm-v1.4-i1-Q3_K_M-GGUF` **Original Model:** `Qwen2.5-7B-olm-v1.4-i1` **Quantized File:** `Qwen2.5-7B-olm-v1.4.i1-Q3_K_M.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q3_K_M` ## Overview This is a GGUF Q3_K_M quantized version of Qwen2.5-7B-olm-v1.4-i1 ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
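## Example Usage (sketch)

A hedged sketch of running this quant with `llama-cpp-python`, assuming a recent version with Hugging Face Hub support (`Llama.from_pretrained`). The repo id and quantized filename come from the card; the prompt and token budget are illustrative.

```python
# Hedged sketch: pull the quantized file named above straight from this repo and run a prompt.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="roleplaiapp/Qwen2.5-7B-olm-v1.4-i1-Q3_K_M-GGUF",
    filename="Qwen2.5-7B-olm-v1.4.i1-Q3_K_M.gguf",
)
print(llm("Q: When is a 3-bit quant a reasonable trade-off?\nA:", max_tokens=96)["choices"][0]["text"])
```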
RomainDarous/directTwoEpoch_additivePooling_randomInit_mistranslationModel
RomainDarous
2025-01-31T06:57:19Z
32
0
sentence-transformers
[ "sentence-transformers", "safetensors", "distilbert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:4460010", "loss:CoSENTLoss", "dataset:RomainDarous/corrupted_os_by_language", "arxiv:1908.10084", "base_model:RomainDarous/directOneEpoch_additivePooling_randomInit_mistranslationModel", "base_model:finetune:RomainDarous/directOneEpoch_additivePooling_randomInit_mistranslationModel", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-01-31T06:54:05Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:4460010 - loss:CoSENTLoss base_model: RomainDarous/mistranslation_model widget: - source_sentence: Malformed target specific variable definition sentences: - Hedefe özgü değişken tanımı bozuk - Kan alle data in die gids lees - "слава Украине! героям слава!\uFEFF" - source_sentence: Can't write an inode bitmap sentences: - Skontrolujte stav aktualizácií alebo to skúste znova neskôr. - Malsukcesis skribi i nodan bitmapon - Zastępuje wersję GL obsługiwaną przez sterownik - source_sentence: Optimize soft proofing color transformations sentences: - 'arkadaslar biz artik her an kirmizi kart yiyecek,bencil,pas yapamayan,isabetsiz orta yapani istemiyoruz. sozde efsaneniz bu sezon Besiktasa en cok zarar verenlerden biriydi. kendini dusunmeden once Besiktasi dusunecek adam lazim bize. o yuzden #GoHomeQuaresma' - Yav bizim dedikodusunu yaptığımız insanın bile bi vizyonu var. Senin hakkında neden oturup konuşalım? - Ik ben een transgender. - source_sentence: 'Pass 1: Checking @is, @bs, and sizes' sentences: - Bu adam cidden kurabiye gibi ben bunu çayın yanında yerim - sagnat. errada. invisible. justificació. idioma - Wilt u echt de primaire sleutel verplaatsen? (j N) - source_sentence: Search for matching log entries sentences: - quem te lembra? caralho tô assustada aqui kkkkk - sendotasunik gabeko\ egoera bistaratuko den ala ez adierazten du - En aquest cas, hem d'incloure les imatges del contenidor )sr iov per a càrregues de treball de telco (per exemple, com a referència, es podrien obtenir des de valors de helm chart) datasets: - RomainDarous/corrupted_os_by_language pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine model-index: - name: SentenceTransformer based on RomainDarous/mistranslation_model results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts eval type: sts-eval metrics: - type: pearson_cosine value: 0.9710609371133431 name: Pearson Cosine - type: spearman_cosine value: 0.8649014548937625 name: Spearman Cosine - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test type: sts-test metrics: - type: pearson_cosine value: 0.9711789729084428 name: Pearson Cosine - type: spearman_cosine value: 0.8649041654024111 name: Spearman Cosine --- # SentenceTransformer based on RomainDarous/mistranslation_model This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [RomainDarous/mistranslation_model](https://huggingface.co/RomainDarous/mistranslation_model) on the [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) dataset. It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [RomainDarous/mistranslation_model](https://huggingface.co/RomainDarous/mistranslation_model) <!-- at revision c4195c72cbbd0069325cbd7e86ed2f3ec2b2cbd9 --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 512 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): MultiHeadGeneralizedPooling( (P): ModuleList( (0-7): 8 x Linear(in_features=768, out_features=96, bias=True) ) (W1): ModuleList( (0-7): 8 x Linear(in_features=96, out_features=384, bias=True) ) (W2): ModuleList( (0-7): 8 x Linear(in_features=384, out_features=96, bias=True) ) ) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("RomainDarous/directTwoEpoch_additivePooling_randomInit_mistranslationModel") # Run inference sentences = [ 'Search for matching log entries', 'quem te lembra? caralho tô assustada aqui kkkkk', 'sendotasunik gabeko\\ egoera bistaratuko den ala ez adierazten du', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 512] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Datasets: `sts-eval` and `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | sts-eval | sts-test | |:--------------------|:-----------|:-----------| | pearson_cosine | 0.9711 | 0.9712 | | **spearman_cosine** | **0.8649** | **0.8649** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### corrupted_open_os_by_language * Dataset: [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) at [9d25780](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language/tree/9d25780e2032b1e8f06af6a4ff55124d7a930c3c) * Size: 4,460,010 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 6 tokens</li><li>mean: 18.49 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 30.77 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~50.60%</li><li>1: ~49.40%</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------|:---------------| | <code>Check spelling. Print the document. Show completion window. General. Show help</code> | <code>Kontrolli õigekirja. присоединяюсь. </code> | <code>0</code> | | <code>EXIF not supported for this file format.</code> | <code>Šiam failo formatui EXIF nepalaikomas.</code> | <code>1</code> | | <code>This package includes the documentation for texlive everyhook</code> | <code>Paket ini menyertakan dokumentasi untuk texlive everyhook</code> | <code>1</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Evaluation Dataset #### corrupted_open_os_by_language * Dataset: [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) at [9d25780](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language/tree/9d25780e2032b1e8f06af6a4ff55124d7a930c3c) * Size: 4,460,010 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 5 tokens</li><li>mean: 17.92 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 31.1 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~50.60%</li><li>1: ~49.40%</li></ul> | * Samples: | sentence1 | sentence2 | score | 
|:----------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Could not identify the current seat.</code> | <code> 天天花着男人的钱还这这创造新词汇男权你可真牛批,你也就这一出了一问男权,就说是我是吧,到现在我也没听到你给我们讲的男权,你也就是在网上喷喷,现实走道都不敢探头自卑,你现实要把你女权的劲拿出来总低啥头,您老应该去国家教育局把男权加上是吧,你们女权天天说自己生活不好没地位,给你们地位了你们能干啥?用你们的女权打到全世界男性是吧,能相出男权这一词您老也是人才呀,是不是庆幸自己是个女的,活在自己想想的世界里不觉得孤单吗,假象有男权是吧,自己假象和男权还说自己不是田园女权,田园女权能连自己都骂说自己妈是驴爸是大鼎的也是奇葩呀,那我们国家大肆宣扬过你们这么田园女权吗,国家要的是女性人群自主自理,你们可好看看你们女权干的啥事,给你们女权地位高了,看看你们女权干的事n绿地集团高管怎么都不说呀,人家可是有钱有地位,也不是我们说三从四德洗衣做饭你们女权会吗?,那我问问你们女权干过啥惊天大事,还甩锅给孔子,还封建社会,那我问问你们女权在福利面前为啥说自己是女性呀不是社会主义社会吗不应该男女平等吗,天天自己也不知道是不是抱个手机天天欧巴欧巴,你家那位要是不陪你看一会就会问你是不是不爱我了是吧大姐,您老也就赚这白菜钱操心国家事,中国五千年的历史被您老一句否决,还嘲讽人家日本女性,好意思说自己不是女权,三从四德流传这么久到您这变成日本文化了,我就想问问男权您老是怎么想的,那你问孔子老人家呗为什么女人要三从四德,我说的是女权你干嘛自己对号入座,连中华人民传承的东西都不认跟我这谈男权,还男权您老给我举个例子呗,让我们男权听听都是h啥,这些不都是你们女权的标准吗?,还男权,您老醒醒吧这里是现实,不是你的公主世界,总觉得自己多么多么重要,地球没你是不能转了还是人类要灭亡呀,我真的想问一句你给我找一条男权的新闻,咋了我们男人不能提女权呗你老授权了呗,那我们谈论田园女权你老对号入座干嘛,天天过节要礼物,还嫌弃自己男朋友没有钱,我寻思你找个有钱人包养你呗,对了有钱人怎么可能看上你这种女权的呢,还要孩子跟女方姓我也没看见你没跟你妈姓呀,年年过节男人给你们送礼物你们女人给男人送过礼物吗?,一问我不是陪着他吗我对他说我爱你了这不是最好的礼物吗?,男人只要不送礼物就是不爱你们了呗,人家国际女权讲的男人能做的我们女人也能做,田园女权男人能做的我们女人为啥要做,还男权我笑了,以前结婚几头牛换个衣服原装的,现在几十万彩...</code> | <code>0</code> | | <code>Undoing Date and Time Adjustment</code> | <code>正在取消日期和时间调整</code> | <code>1</code> | | <code>Dependency package for gsl_2_6 gnu hpc</code> | <code>Pacotes de desenvolvimento do KDE</code> | <code>1</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - 
`log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | corrupted open os by language loss | sts-eval_spearman_cosine | sts-test_spearman_cosine | |:-----:|:-----:|:-------------:|:----------------------------------:|:------------------------:|:------------------------:| | 1.0 | 55751 | 0.8489 | 0.6726 | 0.8649 | 0.8649 | ### Framework Versions - Python: 3.11.10 - Sentence Transformers: 3.3.1 - Transformers: 4.48.1 - PyTorch: 2.3.1+cu121 - Accelerate: 1.3.0 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on 
Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CoSENTLoss ```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
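As a companion to the Training Details above, here is a hedged sketch of how a run like this could be reproduced with the Sentence Transformers v3 trainer API. The base model, dataset, CoSENTLoss, batch size 64, single epoch, and warmup ratio 0.1 come from the card; the split name, output directory, and column handling are assumptions.

```python
# Hedged sketch of a CoSENTLoss fine-tuning run mirroring the card's hyperparameters.
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer("RomainDarous/mistranslation_model")  # base model per the card
dataset = load_dataset("RomainDarous/corrupted_os_by_language", split="train")  # sentence1, sentence2, score

loss = CoSENTLoss(model)  # scale=20.0 and pairwise cosine similarity by default, as in the card
args = SentenceTransformerTrainingArguments(
    output_dir="outputs",           # illustrative
    num_train_epochs=1,
    per_device_train_batch_size=64,
    warmup_ratio=0.1,
)
trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=dataset, loss=loss)
trainer.train()
```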
earnxus/8b01daf6-a520-4c39-9771-116810237924
earnxus
2025-01-31T06:56:10Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:dltjdgh0928/test_instruction", "base_model:adapter:dltjdgh0928/test_instruction", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T05:13:52Z
--- library_name: peft license: apache-2.0 base_model: dltjdgh0928/test_instruction tags: - axolotl - generated_from_trainer model-index: - name: 8b01daf6-a520-4c39-9771-116810237924 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: dltjdgh0928/test_instruction bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 445036244439be21_train_data.json ds_type: json format: custom path: /workspace/input_data/445036244439be21_train_data.json type: field_input: new_response field_instruction: prompt field_output: org_response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 2 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: true hub_model_id: earnxus/8b01daf6-a520-4c39-9771-116810237924 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/445036244439be21_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: 8d4144fc-9ff0-40f6-938c-971bb0af2635 wandb_project: Gradients-On-Nine wandb_run: your_name wandb_runid: 8d4144fc-9ff0-40f6-938c-971bb0af2635 warmup_steps: 5 weight_decay: 0.01 xformers_attention: null ``` </details><br> # 8b01daf6-a520-4c39-9771-116810237924 This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7361 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.4449 | 0.0056 | 200 | 0.7361 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
fifxus/f3376e55-66ff-426c-b6a7-057c949035ba
fifxus
2025-01-31T06:54:45Z
8
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "base_model:adapter:EleutherAI/pythia-1b", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T06:47:35Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-1b tags: - axolotl - generated_from_trainer model-index: - name: f3376e55-66ff-426c-b6a7-057c949035ba results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-1b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 0344751d9f880319_train_data.json ds_type: json format: custom path: /workspace/input_data/0344751d9f880319_train_data.json type: field_input: phonemes field_instruction: text field_output: text_description format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 2 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: true hub_model_id: fifxus/f3376e55-66ff-426c-b6a7-057c949035ba hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/0344751d9f880319_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: f2cede56-40e6-4279-be11-96fdf946d3ea wandb_project: Gradients-On-10 wandb_run: your_name wandb_runid: f2cede56-40e6-4279-be11-96fdf946d3ea warmup_steps: 5 weight_decay: 0.01 xformers_attention: null ``` </details><br> # f3376e55-66ff-426c-b6a7-057c949035ba This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0234 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.3178 | 0.1427 | 200 | 1.0234 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
osllmai-community/DeepSeek-R1
osllmai-community
2025-01-31T06:53:50Z
31
0
transformers
[ "transformers", "safetensors", "deepseek_v3", "text-generation", "conversational", "custom_code", "license:mit", "autotrain_compatible", "fp8", "region:us" ]
text-generation
2025-01-24T05:25:00Z
--- license: mit library_name: transformers --- **osllm.ai Models Highlights Program** **We believe there's no need to pay a token if you have a GPU on your computer.** Highlighting new and noteworthy models from the community. Join the conversation on Discord. <p align="center"> <a href="https://osllm.ai">Official Website</a> &bull; <a href="https://docs.osllm.ai/index.html">Documentation</a> &bull; <a href="https://discord.gg/2fftQauwDD">Discord</a> </p> <p align="center"> <b>NEW:</b> <a href="https://docs.google.com/forms/d/1CQXJvxLUqLBSXnjqQmRpOyZqD6nrKubLz2WTcIJ37fU/prefill">Subscribe to our mailing list</a> for updates and news! </p> Email: [email protected] **Disclaimers** [Osllm.ai](https://osllm.ai/) is not the creator, originator, or owner of any model featured in the Community Model Program. Each Community Model is created and provided by third parties. [Osllm.ai](https://osllm.ai/) does not endorse, support, represent, or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate, inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated it. [Osllm.ai](https://osllm.ai/) may not monitor or control the Community Models and cannot take responsibility for them. [Osllm.ai](https://osllm.ai/) disclaims all warranties or guarantees about the accuracy, reliability, or benefits of the Community Models. Furthermore, [Osllm.ai](https://osllm.ai/) disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted, error-free, virus-free, or that any issues will be corrected. You are solely responsible for any damage resulting from your use of or access to the Community Models, downloading of any Community Model, or use of any other Community Model provided by or through [Osllm.ai](https://osllm.ai/).
lesso09/90bf1f61-a725-4670-8b6b-8337146651f1
lesso09
2025-01-31T06:52:40Z
6
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "base_model:adapter:EleutherAI/pythia-1b", "license:apache-2.0", "region:us" ]
null
2025-01-31T06:47:40Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-1b tags: - axolotl - generated_from_trainer model-index: - name: 90bf1f61-a725-4670-8b6b-8337146651f1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-1b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 0344751d9f880319_train_data.json ds_type: json format: custom path: /workspace/input_data/0344751d9f880319_train_data.json type: field_input: phonemes field_instruction: text field_output: text_description format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso09/90bf1f61-a725-4670-8b6b-8337146651f1 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/0344751d9f880319_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f2cede56-40e6-4279-be11-96fdf946d3ea wandb_project: new-01-29 wandb_run: your_name wandb_runid: f2cede56-40e6-4279-be11-96fdf946d3ea warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 90bf1f61-a725-4670-8b6b-8337146651f1 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1393 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.2938 | 0.1427 | 200 | 1.1393 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
brixeus/ea7a7b26-d0a3-42b6-95a2-6c61e62978e7
brixeus
2025-01-31T06:51:56Z
7
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "base_model:adapter:EleutherAI/pythia-1b", "license:apache-2.0", "region:us" ]
null
2025-01-31T06:47:14Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-1b tags: - axolotl - generated_from_trainer model-index: - name: ea7a7b26-d0a3-42b6-95a2-6c61e62978e7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-1b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 0344751d9f880319_train_data.json ds_type: json format: custom path: /workspace/input_data/0344751d9f880319_train_data.json type: field_input: phonemes field_instruction: text field_output: text_description format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: brixeus/ea7a7b26-d0a3-42b6-95a2-6c61e62978e7 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/0344751d9f880319_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: f2cede56-40e6-4279-be11-96fdf946d3ea wandb_project: Gradients-On-Three wandb_run: your_name wandb_runid: f2cede56-40e6-4279-be11-96fdf946d3ea warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # ea7a7b26-d0a3-42b6-95a2-6c61e62978e7 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0180 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0029 | 1 | 3.6901 | | 13.5575 | 0.0257 | 9 | 3.1329 | | 8.2015 | 0.0514 | 18 | 1.8524 | | 5.8432 | 0.0770 | 27 | 1.4258 | | 4.913 | 0.1027 | 36 | 1.2349 | | 4.8178 | 0.1284 | 45 | 1.1407 | | 4.4678 | 0.1541 | 54 | 1.0925 | | 4.2954 | 0.1797 | 63 | 1.0601 | | 4.1314 | 0.2054 | 72 | 1.0354 | | 4.2106 | 0.2311 | 81 | 1.0244 | | 3.9968 | 0.2568 | 90 | 1.0192 | | 3.9037 | 0.2825 | 99 | 1.0180 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
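For deployment without the `peft` runtime, this adapter can be folded into its pythia-1b base. A hedged sketch follows; the repo ids and the `<|endoftext|>` padding token come from the card and its axolotl config, while the output path is illustrative.

```python
# Hedged sketch: merge this LoRA adapter into EleutherAI/pythia-1b and save a plain checkpoint.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/pythia-1b"
adapter_id = "brixeus/ea7a7b26-d0a3-42b6-95a2-6c61e62978e7"

tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # the axolotl config above pads with <|endoftext|>
base = AutoModelForCausalLM.from_pretrained(base_id)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()  # fold LoRA weights in

merged.save_pretrained("pythia-1b-merged")      # illustrative output directory
tokenizer.save_pretrained("pythia-1b-merged")
```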
denbeo/b5d8c7f0-b388-40bc-b2b7-140633f893be
denbeo
2025-01-31T06:50:47Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0", "base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0", "license:llama3", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T06:21:10Z
--- library_name: peft license: llama3 base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0 tags: - axolotl - generated_from_trainer model-index: - name: b5d8c7f0-b388-40bc-b2b7-140633f893be results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 594acf1a1ccb4752_train_data.json ds_type: json format: custom path: /workspace/input_data/594acf1a1ccb4752_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: denbeo/b5d8c7f0-b388-40bc-b2b7-140633f893be hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/594acf1a1ccb4752_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: aabd8aec-07d3-4064-82eb-acdd95e34794 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: aabd8aec-07d3-4064-82eb-acdd95e34794 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # b5d8c7f0-b388-40bc-b2b7-140633f893be This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3374 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.3602 | 0.4673 | 200 | 0.3374 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-f16-GGUF
roleplaiapp
2025-01-31T06:49:56Z
14
0
transformers
[ "transformers", "gguf", "deepsauerhuatuoskywork", "f16", "llama", "llama-cpp", "text-generation", "endpoints_compatible", "region:us" ]
text-generation
2025-01-31T06:48:57Z
--- library_name: transformers pipeline_tag: text-generation tags: - deepsauerhuatuoskywork - f16 - gguf - llama - llama-cpp - text-generation --- # roleplaiapp/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-f16-GGUF **Repo:** `roleplaiapp/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-f16-GGUF` **Original Model:** `DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B` **Quantized File:** `DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B.f16.gguf` **Quantization:** `GGUF` **Quantization Method:** `f16` ## Overview This is a GGUF f16 quantized version of DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
nttx/d49a7daf-b02a-4a9f-b257-8e0187b4cbe1
nttx
2025-01-31T06:49:50Z
7
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "base_model:adapter:EleutherAI/pythia-1b", "license:apache-2.0", "region:us" ]
null
2025-01-31T06:47:10Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-1b tags: - axolotl - generated_from_trainer model-index: - name: d49a7daf-b02a-4a9f-b257-8e0187b4cbe1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-1b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 0344751d9f880319_train_data.json ds_type: json format: custom path: /workspace/input_data/0344751d9f880319_train_data.json type: field_input: phonemes field_instruction: text field_output: text_description format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: nttx/d49a7daf-b02a-4a9f-b257-8e0187b4cbe1 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/0344751d9f880319_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f2cede56-40e6-4279-be11-96fdf946d3ea wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: f2cede56-40e6-4279-be11-96fdf946d3ea warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # d49a7daf-b02a-4a9f-b257-8e0187b4cbe1 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9736 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.7469 | 0.2854 | 200 | 0.9736 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
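Adapters like the one above are often merged back into their base model so the result can be used without PEFT at inference time. A minimal sketch under that assumption, using the ids from the card:

```python
# Sketch: merge the LoRA adapter into its base model and save a standalone checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-1b"
adapter_id = "nttx/d49a7daf-b02a-4a9f-b257-8e0187b4cbe1"

base = AutoModelForCausalLM.from_pretrained(base_id)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()  # folds the LoRA deltas into the weights

merged.save_pretrained("pythia-1b-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("pythia-1b-merged")
```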
trenden/2546aa18-db21-4b7a-a7a8-88a643bf74cb
trenden
2025-01-31T06:48:57Z
8
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "base_model:adapter:EleutherAI/pythia-1b", "license:apache-2.0", "region:us" ]
null
2025-01-31T06:48:10Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-1b tags: - axolotl - generated_from_trainer model-index: - name: 2546aa18-db21-4b7a-a7a8-88a643bf74cb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-1b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 0344751d9f880319_train_data.json ds_type: json format: custom path: /workspace/input_data/0344751d9f880319_train_data.json type: field_input: phonemes field_instruction: text field_output: text_description format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: trenden/2546aa18-db21-4b7a-a7a8-88a643bf74cb hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/0344751d9f880319_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f2cede56-40e6-4279-be11-96fdf946d3ea wandb_project: Birthday-SN56-26-Gradients-On-Demand wandb_run: your_name wandb_runid: f2cede56-40e6-4279-be11-96fdf946d3ea warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 2546aa18-db21-4b7a-a7a8-88a643bf74cb This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2979 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0007 | 1 | 3.7691 | | 13.5698 | 0.0093 | 13 | 2.0543 | | 7.7515 | 0.0186 | 26 | 1.4419 | | 6.1164 | 0.0278 | 39 | 1.2979 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
prxy5604/32c3b3db-88ee-43ae-b6dc-718b03f8ac5e
prxy5604
2025-01-31T06:48:38Z
8
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/mistral-7b-instruct-v0.3", "base_model:adapter:unsloth/mistral-7b-instruct-v0.3", "license:apache-2.0", "region:us" ]
null
2025-01-31T06:19:42Z
--- library_name: peft license: apache-2.0 base_model: unsloth/mistral-7b-instruct-v0.3 tags: - axolotl - generated_from_trainer model-index: - name: 32c3b3db-88ee-43ae-b6dc-718b03f8ac5e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/mistral-7b-instruct-v0.3 bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 09742d408b3e40b8_train_data.json ds_type: json format: custom path: /workspace/input_data/09742d408b3e40b8_train_data.json type: field_instruction: question field_output: answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: prxy5604/32c3b3db-88ee-43ae-b6dc-718b03f8ac5e hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/09742d408b3e40b8_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 4120de73-b539-4260-b3b8-ea8a765a1cc0 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 4120de73-b539-4260-b3b8-ea8a765a1cc0 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 32c3b3db-88ee-43ae-b6dc-718b03f8ac5e This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.3](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.2389 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.3377 | 0.0039 | 1 | 1.3770 | | 0.7555 | 0.1959 | 50 | 0.3204 | | 0.8682 | 0.3918 | 100 | 0.2728 | | 0.751 | 0.5877 | 150 | 0.2457 | | 0.9927 | 0.7835 | 200 | 0.2389 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
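The run above overrides the default Adam hyperparameters (beta2=0.95, epsilon=1e-5) and uses an 8-bit AdamW. A rough sketch of how those settings map onto plain transformers.TrainingArguments outside of Axolotl; this mirrors the listed hyperparameters but is not the exact training setup:

```python
# Sketch: TrainingArguments that approximate the optimizer/scheduler settings listed in the card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="miner_id_24",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,   # 8 * 4 = effective batch size of 32 on a single GPU
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    max_steps=200,
    optim="adamw_bnb_8bit",          # 8-bit AdamW from bitsandbytes
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-5,
    max_grad_norm=1.0,
    bf16=True,
    gradient_checkpointing=True,
)
```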
daniel40/32d076b3-a056-44a7-a2db-84e1dcb3784e
daniel40
2025-01-31T06:48:33Z
7
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "base_model:adapter:EleutherAI/pythia-1b", "license:apache-2.0", "region:us" ]
null
2025-01-31T06:47:47Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-1b tags: - axolotl - generated_from_trainer model-index: - name: 32d076b3-a056-44a7-a2db-84e1dcb3784e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-1b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 0344751d9f880319_train_data.json ds_type: json format: custom path: /workspace/input_data/0344751d9f880319_train_data.json type: field_input: phonemes field_instruction: text field_output: text_description format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: daniel40/32d076b3-a056-44a7-a2db-84e1dcb3784e hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/0344751d9f880319_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f2cede56-40e6-4279-be11-96fdf946d3ea wandb_project: Birthday-SN56-28-Gradients-On-Demand wandb_run: your_name wandb_runid: f2cede56-40e6-4279-be11-96fdf946d3ea warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 32d076b3-a056-44a7-a2db-84e1dcb3784e This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.3045 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0007 | 1 | 3.7691 | | 14.2561 | 0.0093 | 13 | 2.3598 | | 8.7334 | 0.0186 | 26 | 1.4787 | | 6.3238 | 0.0278 | 39 | 1.3045 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
abaddon182/7fe1be7a-e798-4be9-be49-a3c53fccffec
abaddon182
2025-01-31T06:48:04Z
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-1.7B", "base_model:adapter:unsloth/SmolLM2-1.7B", "license:apache-2.0", "region:us" ]
null
2025-01-31T06:36:34Z
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-1.7B tags: - axolotl - generated_from_trainer model-index: - name: 7fe1be7a-e798-4be9-be49-a3c53fccffec results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM2-1.7B bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 46397f8cdcd4e3d9_train_data.json ds_type: json format: custom path: /workspace/input_data/46397f8cdcd4e3d9_train_data.json type: field_instruction: text_1 field_output: text_2 format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: abaddon182/7fe1be7a-e798-4be9-be49-a3c53fccffec hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/46397f8cdcd4e3d9_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 476c1471-b039-4cbd-bceb-4edfe8ad68f7 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 476c1471-b039-4cbd-bceb-4edfe8ad68f7 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 7fe1be7a-e798-4be9-be49-a3c53fccffec This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7223 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3739 | 0.0038 | 1 | 1.3400 | | 0.8225 | 0.1881 | 50 | 0.7893 | | 0.7231 | 0.3763 | 100 | 0.7516 | | 0.8433 | 0.5644 | 150 | 0.7263 | | 0.8778 | 0.7526 | 200 | 0.7223 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
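The lora_r, lora_alpha, and lora_dropout values in these Axolotl configs correspond directly to PEFT's LoraConfig. A sketch of a roughly equivalent configuration for the card above; the target_modules value is an assumption standing in for lora_target_linear: true:

```python
# Sketch: PEFT LoraConfig roughly matching lora_r=64, lora_alpha=128, lora_dropout=0.05.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules="all-linear",  # assumption standing in for Axolotl's lora_target_linear: true
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-1.7B")
model = get_peft_model(base, config)
model.print_trainable_parameters()  # shows how small the trainable adapter is relative to the base
```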
clarxus/28e54171-ba50-4c01-aeeb-78dc8eb9961c
clarxus
2025-01-31T06:46:46Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Meta-Llama-3.1-8B", "base_model:adapter:unsloth/Meta-Llama-3.1-8B", "license:llama3.1", "region:us" ]
null
2025-01-31T05:20:18Z
--- library_name: peft license: llama3.1 base_model: unsloth/Meta-Llama-3.1-8B tags: - axolotl - generated_from_trainer model-index: - name: 28e54171-ba50-4c01-aeeb-78dc8eb9961c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Meta-Llama-3.1-8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 192b329300a02d89_train_data.json ds_type: json format: custom path: /workspace/input_data/192b329300a02d89_train_data.json type: field_instruction: premise field_output: hypothesis format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: clarxus/28e54171-ba50-4c01-aeeb-78dc8eb9961c hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/192b329300a02d89_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: d1126d90-ba0a-4b25-b1cb-9536b7243f7e wandb_project: Gradients-On-Seven wandb_run: your_name wandb_runid: d1126d90-ba0a-4b25-b1cb-9536b7243f7e warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 28e54171-ba50-4c01-aeeb-78dc8eb9961c This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.6403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 2.3082 | | 2.352 | 0.0030 | 9 | 2.0138 | | 0.9935 | 0.0059 | 18 | 0.9759 | | 0.8039 | 0.0089 | 27 | 0.8253 | | 0.8526 | 0.0118 | 36 | 0.7441 | | 0.7287 | 0.0148 | 45 | 0.6996 | | 0.6026 | 0.0177 | 54 | 0.6770 | | 0.5989 | 0.0207 | 63 | 0.6590 | | 0.6283 | 0.0236 | 72 | 0.6498 | | 0.6213 | 0.0266 | 81 | 0.6441 | | 0.6398 | 0.0296 | 90 | 0.6411 | | 0.6699 | 0.0325 | 99 | 0.6403 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
hongngo/3878b55b-df4b-4456-8dcb-2266ff75306f
hongngo
2025-01-31T06:46:04Z
5
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:berkeley-nest/Starling-LM-7B-alpha", "base_model:adapter:berkeley-nest/Starling-LM-7B-alpha", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T06:17:22Z
--- library_name: peft license: apache-2.0 base_model: berkeley-nest/Starling-LM-7B-alpha tags: - axolotl - generated_from_trainer model-index: - name: 3878b55b-df4b-4456-8dcb-2266ff75306f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: berkeley-nest/Starling-LM-7B-alpha bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - dffa8fc58ce66dc6_train_data.json ds_type: json format: custom path: /workspace/input_data/dffa8fc58ce66dc6_train_data.json type: field_instruction: title field_output: text format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: hongngo/3878b55b-df4b-4456-8dcb-2266ff75306f hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/dffa8fc58ce66dc6_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 3878b55b-df4b-4456-8dcb-2266ff75306f This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9933 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.9948 | 0.0972 | 200 | 0.9933 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
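Several of these runs load the base model in 8-bit with bitsandbytes before attaching the LoRA weights. A sketch of that kind of quantized load in plain Transformers, with ids taken from the card and the quantization config kept minimal:

```python
# Sketch: load the base model in 8-bit with bitsandbytes, then attach the LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "berkeley-nest/Starling-LM-7B-alpha"
adapter_id = "hongngo/3878b55b-df4b-4456-8dcb-2266ff75306f"

bnb = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```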
nhunglaaaaaaa/df2defed-f47e-4360-bf9f-0fd29cb5fa2c
nhunglaaaaaaa
2025-01-31T06:45:54Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:dltjdgh0928/test_instruction", "base_model:adapter:dltjdgh0928/test_instruction", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T05:14:18Z
--- library_name: peft license: apache-2.0 base_model: dltjdgh0928/test_instruction tags: - axolotl - generated_from_trainer model-index: - name: df2defed-f47e-4360-bf9f-0fd29cb5fa2c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: dltjdgh0928/test_instruction bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 445036244439be21_train_data.json ds_type: json format: custom path: /workspace/input_data/445036244439be21_train_data.json type: field_input: new_response field_instruction: prompt field_output: org_response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhunglaaaaaaa/df2defed-f47e-4360-bf9f-0fd29cb5fa2c hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/445036244439be21_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 8d4144fc-9ff0-40f6-938c-971bb0af2635 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 8d4144fc-9ff0-40f6-938c-971bb0af2635 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # df2defed-f47e-4360-bf9f-0fd29cb5fa2c This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7487 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.4188 | 0.0056 | 200 | 0.7487 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
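Cards like the one above feed a custom JSON dataset through a format / no_input_format template pair. A small sketch approximating how one record could be turned into a prompt string; the field names and templates come from the card, the example record is made up, and the real Axolotl implementation may differ in detail:

```python
# Sketch: turn one training record into a prompt string using the card's templates.
FORMAT = "{instruction} {input}"        # format
NO_INPUT_FORMAT = "{instruction}"       # no_input_format

def build_prompt(record: dict) -> str:
    instruction = record.get("prompt", "")       # field_instruction: prompt
    input_text = record.get("new_response", "")  # field_input: new_response
    if input_text:
        return FORMAT.format(instruction=instruction, input=input_text)
    return NO_INPUT_FORMAT.format(instruction=instruction)

# Hypothetical record; the training target would come from field_output (org_response).
record = {"prompt": "Rewrite this politely:", "new_response": "No.", "org_response": "I'm afraid not."}
print(build_prompt(record))
```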
tejas-vaia/ft_llama_3_2_test_31_12_2024_10_04
tejas-vaia
2025-01-31T06:42:34Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-01-31T06:40:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dynapp/lora_model
dynapp
2025-01-31T06:40:44Z
18
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-31T05:53:08Z
--- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** dynapp - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
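A hedged sketch of loading this Unsloth-trained adapter for inference with the Unsloth API; the repo id comes from the card, while sequence length, 4-bit loading, and the prompt are illustrative assumptions:

```python
# Sketch: load the adapter with Unsloth and switch it to inference mode.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="dynapp/lora_model",  # adapter repo; the 4-bit base it was trained on is resolved from its config
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster generation path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```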
daniel40/731ae3e3-7fd8-4d2f-bc94-ee06f8c3ba32
daniel40
2025-01-31T06:34:43Z
13
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0", "base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0", "license:llama3", "region:us" ]
null
2025-01-31T06:30:17Z
--- library_name: peft license: llama3 base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0 tags: - axolotl - generated_from_trainer model-index: - name: 731ae3e3-7fd8-4d2f-bc94-ee06f8c3ba32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 594acf1a1ccb4752_train_data.json ds_type: json format: custom path: /workspace/input_data/594acf1a1ccb4752_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: daniel40/731ae3e3-7fd8-4d2f-bc94-ee06f8c3ba32 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: constant max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/594acf1a1ccb4752_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: aabd8aec-07d3-4064-82eb-acdd95e34794 wandb_project: Birthday-SN56-31-Gradients-On-Demand wandb_run: your_name wandb_runid: aabd8aec-07d3-4064-82eb-acdd95e34794 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 731ae3e3-7fd8-4d2f-bc94-ee06f8c3ba32 This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3465 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0023 | 1 | 0.8498 | | 0.4707 | 0.1168 | 50 | 0.4562 | | 0.397 | 0.2336 | 100 | 0.3965 | | 0.3722 | 0.3505 | 150 | 0.3733 | | 0.3228 | 0.4673 | 200 | 0.3465 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
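Unlike most runs above, this one pairs a constant learning-rate schedule with a short warmup. A small stand-alone sketch of such a schedule via transformers.get_scheduler; the tiny optimizer here is a placeholder, and "constant_with_warmup" is used so the 10 warmup steps actually take effect (plain "constant" would ignore them):

```python
# Sketch: a constant-after-warmup learning-rate schedule, roughly matching
# lr_scheduler: constant, warmup_steps: 10, max_steps: 200 from the card.
import torch
from transformers import get_scheduler

params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder for the model's trainable parameters
optimizer = torch.optim.AdamW(params, lr=2e-4)

scheduler = get_scheduler(
    "constant_with_warmup",
    optimizer=optimizer,
    num_warmup_steps=10,
    num_training_steps=200,
)

for _ in range(200):
    optimizer.step()
    scheduler.step()
```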
mrhunghd/f1631a0d-ccd2-49c1-8dce-a7d76efe8270
mrhunghd
2025-01-31T06:25:23Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-7B-Instruct", "base_model:adapter:unsloth/Qwen2-7B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T04:33:16Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: f1631a0d-ccd2-49c1-8dce-a7d76efe8270 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 5fb110e3c74c3130_train_data.json ds_type: json format: custom path: /workspace/input_data/5fb110e3c74c3130_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: mrhunghd/f1631a0d-ccd2-49c1-8dce-a7d76efe8270 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/5fb110e3c74c3130_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 5cf40287-99df-483d-bba9-4777509422cc wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 5cf40287-99df-483d-bba9-4777509422cc warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # f1631a0d-ccd2-49c1-8dce-a7d76efe8270 This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5542 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5074 | 0.0058 | 200 | 0.5542 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
beast33/3dd3b2b3-9f22-4256-a50e-beaed4eb2960
beast33
2025-01-31T06:25:18Z
9
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "base_model:adapter:EleutherAI/pythia-1b", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T06:02:41Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-1b tags: - axolotl - generated_from_trainer model-index: - name: 3dd3b2b3-9f22-4256-a50e-beaed4eb2960 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-1b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 977dc84035480475_train_data.json ds_type: json format: custom path: /workspace/input_data/977dc84035480475_train_data.json type: field_input: teasertext field_instruction: title field_output: content format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: beast33/3dd3b2b3-9f22-4256-a50e-beaed4eb2960 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/977dc84035480475_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 85a00e0a-85dc-4dff-9962-251f13377a58 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 85a00e0a-85dc-4dff-9962-251f13377a58 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 3dd3b2b3-9f22-4256-a50e-beaed4eb2960 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.3137 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.7456 | 0.0379 | 200 | 2.3137 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
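Pythia/GPT-NeoX tokenizers ship without a padding token, which is why these configs set pad_token: <|endoftext|>. A one-line equivalent in plain Transformers, shown as a sketch:

```python
# Sketch: give the Pythia tokenizer a pad token so batched padding works.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b")
if tokenizer.pad_token is None:
    tokenizer.pad_token = "<|endoftext|>"  # same choice as the Axolotl config above

batch = tokenizer(["short", "a somewhat longer sequence"], padding=True, return_tensors="pt")
print(batch["input_ids"].shape)
```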
beercan/fish-classification
beercan
2025-01-31T06:23:56Z
6
0
null
[ "tensorboard", "safetensors", "vit", "image-classification", "pytorch", "huggingpics", "model-index", "region:us" ]
image-classification
2025-01-31T06:23:49Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: fish-classification results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.2810077667236328 --- # fish-classification Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### arctic char ![arctic char](images/arctic_char.jpg) #### asp fish ![asp fish](images/asp_fish.jpg) #### atlantic cod ![atlantic cod](images/atlantic_cod.jpg) #### atlantic halibyt ![atlantic halibyt](images/atlantic_halibyt.jpg) #### atlantic herring ![atlantic herring](images/atlantic_herring.jpg) #### atlantic mackerel ![atlantic mackerel](images/atlantic_mackerel.jpg) #### atlantic salmon ![atlantic salmon](images/atlantic_salmon.jpg) #### common bleak fish ![common bleak fish](images/common_bleak_fish.jpg) #### common bream ![common bream](images/common_bream.jpg) #### crucian carp ![crucian carp](images/crucian_carp.jpg) #### cuckoo wrasse fish ![cuckoo wrasse fish](images/cuckoo_wrasse_fish.jpg) #### european plaice ![european plaice](images/european_plaice.jpg) #### grayling fish ![grayling fish](images/grayling_fish.jpg) #### haddock fish ![haddock fish](images/haddock_fish.jpg) #### perch ![perch](images/perch.jpg) #### pike ![pike](images/pike.jpg) #### pollock fish ![pollock fish](images/pollock_fish.jpg) #### rainbow trout ![rainbow trout](images/rainbow_trout.jpg) #### roach fish ![roach fish](images/roach_fish.jpg) #### tench fish ![tench fish](images/tench_fish.jpg) #### trout ![trout](images/trout.jpg) #### white bream ![white bream](images/white_bream.jpg) #### zander fish ![zander fish](images/zander_fish.jpg)
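A minimal sketch of querying this HuggingPics classifier through the Transformers pipeline API; the image path is a placeholder:

```python
# Sketch: classify a fish photo with the fine-tuned ViT model above.
from transformers import pipeline

classifier = pipeline("image-classification", model="beercan/fish-classification")
predictions = classifier("some_fish_photo.jpg")  # placeholder path or URL to an image

for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```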
mrferr3t/d48efd16-b1af-4738-ac00-2aeb52f40fc0
mrferr3t
2025-01-31T06:23:40Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:dltjdgh0928/test_instruction", "base_model:adapter:dltjdgh0928/test_instruction", "license:apache-2.0", "region:us" ]
null
2025-01-31T05:55:08Z
--- library_name: peft license: apache-2.0 base_model: dltjdgh0928/test_instruction tags: - axolotl - generated_from_trainer model-index: - name: d48efd16-b1af-4738-ac00-2aeb52f40fc0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: dltjdgh0928/test_instruction bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 445036244439be21_train_data.json ds_type: json format: custom path: /workspace/input_data/445036244439be21_train_data.json type: field_input: new_response field_instruction: prompt field_output: org_response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: 50 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: mrferr3t/d48efd16-b1af-4738-ac00-2aeb52f40fc0 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0005 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 99 micro_batch_size: 2 mlflow_experiment_name: /tmp/445036244439be21_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 300 saves_per_epoch: 0 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 8d4144fc-9ff0-40f6-938c-971bb0af2635 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 8d4144fc-9ff0-40f6-938c-971bb0af2635 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # d48efd16-b1af-4738-ac00-2aeb52f40fc0 This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8031 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 99 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 5.4896 | 0.0000 | 1 | 1.1178 | | 2.5694 | 0.0014 | 50 | 0.8031 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
sleepdeprived3/Mistral-Small-24B-Instruct-2501_EXL2_4bpw_H8
sleepdeprived3
2025-01-31T06:22:07Z
12
0
vllm
[ "vllm", "safetensors", "mistral", "text-generation", "transformers", "conversational", "en", "fr", "de", "es", "it", "pt", "zh", "ja", "ru", "ko", "base_model:mistralai/Mistral-Small-24B-Base-2501", "base_model:quantized:mistralai/Mistral-Small-24B-Base-2501", "license:apache-2.0", "text-generation-inference", "4-bit", "exl2", "region:us" ]
text-generation
2025-01-31T05:35:09Z
--- language: - en - fr - de - es - it - pt - zh - ja - ru - ko license: apache-2.0 library_name: vllm inference: false base_model: - mistralai/Mistral-Small-24B-Base-2501 extra_gated_description: >- If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. tags: - transformers --- # Model Card for Mistral-Small-24B-Instruct-2501 Mistral Small 3 (2501) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models! This model is an instruction-fine-tuned version of the base model: [Mistral-Small-24B-Base-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501). Mistral Small can be deployed locally and is exceptionally "knowledge-dense", fitting in a single RTX 4090 or a 32GB RAM MacBook once quantized. Perfect for: - Fast response conversational agents. - Low latency function calling. - Subject matter experts via fine-tuning. - Local inference for hobbyists and organizations handling sensitive data. For enterprises that need specialized capabilities (increased context, particular modalities, domain specific knowledge, etc.), we will be releasing commercial models beyond what Mistral AI contributes to the community. This release demonstrates our commitment to open source, serving as a strong base model. Learn more about Mistral Small in our [blog post](https://mistral.ai/news/mistral-small-3/). Model developer: Mistral AI Team ## Key Features - **Multilingual:** Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish. - **Agent-Centric:** Offers best-in-class agentic capabilities with native function calling and JSON outputting. - **Advanced Reasoning:** State-of-the-art conversational and reasoning capabilities. - **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes. - **Context Window:** A 32k context window. - **System Prompt:** Maintains strong adherence and support for system prompts. - **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size. ## Benchmark results ### Human evaluated benchmarks | Category | Gemma-2-27B | Qwen-2.5-32B | Llama-3.3-70B | Gpt4o-mini | |----------|-------------|--------------|---------------|------------| | Mistral is better | 0.536 | 0.496 | 0.192 | 0.200 | | Mistral is slightly better | 0.196 | 0.184 | 0.164 | 0.204 | | Ties | 0.052 | 0.060 | 0.236 | 0.160 | | Other is slightly better | 0.060 | 0.088 | 0.112 | 0.124 | | Other is better | 0.156 | 0.172 | 0.296 | 0.312 | **Note**: - We conducted side-by-side evaluations with an external third-party vendor, on a set of over 1k proprietary coding and generalist prompts. - Evaluators were tasked with selecting their preferred model response from anonymized generations produced by Mistral Small 3 vs another model. - We are aware that in some cases the benchmarks on human judgement starkly differ from publicly available benchmarks, but have taken extra caution in verifying a fair evaluation. We are confident that the above benchmarks are valid.
### Publicly accessible benchmarks **Reasoning & Knowledge** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | mmlu_pro_5shot_cot_instruct | 0.663 | 0.536 | 0.666 | 0.683 | 0.617 | | gpqa_main_cot_5shot_instruct | 0.453 | 0.344 | 0.531 | 0.404 | 0.377 | **Math & Coding** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | humaneval_instruct_pass@1 | 0.848 | 0.732 | 0.854 | 0.909 | 0.890 | | math_instruct | 0.706 | 0.535 | 0.743 | 0.819 | 0.761 | **Instruction following** | Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 | |------------|---------------|--------------|---------------|---------------|-------------| | mtbench_dev | 8.35 | 7.86 | 7.96 | 8.26 | 8.33 | | wildbench | 52.27 | 48.21 | 50.04 | 52.73 | 56.13 | | arena_hard | 0.873 | 0.788 | 0.840 | 0.860 | 0.897 | | ifeval | 0.829 | 0.8065 | 0.8835 | 0.8401 | 0.8499 | **Note**: - Performance accuracy on all benchmarks was obtained through the same internal evaluation pipeline - as such, numbers may vary slightly from previously reported performance ([Qwen2.5-32B-Instruct](https://qwenlm.github.io/blog/qwen2.5/), [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), [Gemma-2-27B-IT](https://huggingface.co/google/gemma-2-27b-it)). - Judge-based evals such as Wildbench, Arena hard and MTBench were based on gpt-4o-2024-05-13. ### Basic Instruct Template (V7-Tekken) ``` <s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST] ``` *`<system_prompt>`, `<user message>` and `<assistant response>` are placeholders.* ***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth*** ## Usage The model can be used with the following frameworks: - [`vllm`](https://github.com/vllm-project/vllm): See [here](#vLLM) - [`transformers`](https://github.com/huggingface/transformers): See [here](#Transformers) ### vLLM We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm) to implement production-ready inference pipelines. **Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`. **Note 2**: Make sure to add a system prompt to the model to best tailor it for your needs. If you want to use the model as a general assistant, we recommend the following system prompt: ``` system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris. Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30. When you're not sure about some information, you say that you don't have the information and don't make up anything. If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g.
\"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")""" ``` **_Installation_** Make sure you install [`vLLM >= 0.6.4`](https://github.com/vllm-project/vllm/releases/tag/v0.6.4): ``` pip install --upgrade vllm ``` Also make sure you have [`mistral_common >= 1.5.2`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.2) installed: ``` pip install --upgrade mistral_common ``` You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or on the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39). #### Server We recommand that you use Mistral-Small-24B-Instruct-2501 in a server/client setting. 1. Spin up a server: ``` vllm serve mistralai/Mistral-Small-24B-Instruct-2501 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice ``` **Note:** Running Mistral-Small-24B-Instruct-2501 on GPU requires ~55 GB of GPU RAM in bf16 or fp16. 2. To ping the client you can use a simple Python snippet. ```py import requests import json from datetime import datetime, timedelta url = "http://<your-server>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Mistral-Small-24B-Instruct-2501" messages = [ { "role": "system", "content": "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat." }, { "role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French." }, ] data = {"model": model, "messages": messages} response = requests.post(url, headers=headers, data=json.dumps(data)) print(response.json()["choices"][0]["message"]["content"]) # Sure, here are five non-formal ways to say "See you later" in French: # # 1. À plus tard # 2. À plus # 3. Salut # 4. À toute # 5. Bisous # # ``` # /\_/\ # ( o.o ) # > ^ < # ``` ``` ### Function calling Mistral-Small-24-Instruct-2501 is excellent at function / tool calling tasks via vLLM. *E.g.:* <details> <summary>Example</summary> ```py import requests import json from huggingface_hub import hf_hub_download from datetime import datetime, timedelta url = "http://<your-url>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Mistral-Small-24B-Instruct-2501" def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() today = datetime.today().strftime("%Y-%m-%d") yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d") model_name = repo_id.split("/")[-1] return system_prompt.format(name=model_name, today=today, yesterday=yesterday) SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt") tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "city": { "type": "string", "description": "The city to find the weather for, e.g. 'San Francisco'", }, "state": { "type": "string", "description": "The state abbreviation, e.g. 
'CA' for California", }, "unit": { "type": "string", "description": "The unit for temperature", "enum": ["celsius", "fahrenheit"], }, }, "required": ["city", "state", "unit"], }, }, }, { "type": "function", "function": { "name": "rewrite", "description": "Rewrite a given text for improved clarity", "parameters": { "type": "object", "properties": { "text": { "type": "string", "description": "The input text to rewrite", } }, }, }, }, ] messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.", }, { "role": "assistant", "content": "", "tool_calls": [ { "id": "bbc5b7ede", "type": "function", "function": { "name": "rewrite", "arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}', }, } ], }, { "role": "tool", "content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}', "tool_call_id": "bbc5b7ede", "name": "rewrite", }, { "role": "assistant", "content": "---\n\nOpenAI is a FOR-profit company.", }, { "role": "user", "content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?", }, ] data = {"model": model, "messages": messages, "tools": tools} response = requests.post(url, headers=headers, data=json.dumps(data)) print(response.json()["choices"][0]["message"]["tool_calls"]) # [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}] ``` </details> #### Offline ```py from vllm import LLM from vllm.sampling_params import SamplingParams model_name = "mistralai/Mistral-Small-24B-Instruct-2501" SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat." user_prompt = "Give me 5 non-formal ways to say 'See you later' in French." messages = [ { "role": "system", "content": SYSTEM_PROMPT }, { "role": "user", "content": user_prompt }, ] # note that running this model on GPU requires over 60 GB of GPU RAM llm = LLM(model=model_name, tokenizer_mode="mistral", tensor_parallel_size=8) sampling_params = SamplingParams(max_tokens=512, temperature=0.15) outputs = llm.chat(messages, sampling_params=sampling_params) print(outputs[0].outputs[0].text) # Sure, here are five non-formal ways to say "See you later" in French: # # 1. À plus tard # 2. À plus # 3. Salut # 4. À toute # 5. Bisous # # ``` # /\_/\ # ( o.o ) # > ^ < # ``` ``` ### Transformers If you want to use Hugging Face transformers to generate text, you can do something like this. ```py from transformers import pipeline import torch messages = [ {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."}, ] chatbot = pipeline("text-generation", model="mistralai/Mistral-Small-24B-Instruct-2501", max_new_tokens=256, torch_dtype=torch.bfloat16) chatbot(messages) ``` ### Ollama [Ollama](https://github.com/ollama/ollama) can run this model locally on MacOS, Windows and Linux.
``` ollama run mistral-small ``` 4-bit quantization (aliased to default): ``` ollama run mistral-small:24b-instruct-2501-q4_K_M ``` 8-bit quantization: ``` ollama run mistral-small:24b-instruct-2501-q8_0 ``` FP16: ``` ollama run mistral-small:24b-instruct-2501-fp16 ```
JacksonBrune/ebc339f8-9ebe-45a5-b332-bcb99da7df75
JacksonBrune
2025-01-31T06:20:51Z
5
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:berkeley-nest/Starling-LM-7B-alpha", "base_model:adapter:berkeley-nest/Starling-LM-7B-alpha", "license:apache-2.0", "region:us" ]
null
2025-01-31T06:17:43Z
--- library_name: peft license: apache-2.0 base_model: berkeley-nest/Starling-LM-7B-alpha tags: - axolotl - generated_from_trainer model-index: - name: ebc339f8-9ebe-45a5-b332-bcb99da7df75 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: berkeley-nest/Starling-LM-7B-alpha bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - dffa8fc58ce66dc6_train_data.json ds_type: json format: custom path: /workspace/input_data/dffa8fc58ce66dc6_train_data.json type: field_instruction: title field_output: text format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: JacksonBrune/ebc339f8-9ebe-45a5-b332-bcb99da7df75 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/dffa8fc58ce66dc6_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a wandb_project: birthdya-sn56-18-Gradients-On-Demand wandb_run: your_name wandb_runid: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # ebc339f8-9ebe-45a5-b332-bcb99da7df75 This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0005 | 1 | nan | | 165.2308 | 0.0063 | 13 | nan | | 253.161 | 0.0126 | 26 | nan | | 271.5546 | 0.0190 | 39 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
JacksonBrune/ab5da776-ee8d-4412-92c4-ed3184ce6ffb
JacksonBrune
2025-01-31T06:20:43Z
7
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:berkeley-nest/Starling-LM-7B-alpha", "base_model:adapter:berkeley-nest/Starling-LM-7B-alpha", "license:apache-2.0", "region:us" ]
null
2025-01-31T06:17:25Z
--- library_name: peft license: apache-2.0 base_model: berkeley-nest/Starling-LM-7B-alpha tags: - axolotl - generated_from_trainer model-index: - name: ab5da776-ee8d-4412-92c4-ed3184ce6ffb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: berkeley-nest/Starling-LM-7B-alpha bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - dffa8fc58ce66dc6_train_data.json ds_type: json format: custom path: /workspace/input_data/dffa8fc58ce66dc6_train_data.json type: field_instruction: title field_output: text format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: JacksonBrune/ab5da776-ee8d-4412-92c4-ed3184ce6ffb hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/dffa8fc58ce66dc6_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a wandb_project: Birthday-SN56-12-Gradients-On-Demand wandb_run: your_name wandb_runid: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # ab5da776-ee8d-4412-92c4-ed3184ce6ffb This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.2493 | 0.0005 | 1 | nan | | 0.0 | 0.0063 | 13 | nan | | 0.0 | 0.0126 | 26 | nan | | 0.0 | 0.0190 | 39 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
kostiantynk/9019326c-5374-46b6-bddc-776db0fb373b
kostiantynk
2025-01-31T06:20:43Z
5
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:berkeley-nest/Starling-LM-7B-alpha", "base_model:adapter:berkeley-nest/Starling-LM-7B-alpha", "license:apache-2.0", "region:us" ]
null
2025-01-31T06:17:30Z
--- library_name: peft license: apache-2.0 base_model: berkeley-nest/Starling-LM-7B-alpha tags: - axolotl - generated_from_trainer model-index: - name: 9019326c-5374-46b6-bddc-776db0fb373b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: berkeley-nest/Starling-LM-7B-alpha bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - dffa8fc58ce66dc6_train_data.json ds_type: json format: custom path: /workspace/input_data/dffa8fc58ce66dc6_train_data.json type: field_instruction: title field_output: text format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: kostiantynk/9019326c-5374-46b6-bddc-776db0fb373b hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/dffa8fc58ce66dc6_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a wandb_project: Birthday-SN56-7-Gradients-On-Demand wandb_run: your_name wandb_runid: 73f2e9d8-c4f5-4163-bde3-27fae5504c6a warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 9019326c-5374-46b6-bddc-776db0fb373b This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0005 | 1 | nan | | 163.2988 | 0.0063 | 13 | nan | | 241.4237 | 0.0126 | 26 | nan | | 266.2712 | 0.0190 | 39 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
friendshipkim/3b_instruct_distill_30k_h0.45-i0.45-a0.0-d0.0_decode
friendshipkim
2025-01-31T06:19:07Z
52
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-01-29T15:48:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF
mradermacher
2025-01-31T06:13:27Z
187
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:SteelStorage/L3.1-MS-Astoria-70b-v2", "base_model:quantized:SteelStorage/L3.1-MS-Astoria-70b-v2", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-22T13:00:32Z
--- base_model: SteelStorage/L3.1-MS-Astoria-70b-v2 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/SteelStorage/L3.1-MS-Astoria-70b-v2 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3.1-MS-Astoria-70b-v2-i1-GGUF/resolve/main/L3.1-MS-Astoria-70b-v2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
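For the split i1-Q6_K quant above, the two parts simply need to be concatenated byte-for-byte into a single `.gguf` file before loading (equivalent to `cat part1of2 part2of2 > out.gguf`). A minimal Python sketch (not part of the original card), assuming both parts have been downloaded to the current directory:

```py
import shutil

# Join the two i1-Q6_K parts listed above into one GGUF file (plain byte concatenation).
parts = [
    "L3.1-MS-Astoria-70b-v2.i1-Q6_K.gguf.part1of2",
    "L3.1-MS-Astoria-70b-v2.i1-Q6_K.gguf.part2of2",
]
with open("L3.1-MS-Astoria-70b-v2.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream chunks so the ~58 GB file never sits in memory
```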
roleplaiapp/Qwen2.5-1.5B-Open-R1-Distill-IQ4_XS-GGUF
roleplaiapp
2025-01-31T06:10:03Z
5
0
transformers
[ "transformers", "gguf", "15b", "IQ4_XS", "distill", "iq4", "llama-cpp", "open", "qwen25", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T06:09:01Z
--- library_name: transformers pipeline_tag: text-generation tags: - 15b - IQ4_XS - distill - gguf - iq4 - llama-cpp - open - qwen25 - text-generation --- # roleplaiapp/Qwen2.5-1.5B-Open-R1-Distill-IQ4_XS-GGUF **Repo:** `roleplaiapp/Qwen2.5-1.5B-Open-R1-Distill-IQ4_XS-GGUF` **Original Model:** `Qwen2.5-1.5B-Open-R1-Distill` **Quantized File:** `Qwen2.5-1.5B-Open-R1-Distill.IQ4_XS.gguf` **Quantization:** `GGUF` **Quantization Method:** `IQ4_XS` ## Overview This is a GGUF IQ4_XS quantized version of Qwen2.5-1.5B-Open-R1-Distill ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
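As a usage sketch (not part of the original card), the quantized file can be loaded with the `llama-cpp-python` bindings; the local filename matches the quantized file listed above, while the context size and prompt are placeholders:

```py
from llama_cpp import Llama

# Load the IQ4_XS GGUF quant from a local path (download the file from this repo first).
llm = Llama(
    model_path="Qwen2.5-1.5B-Open-R1-Distill.IQ4_XS.gguf",
    n_ctx=4096,  # assumed context window; adjust as needed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```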
lesso17/6bb192ef-bdbc-4c97-8fa4-062460c78229
lesso17
2025-01-31T06:07:51Z
13
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/Mistral-Nemo-Base-2407", "base_model:adapter:unsloth/Mistral-Nemo-Base-2407", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T05:30:56Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Mistral-Nemo-Base-2407 tags: - axolotl - generated_from_trainer model-index: - name: 6bb192ef-bdbc-4c97-8fa4-062460c78229 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Mistral-Nemo-Base-2407 bf16: auto chat_template: llama3 datasets: - data_files: - e25cb6311706a7c7_train_data.json ds_type: json format: custom path: /workspace/input_data/e25cb6311706a7c7_train_data.json type: field_instruction: prompt_attack field_output: output_vittima format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso17/6bb192ef-bdbc-4c97-8fa4-062460c78229 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/e25cb6311706a7c7_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 768f12f5-c6fb-403d-9cec-27135dc3578c wandb_project: new-01-29 wandb_run: your_name wandb_runid: 768f12f5-c6fb-403d-9cec-27135dc3578c warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 6bb192ef-bdbc-4c97-8fa4-062460c78229 This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.6015 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
ardaspear/0a8ae3e3-ee83-4ff9-9eb4-1d7a02db3ee9
ardaspear
2025-01-31T06:07:14Z
6
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-2b-it", "base_model:adapter:unsloth/gemma-2b-it", "license:apache-2.0", "region:us" ]
null
2025-01-31T06:05:01Z
--- library_name: peft license: apache-2.0 base_model: unsloth/gemma-2b-it tags: - axolotl - generated_from_trainer model-index: - name: 0a8ae3e3-ee83-4ff9-9eb4-1d7a02db3ee9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-2b-it bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 938e7b961a3fae54_train_data.json ds_type: json format: custom path: /workspace/input_data/938e7b961a3fae54_train_data.json type: field_input: choices field_instruction: full_prompt field_output: example format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: ardaspear/0a8ae3e3-ee83-4ff9-9eb4-1d7a02db3ee9 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/938e7b961a3fae54_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: 264a9c6b-5cbc-436b-8c95-a81e899b2353 wandb_project: Gradients-On-Five wandb_run: your_name wandb_runid: 264a9c6b-5cbc-436b-8c95-a81e899b2353 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 0a8ae3e3-ee83-4ff9-9eb4-1d7a02db3ee9 This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0001 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 32 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0952 | 1 | 1.7894 | | 1.7797 | 0.2857 | 3 | 1.6186 | | 1.3176 | 0.5714 | 6 | 0.5547 | | 0.2744 | 0.8571 | 9 | 0.0006 | | 0.0012 | 1.1667 | 12 | 0.0012 | | 0.0013 | 1.4524 | 15 | 0.0004 | | 0.0005 | 1.7381 | 18 | 0.0002 | | 0.0002 | 2.0476 | 21 | 0.0001 | | 0.0002 | 2.3333 | 24 | 0.0001 | | 0.0001 | 2.6190 | 27 | 0.0001 | | 0.0001 | 2.9048 | 30 | 0.0001 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
thalllsssss/e81f180a-e69c-4e64-b86a-5baf21af7288
thalllsssss
2025-01-31T06:06:22Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T05:53:56Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: e81f180a-e69c-4e64-b86a-5baf21af7288 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-1.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 2e383d2714d74a06_train_data.json ds_type: json format: custom path: /workspace/input_data/2e383d2714d74a06_train_data.json type: field_instruction: positive field_output: query format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: thalllsssss/e81f180a-e69c-4e64-b86a-5baf21af7288 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/2e383d2714d74a06_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b60596d9-54ac-49d8-9b0e-043acc629d58 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b60596d9-54ac-49d8-9b0e-043acc629d58 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # e81f180a-e69c-4e64-b86a-5baf21af7288 This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.2984 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.7502 | 0.0744 | 200 | 2.2984 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
thaffggg/56208b13-9084-40b1-a5d2-7fb18ca40bb5
thaffggg
2025-01-31T06:05:11Z
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T05:51:17Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 56208b13-9084-40b1-a5d2-7fb18ca40bb5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-1.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 2e383d2714d74a06_train_data.json ds_type: json format: custom path: /workspace/input_data/2e383d2714d74a06_train_data.json type: field_instruction: positive field_output: query format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: thaffggg/56208b13-9084-40b1-a5d2-7fb18ca40bb5 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/2e383d2714d74a06_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b60596d9-54ac-49d8-9b0e-043acc629d58 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b60596d9-54ac-49d8-9b0e-043acc629d58 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 56208b13-9084-40b1-a5d2-7fb18ca40bb5 This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.3004 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.7637 | 0.0744 | 200 | 2.3004 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
fifxus/1b2487bd-e9bc-458c-bd4a-5bb3626a4150
fifxus
2025-01-31T06:04:12Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Meta-Llama-3.1-8B", "base_model:adapter:unsloth/Meta-Llama-3.1-8B", "license:llama3.1", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T05:30:44Z
--- library_name: peft license: llama3.1 base_model: unsloth/Meta-Llama-3.1-8B tags: - axolotl - generated_from_trainer model-index: - name: 1b2487bd-e9bc-458c-bd4a-5bb3626a4150 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Meta-Llama-3.1-8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 192b329300a02d89_train_data.json ds_type: json format: custom path: /workspace/input_data/192b329300a02d89_train_data.json type: field_instruction: premise field_output: hypothesis format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 2 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: true hub_model_id: fifxus/1b2487bd-e9bc-458c-bd4a-5bb3626a4150 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/192b329300a02d89_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: d1126d90-ba0a-4b25-b1cb-9536b7243f7e wandb_project: Gradients-On-10 wandb_run: your_name wandb_runid: d1126d90-ba0a-4b25-b1cb-9536b7243f7e warmup_steps: 5 weight_decay: 0.01 xformers_attention: null ``` </details><br> # 1b2487bd-e9bc-458c-bd4a-5bb3626a4150 This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.6426 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.2852 | 0.0164 | 200 | 0.6426 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
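The card above ships only a PEFT LoRA adapter, not merged weights, and includes no usage code. As a minimal editorial sketch (not part of the original card; the prompt and generation settings are illustrative only), the adapter could be attached to the base model named in the config roughly like this:

```python
# Hedged sketch: load the base model from the card, then attach this LoRA adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Meta-Llama-3.1-8B"                        # base model from the card
adapter_id = "fifxus/1b2487bd-e9bc-458c-bd4a-5bb3626a4150"   # this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)          # LoRA weights applied on top

prompt = "A premise sentence to turn into a hypothesis."     # illustrative input
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```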
jabigl2025/jabigl2025
jabigl2025
2025-01-31T06:02:21Z
60
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-01-31T05:46:24Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: jabigl2025 --- # Jabigl2025 <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `jabigl2025` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('jabigl2025/jabigl2025', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
ardaspear/fcaf515e-4c6b-4b25-8d38-1e85e7b76be8
ardaspear
2025-01-31T06:02:05Z
9
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/Mistral-Nemo-Instruct-2407", "base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407", "license:apache-2.0", "region:us" ]
null
2025-01-31T05:18:35Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Mistral-Nemo-Instruct-2407 tags: - axolotl - generated_from_trainer model-index: - name: fcaf515e-4c6b-4b25-8d38-1e85e7b76be8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Mistral-Nemo-Instruct-2407 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 272aed5fd2352d41_train_data.json ds_type: json format: custom path: /workspace/input_data/272aed5fd2352d41_train_data.json type: field_input: text field_instruction: instruction field_output: summary format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: ardaspear/fcaf515e-4c6b-4b25-8d38-1e85e7b76be8 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/272aed5fd2352d41_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: techspear-hub wandb_mode: online wandb_name: 1919911b-3d63-4d23-a0b1-85362cc587f6 wandb_project: Gradients-On-Five wandb_run: your_name wandb_runid: 1919911b-3d63-4d23-a0b1-85362cc587f6 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # fcaf515e-4c6b-4b25-8d38-1e85e7b76be8 This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7059 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0034 | 1 | 1.0754 | | 4.0505 | 0.0309 | 9 | 0.9271 | | 3.3192 | 0.0619 | 18 | 0.7819 | | 2.9275 | 0.0928 | 27 | 0.7441 | | 2.8036 | 0.1237 | 36 | 0.7271 | | 2.8454 | 0.1546 | 45 | 0.7202 | | 2.765 | 0.1856 | 54 | 0.7139 | | 2.799 | 0.2165 | 63 | 0.7117 | | 2.9671 | 0.2474 | 72 | 0.7080 | | 2.8564 | 0.2784 | 81 | 0.7073 | | 3.0606 | 0.3093 | 90 | 0.7060 | | 2.8253 | 0.3402 | 99 | 0.7059 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
aseratus1/b90c1d4f-3fba-4197-b619-66b3b10ec7b8
aseratus1
2025-01-31T06:00:42Z
15
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/Mistral-Nemo-Base-2407", "base_model:adapter:unsloth/Mistral-Nemo-Base-2407", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T05:34:51Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Mistral-Nemo-Base-2407 tags: - axolotl - generated_from_trainer model-index: - name: b90c1d4f-3fba-4197-b619-66b3b10ec7b8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Mistral-Nemo-Base-2407 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - e25cb6311706a7c7_train_data.json ds_type: json format: custom path: /workspace/input_data/e25cb6311706a7c7_train_data.json type: field_instruction: prompt_attack field_output: output_vittima format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 2 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: aseratus1/b90c1d4f-3fba-4197-b619-66b3b10ec7b8 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/e25cb6311706a7c7_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 768f12f5-c6fb-403d-9cec-27135dc3578c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 768f12f5-c6fb-403d-9cec-27135dc3578c warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # b90c1d4f-3fba-4197-b619-66b3b10ec7b8 This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1689 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.2429 | 0.6015 | 200 | 1.1689 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
llavallava/qwen2vl7b-instruct-trl-dpo-0_0.1_epochs1
llavallava
2025-01-31T05:59:16Z
29
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen2_vl", "image-text-to-text", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:Qwen/Qwen2-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-7B-Instruct", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-01-30T02:28:14Z
--- base_model: Qwen/Qwen2-VL-7B-Instruct library_name: transformers model_name: qwen2vl7b-instruct-trl-dpo-0_0.1_epochs1 tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for qwen2vl7b-instruct-trl-dpo-0_0.1_epochs1 This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="llavallava/qwen2vl7b-instruct-trl-dpo-0_0.1_epochs1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.13.0 - Transformers: 4.48.1 - Pytorch: 2.5.1+cu121 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
minhnguyennnnnn/6eb78e2d-6528-4cfd-9a03-203529b5981e
minhnguyennnnnn
2025-01-31T05:58:06Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-7B-Instruct", "base_model:adapter:unsloth/Qwen2-7B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T04:33:17Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 6eb78e2d-6528-4cfd-9a03-203529b5981e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 5fb110e3c74c3130_train_data.json ds_type: json format: custom path: /workspace/input_data/5fb110e3c74c3130_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: minhnguyennnnnn/6eb78e2d-6528-4cfd-9a03-203529b5981e hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/5fb110e3c74c3130_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 5cf40287-99df-483d-bba9-4777509422cc wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 5cf40287-99df-483d-bba9-4777509422cc warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 6eb78e2d-6528-4cfd-9a03-203529b5981e This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5540 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.505 | 0.0058 | 200 | 0.5540 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
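Like the other adapters in this list, the repo above stores LoRA weights only. A hedged sketch (editorial addition, not from the card; the output directory is hypothetical) of merging them into the base model so the result can be served without PEFT:

```python
# Hedged sketch: merge the LoRA adapter into unsloth/Qwen2-7B-Instruct and save a standalone copy.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-7B-Instruct", torch_dtype="auto")
merged = PeftModel.from_pretrained(base, "minhnguyennnnnn/6eb78e2d-6528-4cfd-9a03-203529b5981e").merge_and_unload()
merged.save_pretrained("qwen2-7b-instruct-merged")  # hypothetical output directory
```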
cilooor/6cb62604-3342-41b1-b572-195227013367
cilooor
2025-01-31T05:56:45Z
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-01-31T05:55:47Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 6cb62604-3342-41b1-b572-195227013367 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-0.5B-Instruct bf16: true chat_template: llama3 data_processes: 24 dataset_prepared_path: null datasets: - data_files: - 09bdae8113c1b1e3_train_data.json ds_type: json format: custom path: /workspace/input_data/09bdae8113c1b1e3_train_data.json type: field_instruction: inputs field_output: targets format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto early_stopping_patience: 4 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: cilooor/6cb62604-3342-41b1-b572-195227013367 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 7.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.07 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine lr_scheduler_warmup_steps: 50 max_grad_norm: 0.3 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/09bdae8113c1b1e3_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.999 adam_epsilon: 1e-8 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null seed: 17333 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer total_train_batch_size: 32 train_batch_size: 8 train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b1e9a00c-aacb-4b8d-8b7b-ef64c7ac8d32 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b1e9a00c-aacb-4b8d-8b7b-ef64c7ac8d32 warmup_steps: 30 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 6cb62604-3342-41b1-b572-195227013367 This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 17333 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.999,adam_epsilon=1e-8 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 30 - training_steps: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.4211 | 1 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/Qwen2.5-1.5B-Open-R1-Distill-Q3_K_M-GGUF
roleplaiapp
2025-01-31T05:56:33Z
7
0
transformers
[ "transformers", "gguf", "15b", "3-bit", "Q3_K_M", "distill", "llama-cpp", "open", "qwen25", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T05:55:36Z
--- library_name: transformers pipeline_tag: text-generation tags: - 15b - 3-bit - Q3_K_M - distill - gguf - llama-cpp - open - qwen25 - text-generation --- # roleplaiapp/Qwen2.5-1.5B-Open-R1-Distill-Q3_K_M-GGUF **Repo:** `roleplaiapp/Qwen2.5-1.5B-Open-R1-Distill-Q3_K_M-GGUF` **Original Model:** `Qwen2.5-1.5B-Open-R1-Distill` **Quantized File:** `Qwen2.5-1.5B-Open-R1-Distill.Q3_K_M.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q3_K_M` ## Overview This is a GGUF Q3_K_M quantized version of Qwen2.5-1.5B-Open-R1-Distill ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
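The quant card above names the GGUF file but gives no loading example. A hedged sketch (editorial addition; assumes a recent llama-cpp-python build with huggingface_hub support) of running the Q3_K_M file:

```python
# Hedged sketch: download and run the quantized GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="roleplaiapp/Qwen2.5-1.5B-Open-R1-Distill-Q3_K_M-GGUF",
    filename="Qwen2.5-1.5B-Open-R1-Distill.Q3_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(messages=[{"role": "user", "content": "Say hello."}])
print(out["choices"][0]["message"]["content"])
```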
prxy5604/6278db0d-feb1-43d1-9431-002f5a9f9b8b
prxy5604
2025-01-31T05:52:57Z
5
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Capybara-7B-V1", "base_model:adapter:NousResearch/Nous-Capybara-7B-V1", "license:mit", "region:us" ]
null
2025-01-31T05:25:12Z
--- library_name: peft license: mit base_model: NousResearch/Nous-Capybara-7B-V1 tags: - axolotl - generated_from_trainer model-index: - name: 6278db0d-feb1-43d1-9431-002f5a9f9b8b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Nous-Capybara-7B-V1 bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ad743851b20e49b8_train_data.json ds_type: json format: custom path: /workspace/input_data/ad743851b20e49b8_train_data.json type: field_input: rejected field_instruction: question field_output: chosen format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: prxy5604/6278db0d-feb1-43d1-9431-002f5a9f9b8b hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/ad743851b20e49b8_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 91f3c582-d815-402c-ab5e-ec71edf00cd7 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 91f3c582-d815-402c-ab5e-ec71edf00cd7 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 6278db0d-feb1-43d1-9431-002f5a9f9b8b This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9081 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.5052 | 0.0070 | 1 | 2.3536 | | 1.05 | 0.3509 | 50 | 1.0941 | | 0.9687 | 0.7018 | 100 | 0.9755 | | 0.8403 | 1.0526 | 150 | 0.9280 | | 0.7331 | 1.4035 | 200 | 0.9081 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso17/bd01b2bd-098d-4bd2-a9a0-3e02061b3382
lesso17
2025-01-31T05:51:35Z
7
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/Mistral-Nemo-Instruct-2407", "base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T04:47:03Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Mistral-Nemo-Instruct-2407 tags: - axolotl - generated_from_trainer model-index: - name: bd01b2bd-098d-4bd2-a9a0-3e02061b3382 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Mistral-Nemo-Instruct-2407 bf16: auto chat_template: llama3 datasets: - data_files: - 272aed5fd2352d41_train_data.json ds_type: json format: custom path: /workspace/input_data/272aed5fd2352d41_train_data.json type: field_input: text field_instruction: instruction field_output: summary format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso17/bd01b2bd-098d-4bd2-a9a0-3e02061b3382 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/272aed5fd2352d41_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 1919911b-3d63-4d23-a0b1-85362cc587f6 wandb_project: new-01-29 wandb_run: your_name wandb_runid: 1919911b-3d63-4d23-a0b1-85362cc587f6 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # bd01b2bd-098d-4bd2-a9a0-3e02061b3382 This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.1719 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
roleplaiapp/Qwen2.5-1.5B-Open-R1-Distill-Q2_K-GGUF
roleplaiapp
2025-01-31T05:47:46Z
6
0
transformers
[ "transformers", "gguf", "15b", "2-bit", "Q2_K", "distill", "llama-cpp", "open", "qwen25", "text-generation", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-01-31T05:47:03Z
--- library_name: transformers pipeline_tag: text-generation tags: - 15b - 2-bit - Q2_K - distill - gguf - llama-cpp - open - qwen25 - text-generation --- # roleplaiapp/Qwen2.5-1.5B-Open-R1-Distill-Q2_K-GGUF **Repo:** `roleplaiapp/Qwen2.5-1.5B-Open-R1-Distill-Q2_K-GGUF` **Original Model:** `Qwen2.5-1.5B-Open-R1-Distill` **Quantized File:** `Qwen2.5-1.5B-Open-R1-Distill.Q2_K.gguf` **Quantization:** `GGUF` **Quantization Method:** `Q2_K` ## Overview This is a GGUF Q2_K quantized version of Qwen2.5-1.5B-Open-R1-Distill ## Quantization By I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful. Andrew Webby @ [RolePlai](https://roleplai.app/).
narendra960/klomena0.3
narendra960
2025-01-31T05:46:55Z
20
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-31T05:45:56Z
--- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** narendra960 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
alchemist69/b25eaa88-bb19-4c27-ab8b-2392aa17843e
alchemist69
2025-01-31T05:46:52Z
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-3B-Instruct", "base_model:adapter:unsloth/Llama-3.2-3B-Instruct", "license:llama3.2", "region:us" ]
null
2025-01-31T05:14:35Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-3B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: b25eaa88-bb19-4c27-ab8b-2392aa17843e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.2-3B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - e7bd19db21230602_train_data.json ds_type: json format: custom path: /workspace/input_data/e7bd19db21230602_train_data.json type: field_input: '' field_instruction: previous_text field_output: gold_text format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: alchemist69/b25eaa88-bb19-4c27-ab8b-2392aa17843e hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/e7bd19db21230602_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 5bab7eb6-24a0-48e7-9528-0f2435909dce wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 5bab7eb6-24a0-48e7-9528-0f2435909dce warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # b25eaa88-bb19-4c27-ab8b-2392aa17843e This model is a fine-tuned version of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.7263 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1148 | 0.0058 | 1 | 2.0874 | | 1.8801 | 0.2903 | 50 | 1.7841 | | 1.876 | 0.5806 | 100 | 1.7700 | | 1.7818 | 0.8708 | 150 | 1.7340 | | 1.6505 | 1.1611 | 200 | 1.7263 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso17/dc5d8c04-cf51-421a-be2a-ff1ec149020e
lesso17
2025-01-31T05:44:47Z
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:jhflow/mistral7b-lora-multi-turn-v2", "base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2", "8-bit", "bitsandbytes", "region:us" ]
null
2025-01-31T05:01:15Z
--- library_name: peft base_model: jhflow/mistral7b-lora-multi-turn-v2 tags: - axolotl - generated_from_trainer model-index: - name: dc5d8c04-cf51-421a-be2a-ff1ec149020e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: jhflow/mistral7b-lora-multi-turn-v2 bf16: auto chat_template: llama3 datasets: - data_files: - bd759e5c8d2b027f_train_data.json ds_type: json format: custom path: /workspace/input_data/bd759e5c8d2b027f_train_data.json type: field_input: answers field_instruction: topic field_output: text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso17/dc5d8c04-cf51-421a-be2a-ff1ec149020e hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/bd759e5c8d2b027f_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 3217968f-95e4-42f6-ab2b-878e655e1370 wandb_project: new-01-29 wandb_run: your_name wandb_runid: 3217968f-95e4-42f6-ab2b-878e655e1370 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # dc5d8c04-cf51-421a-be2a-ff1ec149020e This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.5431 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
nectec/Pathumma-llm-vision-2.0.0-preview
nectec
2025-01-31T05:44:40Z
128
0
null
[ "safetensors", "qwen2_vl", "visual-question-answering", "th", "arxiv:2409.12191", "arxiv:2308.12966", "base_model:Qwen/Qwen2-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-7B-Instruct", "region:us" ]
visual-question-answering
2025-01-30T14:53:17Z
--- language: - th metrics: - sacrebleu base_model: - Qwen/Qwen2-VL-7B-Instruct pipeline_tag: visual-question-answering --- # Pathumma-llm-vision-2.0.0-preview ## Model Overview Pathumma-llm-vision-2.0.0-preview is a multi-modal language model fine-tuned for Visual Question Answering (VQA) and Image Captioning tasks. It contains 8 billion parameters and leverages both image and text processing to understand and generate multi-modal content. - **Model Name**: Pathumma-llm-vision-2.0.0-preview - **Base Model**: Qwen/Qwen2-VL-7B-Instruct - **Architecture**: Multi-modal LLM (Visual Language Model) - **Parameters**: 7 Billion - **Organization**: NECTEC - **License**: [Specify License] ## Intended Use - **Primary Use Cases**: - Visual Question Answering (VQA) - Image Captioning - **Intended Users**: Developers, researchers, and AI practitioners working on multi-modal tasks. - **Possible Applications**: Educational tools, accessibility applications, interactive visual content generation. ## Model Description Pathumma-llm-vision-2.0.0-preview is designed to perform multi-modal tasks by integrating both visual and textual information. The model is fine-tuned on diverse datasets to improve its ability to understand and generate content that aligns with both image and text inputs. ## Training Data The model was fine-tuned on several datasets: - **Thai Image Caption**: Data sourced from image captioning competitions on Kaggle. - **Small-Thai-Wikipedia**: Articles in Thai from Wikipedia. ### Dataset Size - **Training Dataset Size**: 132,946 examples - **Validation Dataset Size**: - examples ## Training Details - **Hardware Used**: - **HPC Cluster**: Lanta - **Number of Nodes**: 4 Nodes - **GPUs per Node**: 4 GPUs - **Total GPUs Used**: 16 GPUs - **Fine-tuning Duration**: 20 hours, 34 minutes, and 43 seconds (excluding evaluation) ## Evaluation Results | Type | Encoder | Decoder | IPU24-dataset <br>(test) <br>(Sentence SacreBLEU) | |----------------------------------------|------------------------------------|-------------------------------------|-------------------------------| | Pathumma-llm-vision-beta-0.0.0 | siglip-so400m-patch14-384 | Meta-Llama-3.1-8B-Instruct | 13.45412 | | Pathumma-llm-vision-1.0.0 | siglip-so400m-patch14-384 | Meta-Llama-3.1-8B-Instruct | 17.66370 | | Pathumma-llm-vision-2.0.0-preview | Qwen2-VL-7B-Instruct | Qwen2-VL-7B-Instruct | **19.112962** | **Note**: Scores for models that were not specifically fine-tuned on the IPU24 dataset may be less representative of IPU24 performance. ## Required Libraries Before you start, ensure you have the following libraries installed: ``` pip install transformers==4.48.1 accelerate peft bitsandbytes qwen-vl-utils[decord]==0.0.8 ``` ## Usage We provide an [inference tutorial](https://colab.research.google.com/drive/1URMEJr2P_9JK0BvBzFv4NN4824iAf0y4#scrollTo=_S-LoNKcv8ww). 
To use the model with the Hugging Face `transformers` library: ```python import re import time import torch import matplotlib.pyplot as plt from PIL import Image from qwen_vl_utils import process_vision_info from peft import get_peft_model, LoraConfig from transformers import BitsAndBytesConfig from transformers import ( Qwen2VLForConditionalGeneration, Qwen2VLProcessor, ) ``` ```python MODEL_ID = "nectec/Pathumma-llm-vision-2.0.0-preview" DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") USE_QLORA = True lora_config = LoraConfig( lora_alpha=16, lora_dropout=0.05, r=8, bias="none", target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM", ) if USE_QLORA: bnb_config = BitsAndBytesConfig( load_in_8bit=True, # load_in_4bit=True, # bnb_4bit_use_double_quant=True, # bnb_4bit_quant_type="nf4", # bnb_4bit_compute_type=torch.bfloat16 ) model = Qwen2VLForConditionalGeneration.from_pretrained( MODEL_ID, device_map="auto", quantization_config=bnb_config if USE_QLORA else None, torch_dtype=torch.bfloat16 ) model = get_peft_model(model, lora_config) model.print_trainable_parameters() MIN_PIXELS = 256 * 28 * 28 MAX_PIXELS = 1280 * 28 * 28 processor = Qwen2VLProcessor.from_pretrained(MODEL_ID, min_pixels=MIN_PIXELS, max_pixels=MAX_PIXELS) def encode_via_processor(image, instruction, question): if isinstance(image, str): local_path = image image = Image.open(local_path) messages = [ { "role": "system", "content": [{"type": "text", "text": instruction}] }, { "role": "user", "content": [ { "type": "image" }, { "type": "text", "text": question } ] }, ] text = processor.apply_chat_template( messages, add_generation_prompt=True, ).strip() def convert_img(image): width, height = image.size factor = processor.image_processor.patch_size * processor.image_processor.merge_size if width < factor: image = image.copy().resize((factor, factor * height // width)) elif height < factor: image = image.copy().resize((factor * width // height, factor)) return image image_inputs = [convert_img(image)] encoding = processor( text=text, images=image_inputs, videos=None, return_tensors="pt", ) ## Remove batch dimension # encoding = {k:v.squeeze(dim=0) for k,v in encoding.items()} encoding = {k: v.to(DEVICE) for k, v in encoding.items()} inputs = encoding return inputs def encode_via_processor_extlib(local_path, instruction, question): img_path = "file://" + local_path messages = [ { "role": "system", "content": [{"type": "text", "text": instruction}] }, { "role": "user", "content": [ { "type": "image", "image": img_path, }, { "type": "text", "text": question } ] }, ] text = processor.apply_chat_template( messages, add_generation_prompt=True, ).strip() image_inputs, video_inputs = process_vision_info(messages) encoding = processor( text=text, images=image_inputs, videos=video_inputs, return_tensors="pt", ) ## Remove batch dimension # encoding = {k:v.squeeze(dim=0) for k,v in encoding.items()} encoding = {k: v.to(DEVICE) for k, v in encoding.items()} inputs = encoding return inputs def inference(inputs): start_time = time.time() model.eval() with torch.inference_mode(): # Generate generated_ids = model.generate( **inputs, max_new_tokens=256, temperature=.1, # repetition_penalty=1.2, # top_k=2, # top_p=1, ) generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True) end_time = time.time() ## Get latency_time... 
latency_time = end_time - start_time answer_prompt = [*map( lambda x: re.sub(r"assistant(:|\n)?", "<||SEP-ASSIST||>", x).split('<||SEP-ASSIST||>')[-1].strip(), generated_texts )] predict_output = generated_texts[0] response = re.sub(r"assistant(:|\n)?", "<||SEP-ASSIST||>", predict_output).split('<||SEP-ASSIST||>')[-1].strip() return predict_output, response, round(latency_time, 3) instruction = "You are a helpful assistant." def response_image(img_path, question, instruction=instruction): image = Image.open(img_path) _, response, latency_time = inference(encode_via_processor(image=image, instruction=instruction, question=question)) print("RESPONSE".center(60, "=")) print(response) print(latency_time, "sec.") print("IMAGE".center(60, "=")) plt.imshow(image) plt.show() # Output processing (depends on task requirements) question = "อธิบายภาพนี้" img_path = "/content/The Most Beautiful Public High School in Every State in America.jpg" response_image(img_path, question) >>> ==========================RESPONSE========================== >>> อาคารสีน้ำตาลขนาดใหญ่ที่มีเสาไฟฟ้าอยู่ด้านหน้าและมีต้นไม้อยู่ด้านข้าง >>> 7.987 sec. >>> ===========================IMAGE============================ >>> <IMAGE_MATPLOTLIB> ``` ## Limitations and Biases - The model may exhibit biases due to the training data, which might not be fully representative of all contexts. - Performance may degrade on unfamiliar images or non-standard question formats. ## Ethical Considerations - The model should not be used to generate misleading information or in ways that violate privacy. - Consider fairness and minimize bias when using the model for language and image processing tasks. ## Citation If you use this model, please cite it as follows: ```bibtex @misc{PathummaVision, author = {Thirawarit Pitiphiphat and NECTEC Team}, title = {nectec/Pathumma-llm-vision-2.0.0-preview}, year = {2025}, url = {https://huggingface.co/nectec/Pathumma-llm-vision-2.0.0-preview} } ``` ```bibtex @article{Qwen2VL, title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution}, author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang}, journal={arXiv preprint arXiv:2409.12191}, year={2024} } @article{Qwen-VL, title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond}, author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren}, journal={arXiv preprint arXiv:2308.12966}, year={2023} } ``` ## **Contributor Contract** **Vision Team** Thirawarit Pitiphiphat ([email protected])<br> Theerasit Issaranon ([email protected]) ## Contact For questions or support, please contact **https://discord.gg/3WJwJjZt7r**.
mradermacher/dclm-id-1.4b-GGUF
mradermacher
2025-01-31T05:40:52Z
198
0
transformers
[ "transformers", "gguf", "en", "base_model:ThisIsATest/dclm-id-1.4b", "base_model:quantized:ThisIsATest/dclm-id-1.4b", "endpoints_compatible", "region:us" ]
null
2025-01-31T05:20:05Z
--- base_model: ThisIsATest/dclm-id-1.4b language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/ThisIsATest/dclm-id-1.4b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/dclm-id-1.4b-GGUF/resolve/main/dclm-id-1.4b.Q2_K.gguf) | Q2_K | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/dclm-id-1.4b-GGUF/resolve/main/dclm-id-1.4b.Q3_K_S.gguf) | Q3_K_S | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/dclm-id-1.4b-GGUF/resolve/main/dclm-id-1.4b.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/dclm-id-1.4b-GGUF/resolve/main/dclm-id-1.4b.IQ4_XS.gguf) | IQ4_XS | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/dclm-id-1.4b-GGUF/resolve/main/dclm-id-1.4b.Q3_K_L.gguf) | Q3_K_L | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/dclm-id-1.4b-GGUF/resolve/main/dclm-id-1.4b.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dclm-id-1.4b-GGUF/resolve/main/dclm-id-1.4b.Q4_K_M.gguf) | Q4_K_M | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dclm-id-1.4b-GGUF/resolve/main/dclm-id-1.4b.Q5_K_S.gguf) | Q5_K_S | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/dclm-id-1.4b-GGUF/resolve/main/dclm-id-1.4b.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/dclm-id-1.4b-GGUF/resolve/main/dclm-id-1.4b.Q6_K.gguf) | Q6_K | 1.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/dclm-id-1.4b-GGUF/resolve/main/dclm-id-1.4b.Q8_0.gguf) | Q8_0 | 1.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/dclm-id-1.4b-GGUF/resolve/main/dclm-id-1.4b.f16.gguf) | f16 | 2.9 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
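The quant table above lists per-file GGUF variants. A hedged sketch (editorial addition) of fetching one of them with huggingface_hub before loading it in a GGUF-capable runtime such as llama.cpp:

```python
# Hedged sketch: download a single quant file listed in the table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/dclm-id-1.4b-GGUF",
    filename="dclm-id-1.4b.Q4_K_M.gguf",
)
print(path)  # pass this local path to a GGUF runtime, e.g. ./llama-cli -m <path> -p "..."
```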