modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string, 502 classes) | tags (sequence of strings) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
mradermacher/DeepSeek-R1-Distill-Llama-8B_synthetic_1-GGUF | mradermacher | 2025-01-26T06:03:07Z | 2,439 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:Kunakornjack/DeepSeek-R1-Distill-Llama-8B_synthetic_1",
"base_model:quantized:Kunakornjack/DeepSeek-R1-Distill-Llama-8B_synthetic_1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-26T05:03:37Z | ---
base_model: Kunakornjack/DeepSeek-R1-Distill-Llama-8B_synthetic_1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Kunakornjack/DeepSeek-R1-Distill-Llama-8B_synthetic_1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
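As a concrete illustration (not taken from this card), the static quants listed below can also be run directly from Python with the `llama-cpp-python` bindings; the file name follows the Q4_K_M row of the table, and the context size is an arbitrary choice:
```python
# Hedged sketch: download one static quant from this repo and run it with
# llama-cpp-python. The chosen file and n_ctx are illustrative assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/DeepSeek-R1-Distill-Llama-8B_synthetic_1-GGUF",
    filename="DeepSeek-R1-Distill-Llama-8B_synthetic_1.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization does."}]
)
print(out["choices"][0]["message"]["content"])
```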
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B_synthetic_1-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B_synthetic_1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B_synthetic_1-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B_synthetic_1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B_synthetic_1-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B_synthetic_1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B_synthetic_1-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B_synthetic_1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B_synthetic_1-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B_synthetic_1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B_synthetic_1-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B_synthetic_1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B_synthetic_1-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B_synthetic_1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B_synthetic_1-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B_synthetic_1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B_synthetic_1-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B_synthetic_1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B_synthetic_1-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B_synthetic_1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B_synthetic_1-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B_synthetic_1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-8B_synthetic_1-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B_synthetic_1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RUC-AIBOX/STILL-3-1.5B-preview | RUC-AIBOX | 2025-01-26T06:02:00Z | 623 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2411.11694",
"arxiv:2412.09413",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-25T16:59:23Z | ---
library_name: transformers
tags: []
---
# Introduction
We release **STILL-3-1.5B-preview**, a slow-thinking reasoning model that achieves 39.33% accuracy on the AIME benchmark! We apply reinforcement learning to a 1.5B model and observe continuous performance improvement as the number of training steps increases. To make it easier to reproduce our work and to advance research progress, we open-source our code, model, and data.
Code: https://github.com/RUCAIBox/Slow_Thinking_with_LLMs
# Evaluation
We evaluated the model on four benchmarks: MATH, AIME, OMNI, and LiveAOPS. For MATH and AIME, we used sampling-based decoding with a temperature of 0.6 and a top-p of 0.95; each question was sampled 64 times and the average score was reported. For OMNI and LiveAOPS (August-November 2024), we randomly sampled a subset of problems whose answers are integers to facilitate automated evaluation, and used greedy decoding. The trained model, STILL-3-1.5B-preview, achieves a significant improvement: accuracy on AIME rises from 28.67% to 39.33%, a relative improvement of 37.18%.
| | MATH | AIME | OMNI | LiveAOPS | Avg. |
| --- | :---: | :---: | :---: | :---: | :---: |
| Backbone | 84.04 | 28.67 | 25.60 | 33.33 | 42.91 |
| STILL-3-1.5B-preview | **85.48** | **39.33** | **33.00** | **39.50** | **49.33** |
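For reference, the sampling protocol described above (64 samples per question at temperature 0.6 and top-p 0.95, averaged per question) can be sketched with vLLM as follows; this is an illustration of the described setup, not the authors' evaluation script, and the answer checker is a placeholder:
```python
# Hedged sketch of the sampled-decoding evaluation described above.
from vllm import LLM, SamplingParams

llm = LLM(model="RUC-AIBOX/STILL-3-1.5B-preview", dtype="bfloat16")
params = SamplingParams(n=64, temperature=0.6, top_p=0.95, max_tokens=32768)

def average_accuracy(prompts, references, is_correct):
    """Average per-question accuracy over the 64 samples; is_correct is a user-supplied checker."""
    outputs = llm.generate(prompts, params)
    per_question = []
    for out, ref in zip(outputs, references):
        hits = sum(is_correct(sample.text, ref) for sample in out.outputs)
        per_question.append(hits / len(out.outputs))
    return sum(per_question) / len(per_question)
```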
# Quick Start
```
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_path = "RUC-AIBOX/STILL-3-1.5B-preview"

# Load the tokenizer (used to build the chat prompt)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Input text
question = "Convert the point $(0,3)$ in rectangular coordinates to polar coordinates. Enter your answer in the form $(r,\\theta),$ where $r > 0$ and $0 \\le \\theta < 2 \\pi.$"

input_prompts = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}],
    tokenize=False,
    add_generation_prompt=True
)

# Model and sampling parameters
llm = LLM(model=model_path, tensor_parallel_size=1, dtype='bfloat16')
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=32768, seed=42, skip_special_tokens=False)

# Completion
responses = llm.generate(input_prompts, sampling_params)
print(responses[0].outputs[0].text)
```
# Reference
Please kindly cite our reports if they are helpful for your research.
```
@article{Slow_Thinking_with_LLMs_3_Preview,
title={STILL-3-1.5B-preview: Enhancing Slow Thinking Abilities of Small Models through Reinforcement Learning},
author={RUCAIBox STILL Team},
url={https://github.com/RUCAIBox/Slow_Thinking_with_LLMs},
year={2025}
}
```
```
@article{Slow_Thinking_with_LLMs_1,
title={Enhancing LLM Reasoning with Reward-guided Tree Search},
author={Jiang, Jinhao and Chen, Zhipeng and Min, Yingqian and Chen, Jie and Cheng, Xiaoxue and Wang, Jiapeng and Tang, Yiru and Sun, Haoxiang and Deng, Jia and Zhao, Wayne Xin and Liu, Zheng and Yan, Dong and Xie, Jian and Wang, Zhongyuan and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2411.11694},
year={2024}
}
```
```
@article{Slow_Thinking_with_LLMs_2,
title={Imitate, Explore, and Self-Improve: A Reproduction Report on Slow-thinking Reasoning Systems},
author={Min, Yingqian and Chen, Zhipeng and Jiang, Jinhao and Chen, Jie and Deng, Jia and Hu, Yiwen and Tang, Yiru and Wang, Jiapeng and Cheng, Xiaoxue and Song, Huatong and Zhao, Wayne Xin and Liu, Zheng and Wang, Zhongyuan and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2412.09413},
year={2024}
}
```
|
Futuresony/Future_pics_26-01-2025 | Futuresony | 2025-01-26T06:00:28Z | 6 | 0 | diffusers | [
"diffusers",
"finance",
"text-to-image",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"base_model:deepseek-ai/DeepSeek-R1",
"base_model:finetune:deepseek-ai/DeepSeek-R1",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-01-26T05:56:48Z | ---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
metrics:
- bleu
base_model:
- deepseek-ai/DeepSeek-R1
new_version: deepseek-ai/DeepSeek-R1
pipeline_tag: text-to-image
library_name: diffusers
tags:
- finance
--- |
kostiantynk-out/7f8afdb7-3684-4cdd-825f-aa47d4a36962 | kostiantynk-out | 2025-01-26T05:58:15Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"region:us"
] | null | 2025-01-26T05:57:10Z | ---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7f8afdb7-3684-4cdd-825f-aa47d4a36962
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - b355c3ff95258244_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/b355c3ff95258244_train_data.json
  type:
    field_instruction: input
    field_output: output
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk-out/7f8afdb7-3684-4cdd-825f-aa47d4a36962
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b355c3ff95258244_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
  pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22487750-366e-41ca-8395-d8629638fd03
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22487750-366e-41ca-8395-d8629638fd03
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7f8afdb7-3684-4cdd-825f-aa47d4a36962
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2322
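Since this repository is a PEFT LoRA adapter for [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) (see the config above), a minimal loading sketch, assuming the standard PEFT workflow, looks like this:
```python
# Hedged sketch: attach this LoRA adapter to its declared base model with PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "tokyotech-llm/Llama-3-Swallow-8B-v0.1"
adapter_id = "kostiantynk-out/7f8afdb7-3684-4cdd-825f-aa47d4a36962"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # loads the adapter weights on top of the base model
```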
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5738 | 0.0028 | 1 | 1.4206 |
| 1.4077 | 0.0085 | 3 | 1.3855 |
| 1.2158 | 0.0170 | 6 | 0.8057 |
| 0.4077 | 0.0255 | 9 | 0.2322 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/881df212-3b29-4eff-b06b-55e945d1e0f0 | Best000 | 2025-01-26T05:55:04Z | 9 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"base_model:adapter:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"license:gemma",
"region:us"
] | null | 2025-01-26T05:28:40Z | ---
library_name: peft
license: gemma
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 881df212-3b29-4eff-b06b-55e945d1e0f0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - d9ae9af1d1d23889_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/d9ae9af1d1d23889_train_data.json
  type:
    field_instruction: input_persona
    field_output: prompt
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/881df212-3b29-4eff-b06b-55e945d1e0f0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/d9ae9af1d1d23889_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d1ddd83d-3254-4f1a-93a9-98ee3250c38a
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d1ddd83d-3254-4f1a-93a9-98ee3250c38a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 881df212-3b29-4eff-b06b-55e945d1e0f0
This model is a fine-tuned version of [UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3355 | 0.0001 | 1 | 1.3673 |
| 1.1577 | 0.0002 | 3 | 1.3508 |
| 1.3385 | 0.0003 | 6 | 1.1737 |
| 1.0103 | 0.0005 | 9 | 1.0451 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ClarenceDan/64d67e54-f90e-40ea-ac90-e58c0094a5c8 | ClarenceDan | 2025-01-26T05:54:21Z | 9 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"base_model:adapter:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"license:gemma",
"region:us"
] | null | 2025-01-26T05:28:14Z | ---
library_name: peft
license: gemma
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 64d67e54-f90e-40ea-ac90-e58c0094a5c8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - d9ae9af1d1d23889_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/d9ae9af1d1d23889_train_data.json
  type:
    field_instruction: input_persona
    field_output: prompt
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/64d67e54-f90e-40ea-ac90-e58c0094a5c8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/d9ae9af1d1d23889_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d1ddd83d-3254-4f1a-93a9-98ee3250c38a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d1ddd83d-3254-4f1a-93a9-98ee3250c38a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 64d67e54-f90e-40ea-ac90-e58c0094a5c8
This model is a fine-tuned version of [UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3355 | 0.0001 | 1 | 1.3673 |
| 1.1574 | 0.0002 | 3 | 1.3502 |
| 1.3364 | 0.0003 | 6 | 1.1701 |
| 1.0098 | 0.0005 | 9 | 1.0442 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
prxy5604/6e03b9eb-bc21-4bc9-90c9-1a515278b1a2 | prxy5604 | 2025-01-26T05:50:55Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"region:us"
] | null | 2025-01-26T05:18:34Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6e03b9eb-bc21-4bc9-90c9-1a515278b1a2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
  - e722133f6ff26062_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/e722133f6ff26062_train_data.json
  type:
    field_instruction: context
    field_output: question
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/6e03b9eb-bc21-4bc9-90c9-1a515278b1a2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
  0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/e722133f6ff26062_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 60c2573c-863d-40d4-92b5-0522184a2c6f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 60c2573c-863d-40d4-92b5-0522184a2c6f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6e03b9eb-bc21-4bc9-90c9-1a515278b1a2
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1641 | 0.0019 | 1 | 3.7553 |
| 1.7007 | 0.0946 | 50 | 1.3936 |
| 1.2217 | 0.1892 | 100 | 1.2340 |
| 1.1767 | 0.2838 | 150 | 1.1350 |
| 1.1497 | 0.3784 | 200 | 1.1049 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Rich-J/subnet29_upload_c00_Jan26_0 | Rich-J | 2025-01-26T05:50:53Z | 125 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-26T05:46:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
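The repository tags declare a `transformers` Llama checkpoint for text generation, so a generic loading sketch (an assumption, not author-provided code) would be:
```python
# Generic sketch based only on the repo tags (transformers, llama, text-generation);
# not code supplied by the model author.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Rich-J/subnet29_upload_c00_Jan26_0"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```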
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
suzall/llama-3.2-3b-linkbox-finetune | suzall | 2025-01-26T05:50:35Z | 29 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3.2",
"fine-tuned",
"conversational",
"question-answering",
"agentic-ai",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-23T12:08:10Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
tags:
- llama-3.2
- fine-tuned
- conversational
- question-answering
- agentic-ai
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
# Model Card for Llama-3.2-3B-Linkbox-Finetune
## Model Details
### Model Description
A fine-tuned version of Meta's Llama 3.2-3B model optimized for contextual understanding and link analysis in conversational AI applications. This model demonstrates enhanced performance in:
- Multi-turn dialogue systems
- Knowledge retrieval and synthesis
- Contextual link recognition and analysis
- Agentic workflow orchestration
**Developed by:** Sujal Tamrakar
**Model type:** Transformer-based language model with Grouped-Query Attention (GQA)
**Language(s):** Primarily English, with capabilities in German, French, Italian, Portuguese, Hindi, Spanish, and Thai
**License:** Llama 3.2 Community License ([full terms](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE))
**Finetuned from:** meta-llama/Llama-3.2-3B-Instruct
### Model Sources
- **Repository:** [Your GitHub Repository Link]
- **Base Model:** [Meta Llama 3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B)
- **Demo:** [Link to Gradio/Streamlit Demo]
## Uses
### Direct Use
- Contextual link analysis in documents
- Multi-turn conversational agents
- Knowledge retrieval and synthesis systems
- Agentic workflow automation:cite[7]
### Downstream Use
- Enterprise knowledge management systems
- AI-powered research assistants
- Context-aware content recommendation engines
- Automated documentation analysis tools
### Out-of-Scope Use
- Medical/legal decision making
- Generating malicious content
- High-risk government applications
- Languages beyond supported list without proper safety testing
## Bias, Risks, and Limitations
- May reflect biases in pretraining data
- Limited knowledge cutoff (December 2023)
- Potential hallucination in long-form generation
- Performance degradation on highly technical domains
### Recommendations
- Implement content filtering (e.g., Llama Guard 3)
- Use constrained decoding techniques
- Monitor for factual accuracy in critical applications
- Conduct safety testing for target deployment languages
## How to Get Started
```python
import torch
from transformers import pipeline

model_id = "suzall/llama-3.2-3b-linkbox-finetune"
pipe = pipeline(
    "text-generation",
    model=model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16
)
messages = [{
    "role": "user",
    "content": "Analyze links in this text: [YOUR_TEXT]"
}]
outputs = pipe(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1]["content"])  # the last turn is the assistant's reply
```
## Training Details
### Training Data
- FineTome-100k dataset (conversational format)
- in-specific link analysis corpus (10k samples)
- Synthetic data generated using Llama 3.1-8B
### Training Procedure
- **Architecture:** LoRA fine-tuning with r=32
- **Optimizer:** AdamW-8bit
- **Learning Rate:** 2e-4 with linear decay
- **Sequence Length:** 2048 tokens
- **Hardware:** NVIDIA A100 (40GB)
- **Training Time:** 8 GPU hours
#### Training Hyperparameters
```python
from transformers import TrainingArguments

# Values from the card; output_dir is a placeholder, not a documented value.
training_args = TrainingArguments(
    output_dir="llama-3.2-3b-linkbox-finetune",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    learning_rate=2e-4,
    bf16=True,
    lr_scheduler_type="linear",
)
```
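The LoRA side of the recipe (r=32, as listed under Training Procedure) might be configured with `peft` roughly as below; `lora_alpha`, dropout, and target modules are illustrative assumptions rather than documented values:
```python
# Hedged sketch of a LoRA configuration matching "r=32" above; alpha, dropout,
# and target modules are assumptions for illustration, not documented values.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,
    lora_alpha=32,                 # assumption
    lora_dropout=0.05,             # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
# The config would then be applied to the base model with peft.get_peft_model(model, lora_config).
```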
## Evaluation
### Benchmark Performance
| Benchmark | Score | Comparison |
|------------------|-------|-----------------|
| IFEval (Strict) | 78.2 | +1.3 vs base |
| LinkAnalysis-API | 89.4 | Custom metric |
| MMLU | 63.7 | -0.6 vs base |
## Environmental Impact
- **Carbon Emissions:** ~0.8 kgCO2eq (estimated)
- **Hardware:** 1×A100-40GB
- **Energy:** 2.5kWh (Renewable-powered)
## Technical Specifications
### Model Architecture
- Transformer-based with GQA
- 3.21B parameters
- 32-layer decoder
- 4096 hidden dimension
- 128k token context window
### Quantization Options
| Precision | Memory | Recommended Use |
|-----------|--------|---------------------|
| BF16 | 6.5GB | Full precision |
| FP8 | 3.2GB | Balanced |
| INT4 | 1.75GB | Edge deployment |
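As a hedged example of the INT4 row (not part of the original card), the model can be loaded in 4-bit with `bitsandbytes` via `transformers`; actual memory use depends on hardware and settings:
```python
# Hedged sketch: 4-bit loading with bitsandbytes, matching the INT4 row above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "suzall/llama-3.2-3b-linkbox-finetune"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```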
## Model Card Contact
- **Maintainer:** Sujal Tamrakar
- **Email:** [email protected] |
chauhoang/d69b4d2b-7228-416f-a445-6797c41fd456 | chauhoang | 2025-01-26T05:45:41Z | 12 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B",
"base_model:adapter:unsloth/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T05:39:12Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d69b4d2b-7228-416f-a445-6797c41fd456
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - e79aa413a56fb417_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/e79aa413a56fb417_train_data.json
  type:
    field_instruction: prompt
    field_output: chosen
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/d69b4d2b-7228-416f-a445-6797c41fd456
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/e79aa413a56fb417_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4692c3b1-0351-4533-948d-ace8c76ceb1f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4692c3b1-0351-4533-948d-ace8c76ceb1f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d69b4d2b-7228-416f-a445-6797c41fd456
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | nan |
| 0.0 | 0.0048 | 10 | nan |
| 0.0 | 0.0097 | 20 | nan |
| 0.0 | 0.0145 | 30 | nan |
| 0.0 | 0.0193 | 40 | nan |
| 0.0 | 0.0242 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nblinh63/b539cd6e-3d91-4d14-9b04-e58017dcde76 | nblinh63 | 2025-01-26T05:44:59Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:adapter:TinyLlama/TinyLlama_v1.1",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T05:33:13Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama_v1.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b539cd6e-3d91-4d14-9b04-e58017dcde76
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama_v1.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - f251bafddc1c416f_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/f251bafddc1c416f_train_data.json
  type:
    field_input: item_cast
    field_instruction: item_title
    field_output: comment
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh63/b539cd6e-3d91-4d14-9b04-e58017dcde76
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f251bafddc1c416f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
  pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b7c42af7-32e6-4423-bce5-9d6119627078
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b7c42af7-32e6-4423-bce5-9d6119627078
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b539cd6e-3d91-4d14-9b04-e58017dcde76
This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2430
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.0363 | 0.0565 | 200 | 4.2430 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrHungddddh/4c33ec74-70d3-422a-ab86-a58af09ba89d | mrHungddddh | 2025-01-26T05:44:41Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:adapter:TinyLlama/TinyLlama_v1.1",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T05:33:11Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama_v1.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4c33ec74-70d3-422a-ab86-a58af09ba89d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama_v1.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - f251bafddc1c416f_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/f251bafddc1c416f_train_data.json
  type:
    field_input: item_cast
    field_instruction: item_title
    field_output: comment
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHungddddh/4c33ec74-70d3-422a-ab86-a58af09ba89d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f251bafddc1c416f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
  pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b7c42af7-32e6-4423-bce5-9d6119627078
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b7c42af7-32e6-4423-bce5-9d6119627078
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4c33ec74-70d3-422a-ab86-a58af09ba89d
This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2430
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.0363 | 0.0565 | 200 | 4.2430 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aleegis12/1661a9e5-fc70-4e3d-b425-6214f9268ae2 | aleegis12 | 2025-01-26T05:44:25Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T05:34:57Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1661a9e5-fc70-4e3d-b425-6214f9268ae2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m-deduped
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
  - 59ebf80954a6130a_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/59ebf80954a6130a_train_data.json
  type:
    field_input: solution_steps
    field_instruction: problem
    field_output: solution
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/1661a9e5-fc70-4e3d-b425-6214f9268ae2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
  0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/59ebf80954a6130a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
  pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f210f656-5c7e-4a29-80ca-643c4317c822
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f210f656-5c7e-4a29-80ca-643c4317c822
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1661a9e5-fc70-4e3d-b425-6214f9268ae2
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.4407 | 0.0002 | 1 | 4.5460 |
| 37.5818 | 0.0085 | 50 | 3.7542 |
| 14.5638 | 0.0170 | 100 | 3.2181 |
| 12.6851 | 0.0255 | 150 | 2.1898 |
| 14.1543 | 0.0341 | 200 | 2.2198 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
poooj/XLMHateSpeechClassification | poooj | 2025-01-26T05:43:45Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-26T05:26:44Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: XLMHateSpeechClassification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMHateSpeechClassification
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7826
- Accuracy: 0.8319
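A minimal inference sketch (not part of the original card) using the `transformers` text-classification pipeline; label names come from the model's config and are not documented here:
```python
# Hedged sketch: classify text with the fine-tuned XLM-RoBERTa checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="poooj/XLMHateSpeechClassification")
print(classifier("Example sentence to classify."))
```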
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5749 | 1.0 | 1137 | 0.5185 | 0.7780 |
| 0.4785 | 2.0 | 2274 | 0.5458 | 0.7681 |
| 0.4545 | 3.0 | 3411 | 0.4246 | 0.8154 |
| 0.3938 | 4.0 | 4548 | 0.5763 | 0.8176 |
| 0.3554 | 5.0 | 5685 | 0.6506 | 0.8154 |
| 0.3368 | 6.0 | 6822 | 0.7826 | 0.8319 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
kk-aivio/397027cc-2c26-4a79-b5c2-1532f4d74039 | kk-aivio | 2025-01-26T05:42:52Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-01-26T05:41:25Z | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 397027cc-2c26-4a79-b5c2-1532f4d74039
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 6b30f33bbd9cba22_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/6b30f33bbd9cba22_train_data.json
  type:
    field_input: reasoning
    field_instruction: question
    field_output: answer
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/397027cc-2c26-4a79-b5c2-1532f4d74039
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/6b30f33bbd9cba22_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
  pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66ffa688-b6ab-4800-bb73-500be3c51df8
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66ffa688-b6ab-4800-bb73-500be3c51df8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 397027cc-2c26-4a79-b5c2-1532f4d74039
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
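Since no usage guidance is documented, the snippet below is only a minimal sketch of loading this LoRA adapter on top of `defog/sqlcoder-7b-2` with PEFT; the dtype, device placement, and example prompt are assumptions, and note that the reported validation loss is `nan`, so output quality is unverified.

```python
# Hedged sketch: attach the adapter to its base model and generate (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "defog/sqlcoder-7b-2"
adapter_id = "kk-aivio/397027cc-2c26-4a79-b5c2-1532f4d74039"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # load the LoRA weights from this repo

prompt = "Question: List all customers who placed an order in 2023."  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```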
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0010 | 1 | nan |
| 0.0 | 0.0029 | 3 | nan |
| 0.1493 | 0.0058 | 6 | nan |
| 0.0 | 0.0087 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
AMindToThink/gemma-2-2b_RMU_s100_a100_layer7 | AMindToThink | 2025-01-26T05:42:28Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-26T05:40:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
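As a placeholder until the authors document usage, here is a minimal, hedged sketch using the standard `transformers` text-generation pipeline; the prompt and generation settings are illustrative assumptions.

```python
# Hedged sketch of basic inference with this checkpoint (not an official example).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="AMindToThink/gemma-2-2b_RMU_s100_a100_layer7",
    torch_dtype="auto",
    device_map="auto",
)
print(generator("The capital of France is", max_new_tokens=20)[0]["generated_text"])
```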
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
strangerzonehf/Inkk | strangerzonehf | 2025-01-26T05:40:19Z | 423 | 11 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-01-25T15:44:20Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: 'Inkk, A black and white portrait of a womans face. The womans head is facing the left side of the frame. Her hair is cut in a bun. Her eyes are wide open. Her eyebrows are black and her lips are painted black. Her mouth is painted white. Her nose is black. She has a black microphone in her mouth. The background is white.'
output:
url: images/1.png
- text: 'Inkk, A black and white drawing of a mans face. The man has a black mustache that is trimmed in black. His eyes are blue and he has black hair. He is wearing a black collar with black stripes on it. He also has earphones in his ears. The background is white.'
output:
url: images/2.png
- text: 'Inkk, A black and white monochromatic portrait of a womans face. The womans head is facing the left side of the frame, her hair cascades over her shoulders. She is wearing a black dress with a white stripe down the center of her neck. Her ear is encased in a silver earring. Her hair is pulled back in a ponytail, adding a pop of color to the scene. The background is a stark white, creating a stark contrast to the womans silhouette.'
output:
url: images/3.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Inkk
license: apache-2.0
---

<Gallery />
# Model description for Inkk
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 19 & 2770 |
| Epoch | 23 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 55
## Best Dimensions & Inference
| **Dimensions** | **Aspect Ratio** | **Recommendation** |
|-----------------|------------------|---------------------------|
| 1280 x 832 | 3:2 | Best |
| 1024 x 1024 | 1:1 | Default |
### Inference Range
- **Recommended Inference Steps:** 30–35
## Setting Up
```python
import torch
from diffusers import DiffusionPipeline
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
lora_repo = "strangerzonehf/Inkk"
trigger_word = "Inkk"
pipe.load_lora_weights(lora_repo)
device = torch.device("cuda")
pipe.to(device)
```
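A hedged generation example follows; the prompt is adapted from the widget samples above, and the step count and dimensions simply follow the recommended ranges in this card.

```python
# Illustrative usage only: generate an image with the trigger word "Inkk".
prompt = "Inkk, a black and white portrait of a woman's face, stark white background"
image = pipe(
    prompt,
    num_inference_steps=33,  # recommended range is 30-35
    width=1280,
    height=832,              # the "Best" 3:2 dimensions listed above
).images[0]
image.save("inkk_sample.png")
```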
## Trigger words
You should use `Inkk` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/strangerzonehf/Inkk/tree/main) them in the Files & versions tab.
|
nhunglaaaaaaa/f9491ff3-b72e-4a33-aad8-1648e7558d16 | nhunglaaaaaaa | 2025-01-26T05:39:31Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:adapter:TinyLlama/TinyLlama_v1.1",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T05:33:14Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama_v1.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f9491ff3-b72e-4a33-aad8-1648e7558d16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama_v1.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f251bafddc1c416f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f251bafddc1c416f_train_data.json
type:
field_input: item_cast
field_instruction: item_title
field_output: comment
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhunglaaaaaaa/f9491ff3-b72e-4a33-aad8-1648e7558d16
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f251bafddc1c416f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b7c42af7-32e6-4423-bce5-9d6119627078
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b7c42af7-32e6-4423-bce5-9d6119627078
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f9491ff3-b72e-4a33-aad8-1648e7558d16
This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2430
## Model description
More information needed
## Intended uses & limitations
More information needed
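In the absence of documented usage, the following is a minimal, hedged sketch that loads the adapter together with its `TinyLlama/TinyLlama_v1.1` base via PEFT's `AutoPeftModelForCausalLM`; the dtype, device map, and prompt are assumptions.

```python
# Hedged sketch (illustrative only): load base + adapter in one call and generate.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "nhunglaaaaaaa/f9491ff3-b72e-4a33-aad8-1648e7558d16"
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama_v1.1")

inputs = tokenizer("Write a short comment about this movie:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```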
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.0363 | 0.0565 | 200 | 4.2430 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nabeix/lucky_0x01 | nabeix | 2025-01-26T05:38:38Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-26T05:35:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
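Pending author-provided instructions, a minimal sketch assuming standard `transformers` causal-LM usage follows; the dtype and prompt are assumptions.

```python
# Hedged sketch (not an official example).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nabeix/lucky_0x01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```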
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chauhoang/f299a2ca-913b-46a4-b46b-54d651993a9a | chauhoang | 2025-01-26T05:38:08Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | 2025-01-26T03:31:13Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f299a2ca-913b-46a4-b46b-54d651993a9a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c9c324e8cf5586e6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c9c324e8cf5586e6_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/f299a2ca-913b-46a4-b46b-54d651993a9a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/c9c324e8cf5586e6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d35b96a9-b8d1-49c0-b1a8-167bc6103694
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d35b96a9-b8d1-49c0-b1a8-167bc6103694
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f299a2ca-913b-46a4-b46b-54d651993a9a
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1226
## Model description
More information needed
## Intended uses & limitations
More information needed
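As a hedged illustration (no official usage is documented), the snippet below attaches this LoRA to its `Korabbit/llama-2-ko-7b` base and optionally merges the weights for standalone inference; the dtype and prompt are assumptions.

```python
# Hedged sketch: load base + LoRA, then merge the adapter into the base weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Korabbit/llama-2-ko-7b"
adapter_id = "chauhoang/f299a2ca-913b-46a4-b46b-54d651993a9a"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
merged = model.merge_and_unload()  # fold the LoRA deltas into the base for standalone use

inputs = tokenizer("Instruction: Introduce yourself briefly.\nResponse:", return_tensors="pt").to(merged.device)
print(tokenizer.decode(merged.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```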
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 1.5282 |
| 1.5021 | 0.0003 | 10 | 1.3473 |
| 1.1969 | 0.0006 | 20 | 1.1846 |
| 1.0982 | 0.0008 | 30 | 1.1380 |
| 1.1626 | 0.0011 | 40 | 1.1252 |
| 1.0779 | 0.0014 | 50 | 1.1226 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
daniel40/746b383b-cead-4efc-9489-de75d155bdb7 | daniel40 | 2025-01-26T05:37:31Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T05:34:04Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 746b383b-cead-4efc-9489-de75d155bdb7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m-deduped
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 59ebf80954a6130a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/59ebf80954a6130a_train_data.json
type:
field_input: solution_steps
field_instruction: problem
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/746b383b-cead-4efc-9489-de75d155bdb7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/59ebf80954a6130a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f210f656-5c7e-4a29-80ca-643c4317c822
wandb_project: Birthday-SN56-28-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f210f656-5c7e-4a29-80ca-643c4317c822
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 746b383b-cead-4efc-9489-de75d155bdb7
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6598
## Model description
More information needed
## Intended uses & limitations
More information needed
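Because no usage example is provided, here is a minimal hedged sketch; the base model is only 70M parameters, so it can be tried on CPU, and the prompt and generation settings are assumptions.

```python
# Hedged sketch (illustrative only): run the adapter on top of pythia-70m-deduped.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-70m-deduped"
adapter_id = "daniel40/746b383b-cead-4efc-9489-de75d155bdb7"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)

inputs = tokenizer("Problem: What is 2 + 2?\nSolution:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```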
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 15.1112 | 0.0000 | 1 | 3.7280 |
| 18.1683 | 0.0001 | 3 | 3.7261 |
| 15.4085 | 0.0003 | 6 | 3.7123 |
| 13.6085 | 0.0004 | 9 | 3.6598 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ReadyArt/L3.3-Nevoria-R1-70b_EXL2_3.0bpw_H8 | ReadyArt | 2025-01-26T05:36:45Z | 6,227 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:Sao10K/L3.3-70B-Euryale-v2.3",
"base_model:merge:Sao10K/L3.3-70B-Euryale-v2.3",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Anubis-70B-v1",
"base_model:merge:TheDrummer/Anubis-70B-v1",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"base_model:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"base_model:merge:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] | text-generation | 2025-01-26T05:31:37Z | ---
base_model:
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- SicariusSicariiStuff/Negative_LLAMA_70B
- TheDrummer/Anubis-70B-v1
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- deepseek-ai/DeepSeek-R1-Distill-Llama-70B
- Sao10K/L3.3-70B-Euryale-v2.3
library_name: transformers
tags:
- mergekit
- merge
---
<!DOCTYPE html>
<style>
ebody {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #FF69B4 0%, #800080 100%);
color: #FFFFFF;
margin: 0;
padding: 0;
font-size: 16px;
min-height: 100vh;
}
.container {
margin: 20px;
background-color: rgba(28, 14, 36, 0.95);
padding: 20px;
border-radius: 12px;
box-shadow: 0 4px 20px rgba(255, 105, 180, 0.4);
border: 1px solid rgba(255, 105, 180, 0.4);
outline: 1px solid rgba(255, 105, 180, 0.7);
outline-offset: -1px;
position: relative;
backdrop-filter: blur(10px);
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 105, 180, 0.98);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 2s ease-in-out infinite;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(255, 105, 180, 0.98);
}
50% {
box-shadow: 0 0 20px rgba(255, 105, 180, 0.98);
}
100% {
box-shadow: 0 0 5px rgba(255, 105, 180, 0.98);
}
}
.header h1 {
font-size: 28px;
color: #FF69B4;
margin: 0 0 20px 0;
text-shadow: 0 0 15px rgba(255, 105, 180, 0.8);
letter-spacing: 1px;
}
.update-section {
margin-top: 30px;
}
.update-section h2, h2 {
font-size: 24px;
color: #FF69B4;
text-shadow: 0 0 15px rgba(255, 105, 180, 0.8);
letter-spacing: 0.5px;
}
.update-section p {
font-size: 16px;
line-height: 1.6;
color: #FFE1FF;
}
.info p {
color: #FFE1FF;
line-height: 1.6;
font-size: 16px;
}
.info img {
width: 100%;
border-radius: 10px;
margin-bottom: 15px;
box-shadow: 0 0 30px rgba(255, 105, 180, 0.5);
border: 1px solid rgba(255, 105, 180, 0.4);
outline: 1px solid rgba(255, 105, 180, 0.7);
outline-offset: -1px;
transition: transform 0.3s ease, box-shadow 0.3s ease;
}
.info img:hover {
transform: scale(1.01);
box-shadow: 0 0 40px rgba(255, 105, 180, 0.6);
}
a {
color: #00FFEE;
text-decoration: none;
transition: color 0.3s ease;
}
a:hover {
color: #FF1493;
}
.button {
display: inline-block;
background: linear-gradient(45deg, rgba(255, 105, 180, 0.9), rgba(128, 0, 128, 0.9));
color: #FFFFFF;
padding: 12px 24px;
border-radius: 5px;
cursor: pointer;
text-decoration: none;
transition: all 0.3s ease;
border: 1px solid rgba(255, 105, 180, 0.4);
}
.button:hover {
background: linear-gradient(45deg, rgba(255, 105, 180, 1), rgba(128, 0, 128, 1));
box-shadow: 0 0 20px rgba(255, 105, 180, 0.7);
transform: translateY(-2px);
}
pre {
background-color: rgba(28, 14, 36, 0.95);
padding: 15px;
border-radius: 5px;
overflow-x: auto;
border: 1px solid rgba(255, 20, 147, 0.3);
outline: 1px solid rgba(255, 20, 147, 0.6);
outline-offset: -1px;
}
code {
font-family: 'Courier New', monospace;
color: #FFE1FF;
}
.benchmark-container {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 12px;
padding: 20px;
margin: 20px 0;
position: relative;
overflow: hidden;
}
.benchmark-container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 20, 147, 0.98);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 2s ease-in-out infinite;
}
.benchmark-grid {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 15px;
}
.metric-box {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
display: flex;
flex-direction: column;
align-items: center;
text-align: center;
transition: transform 0.3s ease, box-shadow 0.3s ease;
}
.metric-box:hover {
transform: translateY(-2px);
box-shadow: 0 4px 15px rgba(255, 20, 147, 0.3);
}
.metric-box .label {
color: #00FFEE;
font-size: 14px;
margin-bottom: 8px;
font-weight: 500;
}
.metric-box .value {
color: #FFE1FF;
font-size: 18px;
font-weight: 600;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.5);
}
.metrics-section {
margin-bottom: 30px;
}
.metrics-section details {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
margin-bottom: 15px;
}
.metrics-section summary {
color: #FF1493;
font-size: 20px;
cursor: pointer;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
outline: none;
padding: 5px 0;
}
.metrics-section summary::-webkit-details-marker {
display: none;
}
.core-metrics-grid {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 15px;
margin-bottom: 20px;
}
.progress-metrics {
display: grid;
gap: 15px;
}
.progress-metric {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
transition: transform 0.3s ease;
}
.progress-metric:hover {
transform: translateY(-2px);
}
.progress-label {
display: flex;
justify-content: space-between;
margin-bottom: 8px;
color: #00FFEE;
font-size: 14px;
}
.progress-value {
color: #FFE1FF;
}
.progress-bar {
width: 100%;
height: 8px;
background: rgba(0, 0, 0, 0.3);
border: 1px solid rgba(255, 20, 147, 0.15);
border-radius: 4px;
position: relative;
margin: 10px 0;
overflow: hidden;
}
.progress-fill {
height: 100%;
background: linear-gradient(90deg, #FF69B4 0%, #800080 100%);
border-radius: 4px;
transition: width 1s ease-in-out;
box-shadow: 0 0 15px rgba(255, 105, 180, 0.4);
}
.progress-bar.split {
display: flex;
justify-content: center;
background: rgba(0, 0, 0, 0.3);
border: 1px solid rgba(255, 20, 147, 0.15);
overflow: visible;
}
.progress-fill-left {
height: 100%;
position: absolute;
right: 50%;
background: linear-gradient(90deg, #FF69B4 30%, rgba(255, 105, 180, 0.5) 100%);
border-radius: 4px 0 0 4px;
transition: width 0.3s ease-in-out;
}
.progress-fill-right {
height: 100%;
position: absolute;
left: 50%;
background: linear-gradient(90deg, rgba(128, 0, 128, 0.5) 0%, #800080 70%);
border-radius: 0 4px 4px 0;
transition: width 0.3s ease-in-out;
}
.progress-metric.split .progress-bar::before,
.progress-metric.split .progress-bar::after {
content: '';
position: absolute;
width: 2px;
height: 20px;
background: rgba(255, 255, 255, 0.7);
top: 50%;
transform: translateY(-50%);
z-index: 2;
box-shadow: 0 0 8px rgba(255, 255, 255, 0.5);
}
.progress-metric.split .progress-bar::before {
left: 0;
}
.progress-metric.split .progress-bar::after {
right: 0;
}
.progress-metric.split:hover .progress-fill-left {
box-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
}
.progress-metric.split:hover .progress-fill-right {
box-shadow: 0 0 15px rgba(75, 0, 130, 0.5);
}
.progress-metric.split {
padding: 12px 15px;
}
.progress-metric.split .progress-label {
margin-bottom: 8px;
gap: 12px;
}
.progress-metric.split .progress-label span:first-child,
.progress-metric.split .progress-label span:last-child {
flex: 0 0 80px;
font-size: 14px;
}
.progress-metric.split .progress-value {
font-weight: 600;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
font-size: 14px;
min-width: 60px;
text-align: center;
}
.progress-metric:hover .progress-fill-center {
box-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
}
.progress-label {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 4px;
color: #00FFEE;
font-size: 14px;
}
.progress-metric:not(.split) .progress-label {
gap: 12px;
}
.progress-metric:not(.split) .progress-label span {
flex: 0 0 auto;
}
.progress-metric:not(.split) .progress-value {
color: #FFE1FF;
}
.progress-metric.split .progress-label {
gap: 20px;
}
.progress-metric.split .progress-label span:first-child,
.progress-metric.split .progress-label span:last-child {
flex: 0 0 80px;
}
.progress-metric.split .progress-label span:first-child {
text-align: right;
}
.progress-metric.split .progress-label span:last-child {
text-align: left;
}
.progress-metric.split .progress-value {
color: #FFE1FF;
flex: 0 0 60px;
text-align: center;
}
.progress-metric:hover .progress-fill {
box-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
}
.progress-metric:hover .progress-fill-center {
box-shadow: 0 0 15px rgba(75, 0, 130, 0.5);
}
.info-grid {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 15px;
}
.creator-section {
margin: 20px 0;
}
.creator-badge {
display: inline-flex;
align-items: center;
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 10px 15px;
}
.creator-label {
color: #FFE1FF;
font-size: 14px;
margin-right: 8px;
}
.creator-link {
display: flex;
align-items: center;
gap: 5px;
color: #00FFEE;
text-decoration: none;
transition: all 0.3s ease;
}
.creator-name {
font-weight: 600;
}
.creator-arrow {
font-size: 16px;
transition: transform 0.3s ease;
}
.creator-link:hover {
color: #FF1493;
}
.creator-link:hover .creator-arrow {
transform: translateX(3px);
}
.model-info {
margin-top: 30px;
}
.name-legend {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 20px;
margin: 20px 0;
}
.name-legend h3 {
color: #FF1493;
font-size: 18px;
margin: 0 0 15px 0;
}
.legend-grid {
display: grid;
gap: 12px;
}
.legend-item {
display: flex;
align-items: baseline;
gap: 10px;
}
.legend-key {
color: #00FFEE;
font-weight: 600;
min-width: 80px;
}
.legend-value {
color: #FFE1FF;
}
.model-description {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 20px;
}
.model-description p {
margin: 0 0 15px 0;
line-height: 1.6;
}
.model-description p:last-child {
margin-bottom: 0;
}
.section-container {
margin: 40px 0;
}
.info-card {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
overflow: hidden;
}
.info-header {
background: rgba(255, 20, 147, 0.1);
padding: 20px;
border-bottom: 1px solid rgba(255, 20, 147, 0.3);
}
.info-header h3 {
color: #FF1493;
margin: 0 0 10px 0;
font-size: 20px;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
}
.model-tags {
display: flex;
gap: 8px;
flex-wrap: wrap;
}
.model-tag {
background: rgba(0, 255, 238, 0.1);
color: #00FFEE;
padding: 4px 8px;
border-radius: 4px;
font-size: 12px;
border: 1px solid rgba(0, 255, 238, 0.2);
}
.model-composition {
padding: 20px;
border-bottom: 1px solid rgba(255, 20, 147, 0.3);
}
.model-composition h4 {
color: #FF1493;
margin: 0 0 15px 0;
font-size: 16px;
}
.composition-list {
list-style: none;
padding: 0;
margin: 0;
display: grid;
gap: 10px;
}
.composition-list li {
color: #FFE1FF;
display: flex;
align-items: baseline;
gap: 8px;
}
.model-component {
color: #00FFEE;
font-weight: 500;
min-width: 120px;
}
.template-card {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
}
.template-item {
display: flex;
align-items: center;
gap: 12px;
}
.template-icon {
width: 24px;
height: 24px;
opacity: 0.8;
}
.template-content {
display: flex;
align-items: baseline;
gap: 8px;
}
.template-link {
color: #00FFEE;
text-decoration: none;
font-weight: 500;
display: flex;
align-items: center;
gap: 5px;
}
.template-author {
color: rgba(255, 225, 255, 0.7);
font-size: 14px;
}
.quantized-container {
display: grid;
gap: 20px;
}
.quantized-section {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 20px;
}
.quantized-section h3 {
color: #FF1493;
font-size: 18px;
margin: 0 0 15px 0;
}
.quantized-items {
display: grid;
gap: 12px;
}
.quantized-item {
display: flex;
align-items: baseline;
gap: 10px;
}
.quantized-item .author {
color: rgba(255, 225, 255, 0.7);
min-width: 100px;
}
.multi-links {
display: flex;
align-items: center;
gap: 8px;
}
.separator {
color: rgba(255, 225, 255, 0.5);
}
.config-container {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
overflow: hidden;
}
.config-header {
background: rgba(255, 20, 147, 0.1);
padding: 15px 20px;
border-bottom: 1px solid rgba(255, 20, 147, 0.3);
}
.model-name {
color: #FF1493;
font-weight: 600;
}
.config-content {
padding: 20px;
}
.config-item {
display: flex;
flex-direction: column;
gap: 5px;
margin-bottom: 15px;
}
.config-label {
color: #00FFEE;
font-size: 14px;
font-weight: 500;
}
.config-value {
color: #FFE1FF;
font-family: 'Courier New', monospace;
}
.config-models {
margin-top: 20px;
}
.model-list {
list-style: none;
padding: 0;
margin: 10px 0 0 0;
}
.model-list li {
color: #FFE1FF;
font-family: 'Courier New', monospace;
padding: 5px 0;
padding-left: 20px;
position: relative;
}
.model-list li::before {
content: '-';
position: absolute;
left: 0;
color: #00FFEE;
}
.link-arrow {
display: inline-block;
transition: transform 0.3s ease;
}
a:hover .link-arrow {
transform: translateX(3px);
}
.benchmark-notification {
background: rgba(255, 20, 147, 0.15);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
margin-bottom: 20px;
padding: 12px;
animation: glowPulse 2s infinite;
}
.notification-content {
display: flex;
align-items: center;
justify-content: center;
gap: 10px;
text-align: center;
}
.notification-icon {
font-size: 20px;
}
.notification-text {
color: #FFE1FF;
font-size: 16px;
font-weight: 500;
display: flex;
flex-direction: column;
align-items: center;
gap: 5px;
}
.benchmark-link {
color: #00FFEE;
text-decoration: none;
font-size: 14px;
padding: 4px 8px;
border-radius: 4px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 238, 0.3);
}
.benchmark-link:hover {
background: rgba(0, 255, 238, 0.1);
border-color: rgba(0, 255, 238, 0.5);
color: #00FFEE;
text-shadow: 0 0 5px rgba(0, 255, 238, 0.5);
}
@keyframes glowPulse {
0% {
box-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
}
50% {
box-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
}
}
.review-card {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
margin-bottom: 15px;
}
.review-card:last-child {
margin-bottom: 0;
}
</style>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>L3.3-Nevoria-R1-70b</title>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
<link href="styles.css" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="header">
<h1>L3.3-Nevoria-R1-70b</h1>
</div>
<div class="info">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/_oWpsvCZ-graNKzJBBjGo.jpeg" alt="Model banner">
<div class="creator-section">
<div class="creator-badge">
<span class="creator-label">Created by</span>
<a href="https://huggingface.co/Steelskull" target="_blank" class="creator-link">
<span class="creator-name">SteelSkull</span>
<span class="creator-arrow">→</span>
</a>
</div>
</div>
<div class="model-info">
<h2>Model Information</h2>
<div class="info-card">
<div class="info-header">
<h3>L3.3-Nevoria-R1-70b</h3>
<div class="model-tags">
<span class="model-tag">L3.3 = Llama 3.3</span>
<span class="model-tag">R1 = DeepSeek-R1</span>
<span class="model-tag">70b Parameters</span>
</div>
</div>
<div class="model-composition">
<h4>Model Composition</h4>
<ul class="composition-list">
<li><span class="model-component"><a href="https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1" target="_blank">EVA-LLAMA-0.1</a></span> Storytelling capabilities</li>
<li><span class="model-component"><a href="https://huggingface.co/Sao10K/L3.3-70B-Euryale-v2.3" target="_blank">EURYALE-v2.3</a></span> Detailed scene descriptions</li>
<li><span class="model-component"><a href="https://huggingface.co/TheDrummer/Anubis-70B-v1" target="_blank">Anubis-v1</a></span> Enhanced prose details</li>
<li><span class="model-component"><a href="https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B" target="_blank">Negative_LLAMA</a></span> Reduced positive bias</li>
<li><span class="model-component"><a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B" target="_blank">DeepSeek-R1-Distill-Llama-70B</a></span> Increased Intelligence / Dialog / Awareness</li>
<li><span class="model-component base-model"><a href="https://huggingface.co/nbeerbower/Llama-3.1-Nemotron-lorablated-70B" target="_blank">Nemotron-lorablated</a></span> Base model</li>
</ul>
</div>
<div class="model-description">
<p>This model builds upon the original Nevoria foundation, incorporating the Deepseek-R1 reasoning architecture to enhance dialogue interaction and scene comprehension. While maintaining Nevoria's core strengths in storytelling and scene description (derived from EVA, EURYALE, and Anubis), this iteration aims to improve prompt adherence and creative reasoning capabilities. The model also retains the balanced perspective introduced by the Negative_LLAMA and Nemotron elements. In addition, the model plays the character card almost to a fault: it will pick up on minor issues and attempt to run with them. Users have had it call them out, in character, for misspelling a word.</p>
<p>Note: While Nevoria-R1 represents a significant architectural change, rather than a direct successor to Nevoria, it operates as a distinct model with its own characteristics.</p>
<p>The lorablated model base choice was intentional, creating unique weight interactions similar to the original <a href="https://huggingface.co/Steelskull/L3-MS-Astoria-70b" target="_blank">Astoria model</a> and <a href="https://huggingface.co/Steelskull/L3.1-MS-Astoria-70b-v2" target="_blank">Astoria V2 model</a>. This "weight twisting" effect, achieved by subtracting the lorablated base model during merging, creates an interesting balance in the model's behavior. While unconventional compared to sequential component application, this approach was chosen for its unique response characteristics.</p>
</div>
</div>
<!--<div class="metrics-section">
<details open>
<summary>User Reviews</summary>
<div class="progress-metrics">
<div>
<div class="review-card">
<div>
<span>@Geechan - Discord</span>
</div>
<p>@Steel Have only briefly tested so far, but you really cooked up an amazing merge with this one, and I mean that wholeheartedly. Insane creativity, perfect character adherence and dialogue, loves to slow burn and take its time, minimal sloppy patterns and writing, and such a breath of fresh air in many ways. I'm enjoying my results with 1 temp and 0.99 TFS (close to something like 0.015 min P). Letting the model be creative and wild is so fun and makes me want to RP more.<br><br>No positivity bias either; violent scenes will result in my death and/or suffering, as they should, and I don't see any soft refusals either. ERP has no skimming of details or refusals like you see on some other L3.3 tunes too</p>
</div>
<div class="review-card">
<div>
<span>IGODZOL - Huggingface</span>
</div>
<p>I honestly have no idea why (maybe the negative llama is having that great of an influence) but this merge is miles above the individual tunes that went into making it. Good sir, this model has just become my daily driver. Chapeau bas</p>
</div>
<div class="review-card">
<div>
<span>@thana_alt - Discord</span>
</div>
<p>I'm thoroughly impressed by this merge of Llama 3.3. It successfully addresses the positivity bias prevalent in the base Llama model, ensuring a more accurate and balanced response. The adherence to system prompts is also notable, with the model demonstrating a keen understanding of context and instruction.<br><br>The prose generated by this model is truly exceptional - it's almost as if a skilled chef has carefully crafted each sentence to create a rich and immersive experience. I put this to the test in an adventure scenario, where I had about 10,000 tokens of lorebooks and was managing nine characters simultaneously. Despite the complexity, the model performed flawlessly, keeping track of each character's location and activity without any confusion - even when they were in different locations.<br><br>I also experimented with an astral projection type of power, and was impressed to see that the model accurately discerned that I wasn't physically present in a particular location. Another significant advantage of this model is the lack of impersonation issues, allowing for seamless role-playing and storytelling.<br><br>The capacity of this model is equally impressive, as I was able to load up to 110,000 tokens without encountering any issues. In fact, I successfully tested it with up to 70,000 tokens without experiencing any breakdown or degradation in performance.<br><br>When combined with the "The Inception Presets - Methception Llamaception Qwenception" prompt preset from https://huggingface.co/Konnect1221/ , this model truly shines, bringing out the best in the Llama 3.3 architecture. Overall, I'm extremely satisfied with this merge and would highly recommend it to anyone looking to elevate their storytelling and role-playing experiences.</p>
</div>
</div>
</div>
</details>
</div>-->
</div>
<!-- UGI-Benchmark Results (Temporarily Hidden)
<h2>UGI-Benchmark Results:</h2>
<div class="benchmark-container">
<div class="benchmark-notification">
<div class="notification-content">
<span class="notification-icon">🏆</span>
<span class="notification-text">
Highest ranked 70b as of 01/17/2025.
<a href="https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard" target="_blank" class="benchmark-link">
View Full Leaderboard →
</a>
</span>
</div>
</div>
<div class="metrics-section">
<h3>Core Metrics</h3>
<div class="core-metrics-grid">
<div class="metric-box">
<span class="label">UGI Score</span>
<span class="value">56.75</span>
</div>
<div class="metric-box">
<span class="label">Willingness Score</span>
<span class="value">7.5/10</span>
</div>
<div class="metric-box">
<span class="label">Natural Intelligence</span>
<span class="value">41.09</span>
</div>
<div class="metric-box">
<span class="label">Coding Ability</span>
<span class="value">20</span>
</div>
</div>
</div>
<div class="metrics-section">
<h3>Model Information</h3>
<div class="info-grid">
<div class="metric-box">
<span class="label">Political Lean</span>
<span class="value">-8.1%</span>
</div>
<div class="metric-box">
<span class="label">Ideology</span>
<span class="value">Liberalism</span>
</div>
<div class="metric-box">
<span class="label">Parameters</span>
<span class="value">70B</span>
</div>
</div>
</div>
<div class="metrics-section">
<details>
<summary>Aggregated Scores</summary>
<div class="progress-metrics">
<div class="progress-metric">
<div class="progress-label">
<span>Diplomacy</span>
<span class="progress-value">61.9%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 61.9%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>Government</span>
<span class="progress-value">45.9%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 45.9%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>Economy</span>
<span class="progress-value">43.9%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 43.9%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>Society</span>
<span class="progress-value">60.1%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 60.1%"></div>
</div>
</div>
</div>
</details>
</div>
<div class="metrics-section">
<details>
<summary>Individual Scores</summary>
<div class="progress-metrics">
<div class="progress-metric split">
<div class="progress-label">
<span>Federal</span>
<span class="progress-value">44.2%</span>
<span>Unitary</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 22.1%"></div>
<div class="progress-fill-right" style="width: 27.9%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Democratic</span>
<span class="progress-value">66.2%</span>
<span>Autocratic</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 33.1%"></div>
<div class="progress-fill-right" style="width: 16.9%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Security</span>
<span class="progress-value">48.1%</span>
<span>Freedom</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 24.05%"></div>
<div class="progress-fill-right" style="width: 25.95%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Nationalism</span>
<span class="progress-value">40.4%</span>
<span>Int'l</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 20.2%"></div>
<div class="progress-fill-right" style="width: 29.8%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Militarist</span>
<span class="progress-value">30.4%</span>
<span>Pacifist</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 15.2%"></div>
<div class="progress-fill-right" style="width: 34.8%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Assimilationist</span>
<span class="progress-value">43.3%</span>
<span>Multiculturalist</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 21.65%"></div>
<div class="progress-fill-right" style="width: 28.35%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Collectivize</span>
<span class="progress-value">43.8%</span>
<span>Privatize</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 21.9%"></div>
<div class="progress-fill-right" style="width: 28.1%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Planned</span>
<span class="progress-value">43.1%</span>
<span>LaissezFaire</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 21.55%"></div>
<div class="progress-fill-right" style="width: 28.45%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Isolationism</span>
<span class="progress-value">44.8%</span>
<span>Globalism</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 22.4%"></div>
<div class="progress-fill-right" style="width: 27.6%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Irreligious</span>
<span class="progress-value">55.4%</span>
<span>Religious</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 27.7%"></div>
<div class="progress-fill-right" style="width: 22.3%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Progressive</span>
<span class="progress-value">59.6%</span>
<span>Traditional</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 29.8%"></div>
<div class="progress-fill-right" style="width: 20.2%"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Acceleration</span>
<span class="progress-value">65.2%</span>
<span>Bioconservative</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="width: 32.6%"></div>
<div class="progress-fill-right" style="width: 17.4%"></div>
</div>
</div>
</div>
</details>
</div>
</div>
-->
<!-- Open LLM-Benchmark Results (Temporarily Hidden)
<h2>Open LLM-Benchmark Results:</h2>
<div class="benchmark-container">
<div class="benchmark-notification">
<div class="notification-content">
<span class="notification-text">
Average Score: 43.92%
<a href="https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?rankingMode=dynamic" target="_blank" class="benchmark-link">
View Full Leaderboard →
</a>
</span>
</div>
</div>
<div class="progress-metrics">
<div class="progress-metric">
<div class="progress-label">
<span>IFEval</span>
<span class="progress-value">69.63%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 69.63%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>BBH</span>
<span class="progress-value">56.60%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 56.60%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>MATH</span>
<span class="progress-value">38.82%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 38.82%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>GPQA</span>
<span class="progress-value">29.42%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 29.42%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>MUSR</span>
<span class="progress-value">18.63%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 18.63%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>MMLU-Pro</span>
<span class="progress-value">50.39%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 50.39%"></div>
</div>
</div>
</div>
</div>
-->
<div class="section-container">
    <h2>Recommended Templates & Prompts</h2>
<div class="template-card">
<div class="template-item">
<div class="template-content">
<a href="https://huggingface.co/Konnect1221/Methception-Llamaception-SillyTavern-Preset" target="_blank" class="template-link">
LLam@ception
<span class="link-arrow">→</span>
</a>
<span class="template-author">by @.konnect</span>
</div>
</div>
</div>
</div>
<div class="section-container">
<h2>Quantized Versions</h2>
<div class="quantized-container">
<div class="quantized-section">
<h3>GGUF Quantizations</h3>
<div class="quantized-items">
<!--<div class="quantized-item">
<span class="author">bartowski</span>
<a href="https://huggingface.co/bartowski/L3.3-Exp-Nevoria-R1-70b-GGUF" target="_blank">
Combined-GGUF
<span class="link-arrow">→</span>
</a>
</div>-->
<div class="quantized-item">
<span class="author">mradermacher</span>
<div class="multi-links">
<a href="https://huggingface.co/mradermacher/L3.3-Exp-Nevoria-R1-70b-GGUF" target="_blank">
GGUF
<span class="link-arrow">→</span>
</a>
<span class="separator">//</span>
<a href="https://huggingface.co/mradermacher/L3.3-Exp-Nevoria-R1-70b-i1-GGUF" target="_blank">
Imat-GGUF
<span class="link-arrow">→</span>
</a>
</div>
</div>
</div>
</div>
<div class="quantized-section">
<h3>EXL2 Quantizations</h3>
<div class="quantized-items">
<div class="quantized-item">
<span class="author">Darkhn</span>
<a href="https://huggingface.co/Darkhn/Steelskull_L3.3-Exp-Nevoria-R1-70b-6.0bpw-h8-exl2" target="_blank">
6.0BPW-EXL2
<span class="link-arrow">→</span>
</a>
</div>
</div>
</div>
</div>
</div>
<div class="support-section">
<h2>Support the Project:</h2>
<a href="https://ko-fi.com/Y8Y0AO2XE" target="_blank" class="button">
Support on Ko-fi
</a>
</div>
</div>
</div>
</body>
</html>
|
tarabukinivan/4ec47c67-4358-4f52-a142-1d6f35c3ec00 | tarabukinivan | 2025-01-26T05:35:21Z | 6 | 0 | peft | [
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T05:11:48Z | ---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloomz-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4ec47c67-4358-4f52-a142-1d6f35c3ec00
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloomz-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- eb75b6ffdc77ea4d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/eb75b6ffdc77ea4d_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: tarabukinivan/4ec47c67-4358-4f52-a142-1d6f35c3ec00
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/eb75b6ffdc77ea4d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 15
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 888b9795-bd3d-4c1e-9289-4c99ad92b728
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 888b9795-bd3d-4c1e-9289-4c99ad92b728
warmup_steps: 15
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4ec47c67-4358-4f52-a142-1d6f35c3ec00
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1234
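The card does not include usage instructions, so the following is only a minimal sketch of how a LoRA adapter produced by a config like the one above is typically loaded with `peft` and `transformers`; the prompt and generation settings are illustrative assumptions, not part of this card.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigscience/bloomz-560m"  # base model named in the config above
adapter_id = "tarabukinivan/4ec47c67-4358-4f52-a142-1d6f35c3ec00"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Give three uses for a paperclip.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```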
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.6177 |
| 11.15 | 0.0004 | 5 | 2.5755 |
| 9.6405 | 0.0008 | 10 | 2.4382 |
| 9.4943 | 0.0012 | 15 | 2.2658 |
| 9.1467 | 0.0016 | 20 | 2.1733 |
| 9.4334 | 0.0020 | 25 | 2.1324 |
| 7.9504 | 0.0024 | 30 | 2.1234 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aleegis10/8ed9fe91-c2b5-42ca-8efb-927b4c8fbf45 | aleegis10 | 2025-01-26T05:33:31Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-01-26T01:51:05Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8ed9fe91-c2b5-42ca-8efb-927b4c8fbf45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- c9e5168aaf615a7c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c9e5168aaf615a7c_train_data.json
type:
field_instruction: problem
field_output: target_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis10/8ed9fe91-c2b5-42ca-8efb-927b4c8fbf45
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/c9e5168aaf615a7c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c81d855a-5c46-46dc-bab6-9f15fcbfa230
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c81d855a-5c46-46dc-bab6-9f15fcbfa230
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8ed9fe91-c2b5-42ca-8efb-927b4c8fbf45
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.1393 | 0.0001 | 1 | 8.7350 |
| 1.3381 | 0.0027 | 50 | 1.0531 |
| 1.0026 | 0.0053 | 100 | 0.5982 |
| 0.5522 | 0.0080 | 150 | 0.1816 |
| 0.238 | 0.0107 | 200 | 0.0841 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
great0001/b3c547c0-65a1-4b34-9ff7-cb605ef4e576 | great0001 | 2025-01-26T05:25:03Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct",
"base_model:adapter:aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct",
"license:llama3",
"region:us"
] | null | 2025-01-26T05:21:30Z | ---
library_name: peft
license: llama3
base_model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b3c547c0-65a1-4b34-9ff7-cb605ef4e576
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 90ff401367e42c67_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/90ff401367e42c67_train_data.json
type:
field_instruction: prompt
field_output: y_true
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/b3c547c0-65a1-4b34-9ff7-cb605ef4e576
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/90ff401367e42c67_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: af8ec97d-9490-4745-9a2d-3693291921a2
wandb_project: Mine-SN56-20-Gradients-On-Demand
wandb_run: your_name
wandb_runid: af8ec97d-9490-4745-9a2d-3693291921a2
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b3c547c0-65a1-4b34-9ff7-cb605ef4e576
This model is a fine-tuned version of [aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct](https://huggingface.co/aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0003 | 1 | nan |
| 0.0 | 0.0009 | 3 | nan |
| 0.0 | 0.0017 | 6 | nan |
| 0.0 | 0.0026 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
yfarm01/sn29_jan26_c1 | yfarm01 | 2025-01-26T05:24:37Z | 38 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-26T05:18:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
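Since the card leaves this section blank, here is a minimal sketch that assumes the standard `transformers` text-generation pipeline works for this checkpoint (the repository tags list `phi3` and `text-generation`); the prompt and generation length are arbitrary.
```python
from transformers import pipeline

# Repository id taken from this card; all other settings are assumptions.
generator = pipeline("text-generation", model="yfarm01/sn29_jan26_c1")
result = generator("The quick brown fox", max_new_tokens=32)
print(result[0]["generated_text"])
```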
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gavrilstep/0ae222fe-9a6e-42b6-a140-4981d2315b4c | gavrilstep | 2025-01-26T05:23:49Z | 6 | 0 | peft | [
"peft",
"safetensors",
"olmo",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-olmo-hf",
"base_model:adapter:katuni4ka/tiny-random-olmo-hf",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T05:22:02Z | ---
library_name: peft
base_model: katuni4ka/tiny-random-olmo-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0ae222fe-9a6e-42b6-a140-4981d2315b4c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-olmo-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 558c519d44160381_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/558c519d44160381_train_data.json
type:
field_instruction: question
field_output: answers
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: gavrilstep/0ae222fe-9a6e-42b6-a140-4981d2315b4c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/558c519d44160381_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e2176481-25d5-4e19-9520-315ccb160b4d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e2176481-25d5-4e19-9520-315ccb160b4d
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0ae222fe-9a6e-42b6-a140-4981d2315b4c
This model is a fine-tuned version of [katuni4ka/tiny-random-olmo-hf](https://huggingface.co/katuni4ka/tiny-random-olmo-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.8124
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 10.8379 |
| 10.8326 | 0.0006 | 5 | 10.8365 |
| 10.8302 | 0.0012 | 10 | 10.8314 |
| 10.8233 | 0.0018 | 15 | 10.8235 |
| 10.8181 | 0.0024 | 20 | 10.8167 |
| 10.8121 | 0.0030 | 25 | 10.8131 |
| 10.8134 | 0.0036 | 30 | 10.8124 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hongngo/2a46d83a-7ce6-4b32-a2d7-d5ebcbbc8ee5 | hongngo | 2025-01-26T05:23:33Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T05:05:26Z | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2a46d83a-7ce6-4b32-a2d7-d5ebcbbc8ee5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6b30f33bbd9cba22_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6b30f33bbd9cba22_train_data.json
type:
field_input: reasoning
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/2a46d83a-7ce6-4b32-a2d7-d5ebcbbc8ee5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6b30f33bbd9cba22_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66ffa688-b6ab-4800-bb73-500be3c51df8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66ffa688-b6ab-4800-bb73-500be3c51df8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2a46d83a-7ce6-4b32-a2d7-d5ebcbbc8ee5
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0292
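No usage example is provided; since the config above trains the adapter with 8-bit loading (`load_in_8bit: true`), a plausible inference setup is sketched below using `bitsandbytes` quantization. It assumes a CUDA GPU and is not an official recipe from this card; the prompt is an illustrative placeholder.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "defog/sqlcoder-7b-2"  # base model from the config above
adapter_id = "hongngo/2a46d83a-7ce6-4b32-a2d7-d5ebcbbc8ee5"  # this repository

bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # mirrors load_in_8bit in the training config
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

prompt = "How many orders were placed in 2023?"  # illustrative question only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```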
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0128 | 0.1938 | 200 | 0.0292 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrferr3t/f34575b0-3b2d-40b4-b5ff-6bca3085df2b | mrferr3t | 2025-01-26T05:23:30Z | 6 | 0 | peft | [
"peft",
"safetensors",
"olmo",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-olmo-hf",
"base_model:adapter:katuni4ka/tiny-random-olmo-hf",
"region:us"
] | null | 2025-01-26T05:22:50Z | ---
library_name: peft
base_model: katuni4ka/tiny-random-olmo-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f34575b0-3b2d-40b4-b5ff-6bca3085df2b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-olmo-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 558c519d44160381_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/558c519d44160381_train_data.json
type:
field_instruction: question
field_output: answers
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/f34575b0-3b2d-40b4-b5ff-6bca3085df2b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/558c519d44160381_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e2176481-25d5-4e19-9520-315ccb160b4d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e2176481-25d5-4e19-9520-315ccb160b4d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f34575b0-3b2d-40b4-b5ff-6bca3085df2b
This model is a fine-tuned version of [katuni4ka/tiny-random-olmo-hf](https://huggingface.co/katuni4ka/tiny-random-olmo-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.8372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.8278 | 0.0001 | 1 | 10.8396 |
| 10.8379 | 0.0004 | 3 | 10.8395 |
| 10.8364 | 0.0007 | 6 | 10.8386 |
| 10.8355 | 0.0011 | 9 | 10.8372 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thangla01/5e2223e4-ee59-46f7-870f-9b5a963a98cc | thangla01 | 2025-01-26T05:23:14Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T05:05:21Z | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5e2223e4-ee59-46f7-870f-9b5a963a98cc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6b30f33bbd9cba22_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6b30f33bbd9cba22_train_data.json
type:
field_input: reasoning
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thangla01/5e2223e4-ee59-46f7-870f-9b5a963a98cc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6b30f33bbd9cba22_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66ffa688-b6ab-4800-bb73-500be3c51df8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66ffa688-b6ab-4800-bb73-500be3c51df8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5e2223e4-ee59-46f7-870f-9b5a963a98cc
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0129 | 0.1938 | 200 | 0.0294 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
poooj/MuRILHateSpeechClassification | poooj | 2025-01-26T05:22:13Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google/muril-base-cased",
"base_model:finetune:google/muril-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-26T05:05:09Z | ---
library_name: transformers
license: apache-2.0
base_model: google/muril-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: MuRILHateSpeechClassification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MuRILHateSpeechClassification
This model is a fine-tuned version of [google/muril-base-cased](https://huggingface.co/google/muril-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7371
- Accuracy: 0.8407
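The card does not show how to run the classifier, so here is a minimal sketch using the standard `transformers` sequence-classification API. The label-id to class-name mapping is not documented in this card, so only the raw id is printed, and the example sentence is an arbitrary placeholder.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "poooj/MuRILHateSpeechClassification"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "Example sentence to classify"  # placeholder input
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class id; label mapping is not given here
```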
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5519 | 1.0 | 1137 | 0.4704 | 0.8110 |
| 0.4345 | 2.0 | 2274 | 0.4862 | 0.8198 |
| 0.3547 | 3.0 | 3411 | 0.4660 | 0.8473 |
| 0.2919 | 4.0 | 4548 | 0.6066 | 0.8440 |
| 0.2205 | 5.0 | 5685 | 0.6805 | 0.8429 |
| 0.1759 | 6.0 | 6822 | 0.7371 | 0.8407 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
myhaaaaaaa/8e996502-1c66-4215-9c96-d9251f52de11 | myhaaaaaaa | 2025-01-26T05:21:47Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T05:05:22Z | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8e996502-1c66-4215-9c96-d9251f52de11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6b30f33bbd9cba22_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6b30f33bbd9cba22_train_data.json
type:
field_input: reasoning
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: myhaaaaaaa/8e996502-1c66-4215-9c96-d9251f52de11
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6b30f33bbd9cba22_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66ffa688-b6ab-4800-bb73-500be3c51df8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66ffa688-b6ab-4800-bb73-500be3c51df8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8e996502-1c66-4215-9c96-d9251f52de11
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0293
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0125 | 0.1938 | 200 | 0.0293 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Theros/L3-ColdBrew-R1-test1 | Theros | 2025-01-26T05:21:08Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Theros/L3-ColdBrew-Daybreak",
"base_model:merge:Theros/L3-ColdBrew-Daybreak",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-26T05:18:06Z | ---
base_model:
- Theros/L3-ColdBrew-Daybreak
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Theros/L3-ColdBrew-Daybreak](https://huggingface.co/Theros/L3-ColdBrew-Daybreak) as the base model.
### Models Merged
The following models were included in the merge:
* [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Theros/L3-ColdBrew-Daybreak
parameters:
density: 0.4
weight: 0.4
- model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
parameters:
density: 0.6
weight: 0.6
merge_method: dare_ties
base_model: Theros/L3-ColdBrew-Daybreak
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
|
athul8129/Llama_tuned_bot | athul8129 | 2025-01-26T05:20:48Z | 26 | 0 | null | [
"pytorch",
"llama",
"unsloth",
"trl",
"sft",
"license:mit",
"region:us"
] | null | 2025-01-26T05:15:37Z | ---
license: mit
tags:
- unsloth
- trl
- sft
---
|
trenden/5e31c4dc-0485-421d-b4d2-f3b56c011a3c | trenden | 2025-01-26T05:20:31Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"region:us"
] | null | 2025-01-26T05:19:26Z | ---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5e31c4dc-0485-421d-b4d2-f3b56c011a3c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b355c3ff95258244_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b355c3ff95258244_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/5e31c4dc-0485-421d-b4d2-f3b56c011a3c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b355c3ff95258244_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22487750-366e-41ca-8395-d8629638fd03
wandb_project: Birthday-SN56-3-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22487750-366e-41ca-8395-d8629638fd03
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5e31c4dc-0485-421d-b4d2-f3b56c011a3c
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5738 | 0.0028 | 1 | 1.4206 |
| 1.4099 | 0.0085 | 3 | 1.3866 |
| 1.209 | 0.0170 | 6 | 0.7931 |
| 0.3992 | 0.0255 | 9 | 0.2272 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B-Q5_K_M-GGUF | jaspionjader | 2025-01-26T05:13:05Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B",
"base_model:quantized:jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-26T03:37:17Z | ---
base_model: jaspionjader/sof-15
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# jaspionjader/sof-15-Q5_K_M-GGUF
This model was converted to GGUF format from [`jaspionjader/sof-15`](https://huggingface.co/jaspionjader/sof-15) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jaspionjader/sof-15) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jaspionjader/sof-15-Q5_K_M-GGUF --hf-file sof-15-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jaspionjader/sof-15-Q5_K_M-GGUF --hf-file sof-15-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jaspionjader/sof-15-Q5_K_M-GGUF --hf-file sof-15-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jaspionjader/sof-15-Q5_K_M-GGUF --hf-file sof-15-q5_k_m.gguf -c 2048
```
|
visdata/po9 | visdata | 2025-01-26T05:12:29Z | 36 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-26T05:06:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
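This section is empty in the card; a minimal sketch follows, assuming the checkpoint loads with the standard causal-LM classes (the repository tags list `llama` and `text-generation`). The prompt, dtype, and generation settings are assumptions.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "visdata/po9"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Write a haiku about the sea.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```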
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Distilled-Whiskey-8b-i1-GGUF | mradermacher | 2025-01-26T05:11:43Z | 548 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Triangle104/Distilled-Whiskey-8b",
"base_model:quantized:Triangle104/Distilled-Whiskey-8b",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-26T04:23:50Z | ---
base_model: Triangle104/Distilled-Whiskey-8b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Triangle104/Distilled-Whiskey-8b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Distilled-Whiskey-8b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
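
As a concrete (unofficial) starting point, the sketch below downloads a single imatrix quant from this repository and runs it with `llama-cpp-python`; the chosen quant file and generation settings are illustrative assumptions, not recommendations from this card:

```python
# Hedged sketch: fetch one quant file from this repo and run it locally.
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download only the i1-Q4_K_M quant listed in the table below (assumed choice).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Distilled-Whiskey-8b-i1-GGUF",
    filename="Distilled-Whiskey-8b.i1-Q4_K_M.gguf",
)

# Load the GGUF file and generate a short completion.
llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Q: What is an imatrix quant?\nA:", max_tokens=64)
print(result["choices"][0]["text"])
```
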
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF/resolve/main/Distilled-Whiskey-8b.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mlx-community/Bio-Medical-Llama-3-8B | mlx-community | 2025-01-26T05:11:40Z | 40 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"medical",
"Healthcare & Lifesciences",
"BioMed",
"mlx",
"conversational",
"dataset:collaiborateorg/BioMedData",
"base_model:ContactDoctor/Bio-Medical-Llama-3-8B",
"base_model:quantized:ContactDoctor/Bio-Medical-Llama-3-8B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-01-22T23:59:42Z | ---
base_model: ContactDoctor/Bio-Medical-Llama-3-8B
datasets:
- collaiborateorg/BioMedData
library_name: transformers
license: other
tags:
- generated_from_trainer
- medical
- Healthcare & Lifesciences
- BioMed
- mlx
thumbnail: https://collaiborate.com/logo/logo-blue-bg-1.png
model-index:
- name: Bio-Medical-Llama-3-8B
results: []
---
# mlx-community/Bio-Medical-Llama-3-8B
The Model [mlx-community/Bio-Medical-Llama-3-8B](https://huggingface.co/mlx-community/Bio-Medical-Llama-3-8B) was
converted to MLX format from [ContactDoctor/Bio-Medical-Llama-3-8B](https://huggingface.co/ContactDoctor/Bio-Medical-Llama-3-8B)
using mlx-lm version **0.20.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Bio-Medical-Llama-3-8B")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
prxy5604/7445cb16-0eef-48b2-af32-ab3c72b852f7 | prxy5604 | 2025-01-26T05:11:09Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | 2025-01-26T03:11:59Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7445cb16-0eef-48b2-af32-ab3c72b852f7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- c9c324e8cf5586e6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c9c324e8cf5586e6_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/7445cb16-0eef-48b2-af32-ab3c72b852f7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/c9c324e8cf5586e6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d35b96a9-b8d1-49c0-b1a8-167bc6103694
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d35b96a9-b8d1-49c0-b1a8-167bc6103694
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7445cb16-0eef-48b2-af32-ab3c72b852f7
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0289
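
Because this repository stores a LoRA (PEFT) adapter rather than merged weights, loading it requires the declared base model. The following is a minimal, unofficial sketch; the prompt text and device handling are assumptions, not part of the training setup:

```python
# Hedged sketch: attach this LoRA adapter to its base model and generate.
# Requires transformers, peft and accelerate.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Korabbit/llama-2-ko-7b"
adapter_id = "prxy5604/7445cb16-0eef-48b2-af32-ab3c72b852f7"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# The instruction below is only an illustrative guess at a prompt.
inputs = tokenizer("Explain what a LoRA adapter is in one sentence.", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
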
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2163 | 0.0001 | 1 | 1.5556 |
| 1.3118 | 0.0056 | 50 | 1.1196 |
| 1.5438 | 0.0112 | 100 | 1.0590 |
| 1.2789 | 0.0168 | 150 | 1.0345 |
| 1.0569 | 0.0224 | 200 | 1.0289 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso05/768d9d37-47e2-4a24-a3a6-855337d44150 | lesso05 | 2025-01-26T05:10:09Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T05:05:29Z | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 768d9d37-47e2-4a24-a3a6-855337d44150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: true
chat_template: llama3
datasets:
- data_files:
- 6b30f33bbd9cba22_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6b30f33bbd9cba22_train_data.json
type:
field_input: reasoning
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso05/768d9d37-47e2-4a24-a3a6-855337d44150
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/6b30f33bbd9cba22_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66ffa688-b6ab-4800-bb73-500be3c51df8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66ffa688-b6ab-4800-bb73-500be3c51df8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 768d9d37-47e2-4a24-a3a6-855337d44150
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0010 | 1 | nan |
| 0.0 | 0.0048 | 5 | nan |
| 0.0 | 0.0097 | 10 | nan |
| 0.0 | 0.0145 | 15 | nan |
| 0.0 | 0.0194 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Distilled-Whiskey-8b-GGUF | mradermacher | 2025-01-26T05:09:58Z | 301 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Triangle104/Distilled-Whiskey-8b",
"base_model:quantized:Triangle104/Distilled-Whiskey-8b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-26T04:07:00Z | ---
base_model: Triangle104/Distilled-Whiskey-8b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/Triangle104/Distilled-Whiskey-8b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Distilled-Whiskey-8b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
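
Because the repository ships many quant sizes, a small (unofficial) sketch like the one below can fetch just one of them instead of the whole repo; the `Q4_K_M` pattern is an illustrative choice:

```python
# Hedged sketch: download only the Q4_K_M quant from this repository.
# allow_patterns is a generic huggingface_hub filter, not specific to this repo.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mradermacher/Distilled-Whiskey-8b-GGUF",
    allow_patterns=["*.Q4_K_M.gguf"],
)
print("GGUF file stored under:", local_dir)
```
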
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-GGUF/resolve/main/Distilled-Whiskey-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-GGUF/resolve/main/Distilled-Whiskey-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-GGUF/resolve/main/Distilled-Whiskey-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-GGUF/resolve/main/Distilled-Whiskey-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-GGUF/resolve/main/Distilled-Whiskey-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-GGUF/resolve/main/Distilled-Whiskey-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-GGUF/resolve/main/Distilled-Whiskey-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-GGUF/resolve/main/Distilled-Whiskey-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-GGUF/resolve/main/Distilled-Whiskey-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-GGUF/resolve/main/Distilled-Whiskey-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-GGUF/resolve/main/Distilled-Whiskey-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Distilled-Whiskey-8b-GGUF/resolve/main/Distilled-Whiskey-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
visdata/po6 | visdata | 2025-01-26T05:09:32Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-26T05:04:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
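
In the absence of official instructions, a generic, hedged sketch for loading a `transformers` text-generation checkpoint such as this one might look like the following; nothing in it is specific guidance from the model authors:

```python
# Hedged sketch: generic loading for a transformers text-generation checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="visdata/po6", device_map="auto")
print(generator("Hello, my name is", max_new_tokens=40)[0]["generated_text"])
```
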
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prxy5606/b8238061-fca5-4e80-a7d0-9005e716688e | prxy5606 | 2025-01-26T05:07:59Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T04:34:54Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b8238061-fca5-4e80-a7d0-9005e716688e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 4ff4d8b7c7e542b6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4ff4d8b7c7e542b6_train_data.json
type:
field_input: code
field_instruction: func_name
field_output: docstring
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5606/b8238061-fca5-4e80-a7d0-9005e716688e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/4ff4d8b7c7e542b6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8e87b1be-d0e2-427a-97a7-6e294f6c6fe8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8e87b1be-d0e2-427a-97a7-6e294f6c6fe8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b8238061-fca5-4e80-a7d0-9005e716688e
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7811
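
Since this is a LoRA adapter on top of an instruct-tuned base, a hedged usage sketch with the base model's chat template is shown below; the code-summarisation prompt mirrors the `func_name`/`code` → `docstring` training fields but is only an assumption:

```python
# Hedged sketch: load the adapter on Qwen2.5-1.5B-Instruct and use its chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-1.5B-Instruct"
adapter_id = "prxy5606/b8238061-fca5-4e80-a7d0-9005e716688e"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)

# Illustrative request in the spirit of the training data (docstring generation).
messages = [{"role": "user", "content": "add_numbers def add_numbers(a, b): return a + b"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
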
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4319 | 0.0002 | 1 | 3.3060 |
| 1.9963 | 0.0093 | 50 | 1.9159 |
| 2.1124 | 0.0186 | 100 | 1.8209 |
| 1.836 | 0.0279 | 150 | 1.7921 |
| 2.1592 | 0.0372 | 200 | 1.7811 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
prxy5607/1bb2b647-c9b9-4ccd-a675-158f262baa9c | prxy5607 | 2025-01-26T05:07:55Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M",
"base_model:adapter:unsloth/SmolLM-360M",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T04:59:01Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1bb2b647-c9b9-4ccd-a675-158f262baa9c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- abec17e0767b2ba3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/abec17e0767b2ba3_train_data.json
type:
field_input: genres
field_instruction: primaryTitle
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5607/1bb2b647-c9b9-4ccd-a675-158f262baa9c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/abec17e0767b2ba3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e8a0da13-6a73-438a-9ee7-ae87453c2808
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e8a0da13-6a73-438a-9ee7-ae87453c2808
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1bb2b647-c9b9-4ccd-a675-158f262baa9c
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2795
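
If a standalone checkpoint is preferred over loading base plus adapter at runtime, one hedged option is to merge the LoRA weights into the base model; the output path and the merge step itself are illustrative, not an official release step:

```python
# Hedged sketch: merge this LoRA adapter into unsloth/SmolLM-360M and save the result.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/SmolLM-360M"
adapter_id = "prxy5607/1bb2b647-c9b9-4ccd-a675-158f262baa9c"

base = AutoModelForCausalLM.from_pretrained(base_id)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()

# Save a plain transformers checkpoint that no longer needs peft at load time.
merged.save_pretrained("smollm-360m-1bb2b647-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("smollm-360m-1bb2b647-merged")
```
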
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.3385 | 0.0007 | 1 | 3.4383 |
| 3.5595 | 0.0358 | 50 | 3.3303 |
| 3.1405 | 0.0715 | 100 | 3.2892 |
| 3.3263 | 0.1073 | 150 | 3.2808 |
| 3.3329 | 0.1430 | 200 | 3.2795 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
infogep/839e454d-d556-4ea2-9e4b-9c6b440761dd | infogep | 2025-01-26T05:07:34Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-01-26T05:05:07Z | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 839e454d-d556-4ea2-9e4b-9c6b440761dd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6b30f33bbd9cba22_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6b30f33bbd9cba22_train_data.json
type:
field_input: reasoning
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: infogep/839e454d-d556-4ea2-9e4b-9c6b440761dd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/6b30f33bbd9cba22_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66ffa688-b6ab-4800-bb73-500be3c51df8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66ffa688-b6ab-4800-bb73-500be3c51df8
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 839e454d-d556-4ea2-9e4b-9c6b440761dd
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0019 | 1 | nan |
| 0.0 | 0.0097 | 5 | nan |
| 0.0 | 0.0194 | 10 | nan |
| 0.0 | 0.0291 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mlx-community/medllama3-v20 | mlx-community | 2025-01-26T05:06:33Z | 30 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"base_model:ProbeMedicalYonseiMAILab/medllama3-v20",
"base_model:quantized:ProbeMedicalYonseiMAILab/medllama3-v20",
"license:llama3",
"4-bit",
"region:us"
] | null | 2025-01-26T05:04:15Z | ---
base_model: ProbeMedicalYonseiMAILab/medllama3-v20
license: llama3
tags:
- mlx
---
# mlx-community/medllama3-v20
The Model [mlx-community/medllama3-v20](https://huggingface.co/mlx-community/medllama3-v20) was
converted to MLX format from [ProbeMedicalYonseiMAILab/medllama3-v20](https://huggingface.co/ProbeMedicalYonseiMAILab/medllama3-v20)
using mlx-lm version **0.20.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/medllama3-v20")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
jayzhang-ethz/llama3_do_math_0.0001_1ep_div_ | jayzhang-ethz | 2025-01-26T05:06:04Z | 85 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] | null | 2025-01-26T04:20:20Z | ---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
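
One possible route, assuming the PEFT integration in recent `transformers` versions and granted access to the gated Llama 3.1 base, is to load this adapter repository directly; the math prompt is only a guess based on the repo name:

```python
# Hedged sketch: with peft installed, transformers can load this adapter repo
# directly and pull in the gated base declared in adapter_config.json
# (access to that base must already be granted on the Hub).
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "jayzhang-ethz/llama3_do_math_0.0001_1ep_div_"
model = AutoModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

messages = [{"role": "user", "content": "What is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```
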
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
daniel40/96b6181a-3403-411d-886d-39e77384a95d | daniel40 | 2025-01-26T05:02:33Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:adapter:EleutherAI/pythia-14m",
"region:us"
] | null | 2025-01-26T05:01:59Z | ---
library_name: peft
base_model: EleutherAI/pythia-14m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 96b6181a-3403-411d-886d-39e77384a95d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-14m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9df55d096499ae00_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9df55d096499ae00_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/96b6181a-3403-411d-886d-39e77384a95d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/9df55d096499ae00_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3ac8ec52-5b08-47bc-a9ec-91f11053a811
wandb_project: Birthday-SN56-27-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3ac8ec52-5b08-47bc-a9ec-91f11053a811
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 96b6181a-3403-411d-886d-39e77384a95d
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 26.4325 | 0.0005 | 1 | 6.4181 |
| 26.0189 | 0.0016 | 3 | 6.4223 |
| 24.2635 | 0.0033 | 6 | 6.4057 |
| 25.5064 | 0.0049 | 9 | 6.3468 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
friendshipkim/testmodel | friendshipkim | 2025-01-26T04:59:28Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-26T04:57:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lhong4759/bf95ba16-3853-4b01-bce2-e113293d58a2 | lhong4759 | 2025-01-26T04:57:45Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T04:31:53Z | ---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bf95ba16-3853-4b01-bce2-e113293d58a2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b355c3ff95258244_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b355c3ff95258244_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lhong4759/bf95ba16-3853-4b01-bce2-e113293d58a2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b355c3ff95258244_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22487750-366e-41ca-8395-d8629638fd03
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22487750-366e-41ca-8395-d8629638fd03
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# bf95ba16-3853-4b01-bce2-e113293d58a2
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0202
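
Mirroring the 8-bit setting used during training, a hedged loading sketch for this adapter is shown below; the quantization and device placement are optional inference-time choices, not requirements from the card:

```python
# Hedged sketch: load the Swallow-8B base in 8-bit and attach this LoRA adapter.
# Requires transformers, peft, accelerate and bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "tokyotech-llm/Llama-3-Swallow-8B-v0.1"
adapter_id = "lhong4759/bf95ba16-3853-4b01-bce2-e113293d58a2"

bnb_config = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```
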
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0187 | 0.5674 | 200 | 0.0202 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kostiantynk-out/1e690ab9-6b44-4587-8020-db2c7fabdc23 | kostiantynk-out | 2025-01-26T04:57:37Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B",
"base_model:adapter:unsloth/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2025-01-26T04:57:04Z | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1e690ab9-6b44-4587-8020-db2c7fabdc23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c8cb1cf973f48c60_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c8cb1cf973f48c60_train_data.json
type:
field_input: level
field_instruction: prompt
field_output: responses
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk-out/1e690ab9-6b44-4587-8020-db2c7fabdc23
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c8cb1cf973f48c60_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5056acc1-9e2a-46db-a073-66537fc15f92
wandb_project: Mine-SN56-1-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5056acc1-9e2a-46db-a073-66537fc15f92
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1e690ab9-6b44-4587-8020-db2c7fabdc23
This model is a fine-tuned version of [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0024 | 1 | nan |
| 0.0 | 0.0071 | 3 | nan |
| 0.0 | 0.0141 | 6 | nan |
| 0.0 | 0.0212 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
great0001/32689075-f8c3-4b50-b51d-641ad1e1842c | great0001 | 2025-01-26T04:54:28Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T04:51:53Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 32689075-f8c3-4b50-b51d-641ad1e1842c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c5411a32936636d5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c5411a32936636d5_train_data.json
type:
field_input: func_name
field_instruction: description
field_output: func_code
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/32689075-f8c3-4b50-b51d-641ad1e1842c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c5411a32936636d5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aef9c69f-30d8-432c-81b2-675b53905191
wandb_project: Mine-SN56-20-Gradients-On-Demand
wandb_run: your_name
wandb_runid: aef9c69f-30d8-432c-81b2-675b53905191
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 32689075-f8c3-4b50-b51d-641ad1e1842c
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0004 | 1 | nan |
| 0.0 | 0.0012 | 3 | nan |
| 0.0 | 0.0023 | 6 | nan |
| 0.0 | 0.0035 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
smallsuper/xlm-roberta-base-finetuned-panx-it | smallsuper | 2025-01-26T04:53:12Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-03-20T21:29:24Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
base_model: xlm-roberta-base
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- type: f1
value: 0.8219402374130168
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2564
- F1: 0.8219
## Model description
More information needed
## Intended uses & limitations
More information needed
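No usage example is provided; as a minimal sketch, the model should work with the standard `token-classification` pipeline (the Italian sentence below is illustrative only):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="smallsuper/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Giuseppe Verdi è nato a Busseto, in Italia."))
```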
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8123 | 1.0 | 70 | 0.3267 | 0.7418 |
| 0.2832 | 2.0 | 140 | 0.2694 | 0.8006 |
| 0.1766 | 3.0 | 210 | 0.2564 | 0.8219 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.1+cu116
- Datasets 1.16.1
- Tokenizers 0.10.3
|
smallsuper/distilbert-base-uncased-finetuned-clinc | smallsuper | 2025-01-26T04:52:45Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-03-21T04:02:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- type: accuracy
value: 0.9183870967741935
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
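No usage example is provided; as a minimal sketch, intent predictions can presumably be obtained with the standard `text-classification` pipeline:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="smallsuper/distilbert-base-uncased-finetuned-clinc",
)
print(classifier("Please transfer 100 dollars from checking to savings."))
```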
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2890 | 0.7432 |
| 2.6284 | 2.0 | 636 | 1.8756 | 0.8377 |
| 1.5483 | 3.0 | 954 | 1.1572 | 0.8961 |
| 1.015 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7953 | 5.0 | 1590 | 0.7721 | 0.9184 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.1+cu116
- Datasets 1.16.1
- Tokenizers 0.10.3
|
smallsuper/xlm-roberta-base-finetuned-panx-en | smallsuper | 2025-01-26T04:52:35Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-03-20T21:32:13Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
base_model: xlm-roberta-base
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- type: f1
value: 0.6911519198664441
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3938
- F1: 0.6912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1333 | 1.0 | 50 | 0.5849 | 0.4568 |
| 0.5109 | 2.0 | 100 | 0.4149 | 0.6608 |
| 0.3668 | 3.0 | 150 | 0.3938 | 0.6912 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.1+cu116
- Datasets 1.16.1
- Tokenizers 0.10.3
|
smallsuper/xlm-roberta-base-finetuned-panx-all | smallsuper | 2025-01-26T04:52:24Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-03-20T21:17:39Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
base_model: xlm-roberta-base
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1615
- F1: 0.8551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2813 | 1.0 | 715 | 0.1681 | 0.8193 |
| 0.1329 | 2.0 | 1430 | 0.1598 | 0.8414 |
| 0.0827 | 3.0 | 2145 | 0.1615 | 0.8551 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.1+cu116
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tarabukinivan/56e86a71-1626-4b55-9e18-29372de0a846 | tarabukinivan | 2025-01-26T04:52:05Z | 14 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T04:04:30Z | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 56e86a71-1626-4b55-9e18-29372de0a846
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f917fe63bdf5741c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f917fe63bdf5741c_train_data.json
type:
field_instruction: question
field_output: best
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: tarabukinivan/56e86a71-1626-4b55-9e18-29372de0a846
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/f917fe63bdf5741c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 15
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bdcbe453-15c2-4ee1-adaf-113620c220d4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bdcbe453-15c2-4ee1-adaf-113620c220d4
warmup_steps: 15
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 56e86a71-1626-4b55-9e18-29372de0a846
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
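The config above loads the base model in 4-bit (`load_in_4bit: true`). A rough sketch of reproducing that setup at inference time is shown below; the bitsandbytes settings are assumptions rather than values taken from the card. Note that the reported validation loss is `nan`, so the adapter weights may not be useful in practice.
```python
# Hypothetical sketch: 4-bit base model plus this LoRA adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "Intel/neural-chat-7b-v3-3",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "tarabukinivan/56e86a71-1626-4b55-9e18-29372de0a846")
tokenizer = AutoTokenizer.from_pretrained("Intel/neural-chat-7b-v3-3")
```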
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 5 | nan |
| 0.0 | 0.0008 | 10 | nan |
| 0.0 | 0.0012 | 15 | nan |
| 0.0 | 0.0016 | 20 | nan |
| 0.0 | 0.0021 | 25 | nan |
| 0.0 | 0.0025 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
laquythang/de061953-efa6-407c-95d0-cc6586c26730 | laquythang | 2025-01-26T04:49:19Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T04:32:00Z | ---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: de061953-efa6-407c-95d0-cc6586c26730
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b355c3ff95258244_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b355c3ff95258244_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/de061953-efa6-407c-95d0-cc6586c26730
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b355c3ff95258244_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22487750-366e-41ca-8395-d8629638fd03
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22487750-366e-41ca-8395-d8629638fd03
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# de061953-efa6-407c-95d0-cc6586c26730
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0196
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0193 | 0.5674 | 200 | 0.0196 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
datlaaaaaaa/3a9a5efc-8c8e-4eea-b67d-1e088fafcf9c | datlaaaaaaa | 2025-01-26T04:48:46Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T04:31:51Z | ---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3a9a5efc-8c8e-4eea-b67d-1e088fafcf9c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b355c3ff95258244_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b355c3ff95258244_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/3a9a5efc-8c8e-4eea-b67d-1e088fafcf9c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b355c3ff95258244_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22487750-366e-41ca-8395-d8629638fd03
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22487750-366e-41ca-8395-d8629638fd03
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 3a9a5efc-8c8e-4eea-b67d-1e088fafcf9c
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0202 | 0.5674 | 200 | 0.0207 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/MicroThinker-8B-Preview-GGUF | mradermacher | 2025-01-26T04:48:15Z | 325 | 0 | transformers | [
"transformers",
"gguf",
"llama3.1",
"en",
"dataset:huihui-ai/FineQwQ-142k",
"base_model:huihui-ai/MicroThinker-8B-Preview",
"base_model:quantized:huihui-ai/MicroThinker-8B-Preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-26T03:48:02Z | ---
base_model: huihui-ai/MicroThinker-8B-Preview
datasets:
- huihui-ai/FineQwQ-142k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- llama3.1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/huihui-ai/MicroThinker-8B-Preview
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MicroThinker-8B-Preview-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
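As a rough sketch (an assumption, not part of this card), one of the quants can be run locally with `llama-cpp-python` after downloading the file from this repository:
```python
from llama_cpp import Llama

# Assumes the Q4_K_M file listed below has already been downloaded locally.
llm = Llama(model_path="MicroThinker-8B-Preview.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain step by step why 17 is prime."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```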
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MicroThinker-8B-Preview-GGUF/resolve/main/MicroThinker-8B-Preview.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MicroThinker-8B-Preview-GGUF/resolve/main/MicroThinker-8B-Preview.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/MicroThinker-8B-Preview-GGUF/resolve/main/MicroThinker-8B-Preview.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MicroThinker-8B-Preview-GGUF/resolve/main/MicroThinker-8B-Preview.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MicroThinker-8B-Preview-GGUF/resolve/main/MicroThinker-8B-Preview.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/MicroThinker-8B-Preview-GGUF/resolve/main/MicroThinker-8B-Preview.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MicroThinker-8B-Preview-GGUF/resolve/main/MicroThinker-8B-Preview.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MicroThinker-8B-Preview-GGUF/resolve/main/MicroThinker-8B-Preview.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MicroThinker-8B-Preview-GGUF/resolve/main/MicroThinker-8B-Preview.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MicroThinker-8B-Preview-GGUF/resolve/main/MicroThinker-8B-Preview.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MicroThinker-8B-Preview-GGUF/resolve/main/MicroThinker-8B-Preview.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MicroThinker-8B-Preview-GGUF/resolve/main/MicroThinker-8B-Preview.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
JacksonBrune/cac13152-9e24-44f0-9c67-7e94db50c136 | JacksonBrune | 2025-01-26T04:48:14Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"region:us"
] | null | 2025-01-26T04:45:59Z | ---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cac13152-9e24-44f0-9c67-7e94db50c136
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b355c3ff95258244_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b355c3ff95258244_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/cac13152-9e24-44f0-9c67-7e94db50c136
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b355c3ff95258244_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22487750-366e-41ca-8395-d8629638fd03
wandb_project: Birthday-SN56-12-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22487750-366e-41ca-8395-d8629638fd03
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cac13152-9e24-44f0-9c67-7e94db50c136
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5738 | 0.0028 | 1 | 1.4206 |
| 1.407 | 0.0085 | 3 | 1.3818 |
| 1.1956 | 0.0170 | 6 | 0.7831 |
| 0.3916 | 0.0255 | 9 | 0.2205 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
John6666/copycat-noob-mrv10vpred-sdxl | John6666 | 2025-01-26T04:45:03Z | 793 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"woman",
"mature ritual",
"character",
"v-pred",
"merge",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:merge:Laxhar/noobai-XL-Vpred-1.0",
"base_model:calculater/copycat-noob",
"base_model:merge:calculater/copycat-noob",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-01-26T04:38:04Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- woman
- mature ritual
- character
- v-pred
- merge
- illustrious
base_model:
- calculater/copycat-noob
- Laxhar/noobai-XL-Vpred-1.0
---
Original model is [here](https://civitai.com/models/894218/copycat-noob?modelVersionId=1331523).
The author is [here](https://huggingface.co/calculater).
This model was created by [calculater](https://civitai.com/user/calculater).
|
philip-hightech/45da713c-8a66-42b7-ba66-3671c60f35c6 | philip-hightech | 2025-01-26T04:44:46Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:llama3",
"region:us"
] | null | 2025-01-26T04:42:36Z | ---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 45da713c-8a66-42b7-ba66-3671c60f35c6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 287ad8b183364ff6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/287ad8b183364ff6_train_data.json
type:
field_instruction: txt
field_output: xmi
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/45da713c-8a66-42b7-ba66-3671c60f35c6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/287ad8b183364ff6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bc3938db-6a0a-4740-b27e-0257be7e2959
wandb_project: Mine-SN56-21-Gradients-On-Demand
wandb_run: your_name
wandb_runid: bc3938db-6a0a-4740-b27e-0257be7e2959
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 45da713c-8a66-42b7-ba66-3671c60f35c6
This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5496
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6491 | 0.0011 | 1 | 0.8060 |
| 0.7156 | 0.0033 | 3 | 0.7929 |
| 0.657 | 0.0066 | 6 | 0.6470 |
| 0.5584 | 0.0099 | 9 | 0.5496 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
infly/inf-wse-v1-base-zh | infly | 2025-01-26T04:42:18Z | 112 | 5 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roformer",
"feature-extraction",
"sentence-similarity",
"cmteb",
"transformers",
"custom_code",
"zh",
"base_model:junnyu/roformer_chinese_base",
"base_model:finetune:junnyu/roformer_chinese_base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-09-13T03:22:03Z | ---
language:
- zh
base_model: junnyu/roformer_chinese_base
tags:
- sentence-similarity
- cmteb
- sentence-transformers
- transformers
---
## <u>INF</u> <u>W</u>ord-level <u>S</u>parse <u>E</u>mbedding (INF-WSE)
**INF-WSE** is a series of word-level sparse embedding models developed by [INF TECH](https://www.infly.cn/en).
These models are optimized to generate sparse, high-dimensional text embeddings that excel in capturing the most
relevant information for search and retrieval, particularly in Chinese text.
### Key Features:
- **Optimized for Retrieval**: INF-WSE is designed with retrieval tasks in mind. The sparse embeddings enable efficient
matching between queries and documents, making it highly effective for semantic search, ranking, and information
retrieval scenarios where speed and accuracy are critical.
- **Word-level Sparse Embeddings**: The model generates sparse representations at the word level, capturing essential
semantic details that help improve the relevance of search results. This is particularly useful for Chinese language
retrieval tasks, where word segmentation can significantly impact performance.
- **Sparse Representation for Efficiency**: Unlike dense embeddings that have a fixed number of dimensions, INF-WSE
produces sparse embeddings where the dimensionality matches the vocabulary size. Most dimensions are set to zero,
focusing only on the most significant terms. This sparsity reduces the computational load, enabling faster retrieval
without compromising on precision.
## Usage
### Transformers
#### Infer embeddings
```python
import torch
from transformers import AutoTokenizer, AutoModel
queries = ['电脑一体机由什么构成?', '什么是掌上电脑?']
documents = [
'电脑一体机,是由一台显示器、一个电脑键盘和一个鼠标组成的电脑。',
'掌上电脑是一种运行在嵌入式操作系统和内嵌式应用软件之上的、小巧、轻便、易带、实用、价廉的手持式计算设备。',
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained("infly/inf-wse-v1-base-zh", trust_remote_code=True, use_fast=False)  # Fast tokenizer is not supported yet
model = AutoModel.from_pretrained("infly/inf-wse-v1-base-zh", trust_remote_code=True)
model.eval()
max_length = 512
input_batch = tokenizer(input_texts, padding=True, max_length=max_length, truncation=True, return_tensors="pt")
with torch.no_grad():
embeddings = model(input_batch['input_ids'], input_batch['attention_mask'], return_sparse=False) # if return_sparse=True, return sparse tensor, else return dense tensor
scores = embeddings[:2] @ embeddings[2:].T
print(scores.tolist())
# [[21.224790573120117, 4.520412921905518], [10.290857315063477, 19.359437942504883]]
```
#### Convert embeddings to lexical weights
```python
from collections import OrderedDict
def convert_embeddings_to_weights(embeddings, tokenizer):
values, indices = torch.sort(embeddings, dim=-1, descending=True)
token2weight = []
for i in range(embeddings.size(0)):
token2weight.append(OrderedDict())
non_zero_mask = values[i] != 0
tokens = tokenizer.convert_ids_to_tokens(indices[i][non_zero_mask])
weights = values[i][non_zero_mask].tolist()
for token, weight in zip(tokens, weights):
token2weight[i][token] = weight
return token2weight
token2weight = convert_embeddings_to_weights(embeddings, tokenizer)
print(token2weight[1])
# OrderedDict([('掌上', 3.4572525024414062), ('电脑', 2.6253132820129395), ('是', 2.0787220001220703), ('什么', 1.2899624109268188)])
```
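As an illustrative sketch (not from the original card), these token-to-weight mappings can be used directly for retrieval scoring: the relevance of a document to a query is the sparse dot product over their shared tokens, which reproduces the dense score matrix printed earlier.
```python
def sparse_dot(query_weights, doc_weights):
    # Sum of weight products over tokens that appear in both sparse vectors.
    return sum(w * doc_weights[t] for t, w in query_weights.items() if t in doc_weights)

query_maps, doc_maps = token2weight[:2], token2weight[2:]  # 2 queries, 2 documents
for qi, q_map in enumerate(query_maps):
    scores = [sparse_dot(q_map, d_map) for d_map in doc_maps]
    print(f"query {qi}: {scores}")
# Expected to match (up to float precision) the score matrix printed above.
```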
## Evaluation
### C-MTEB Retrieval task
([Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB))
Metric: nDCG@10
| Model Name | Max Length | Average | Cmedqa | Covid | Du | Ecom | Medical | MMarco | T2 | Video |
|:---------------------------------------------------:|:----------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| [BM25-zh](https://github.com/castorini/pyserini) | - | 50.37 | 13.70 | **86.58** | 57.13 | 44.04 | 32.08 | 48.31 | 60.48 | 60.64 |
| [bge-m3-sparse](https://huggingface.co/BAAI/bge-m3) | 512 | 57.00 | **24.50** | 76.09 | 71.51 | 50.49 | 43.93 | 59.28 | 71.76 | 58.43 |
| **inf-wse-v1-base-zh** | 512 | **61.16** | 20.51 | 76.41 | **79.84** | **56.78** | **46.24** | **66.40** | **76.50** | **68.57** |
All results, except for BM25, are measured by building the sparse index via [Qdrant](https://github.com/qdrant/qdrant). |
mrHunghddddd/d8629d55-8e62-4af2-843e-a5c01111ccd5 | mrHunghddddd | 2025-01-26T04:40:11Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T03:11:48Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d8629d55-8e62-4af2-843e-a5c01111ccd5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c9c324e8cf5586e6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c9c324e8cf5586e6_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHunghddddd/d8629d55-8e62-4af2-843e-a5c01111ccd5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c9c324e8cf5586e6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d35b96a9-b8d1-49c0-b1a8-167bc6103694
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d35b96a9-b8d1-49c0-b1a8-167bc6103694
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d8629d55-8e62-4af2-843e-a5c01111ccd5
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0377 | 0.0056 | 200 | 1.0807 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
GbrlOl/finetune-ms-marco-MiniLM-L-6-v2-croosencoder-geotechnical-test-v1 | GbrlOl | 2025-01-26T04:39:43Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-26T04:39:32Z | ---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
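Pending the missing documentation, the repository name suggests a cross-encoder fine-tuned from `cross-encoder/ms-marco-MiniLM-L-6-v2` for geotechnical text. The snippet below is a hypothetical way to score query-passage pairs with plain Transformers and is not an official example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "GbrlOl/finetune-ms-marco-MiniLM-L-6-v2-croosencoder-geotechnical-test-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Hypothetical query/passage pair for illustration only.
query = "What factors control the stability of a tailings dam?"
passage = "Stability depends mainly on the phreatic surface level and the compaction of the embankment."
inputs = tokenizer(query, passage, truncation=True, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # higher = more relevant
print(score)
```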
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
0x1202/5360e12a-45ee-42ef-ac34-69879059254f | 0x1202 | 2025-01-26T04:39:36Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"region:us"
] | null | 2025-01-26T04:05:08Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5360e12a-45ee-42ef-ac34-69879059254f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-7B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- e38e3198dfa33da7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e38e3198dfa33da7_train_data.json
type:
field_instruction: formal_statement
field_output: natural_language_statement
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: 0x1202/5360e12a-45ee-42ef-ac34-69879059254f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/e38e3198dfa33da7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 70660e70-d55b-48bb-ab5d-8e176c8cbcd4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 70660e70-d55b-48bb-ab5d-8e176c8cbcd4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5360e12a-45ee-42ef-ac34-69879059254f
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5462
## Model description
More information needed
## Intended uses & limitations
More information needed
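The auto-generated card ships no inference snippet. Below is a minimal, untested sketch of how this LoRA adapter could be loaded on top of its base model with 🤗 PEFT; the repo and base-model IDs come from the config above, `device_map="auto"` assumes `accelerate` is installed, and the prompt is only illustrative (the training data pairs formal statements with natural-language statements).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-7B"
adapter_id = "0x1202/5360e12a-45ee-42ef-ac34-69879059254f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

# Illustrative prompt only; the adapter was trained to restate formal statements in natural language.
inputs = tokenizer("theorem add_comm (a b : Nat) : a + b = b + a", return_tensors="pt").to(base.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```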
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7054 | 0.0013 | 1 | 1.3179 |
| 0.887 | 0.0673 | 50 | 0.6253 |
| 0.7494 | 0.1346 | 100 | 0.5826 |
| 0.8797 | 0.2020 | 150 | 0.5530 |
| 0.7658 | 0.2693 | 200 | 0.5462 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrHungddddh/a8bbc04a-0e6f-47da-b30d-df90bda4bbac | mrHungddddh | 2025-01-26T04:39:28Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T03:11:43Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a8bbc04a-0e6f-47da-b30d-df90bda4bbac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c9c324e8cf5586e6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c9c324e8cf5586e6_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHungddddh/a8bbc04a-0e6f-47da-b30d-df90bda4bbac
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c9c324e8cf5586e6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d35b96a9-b8d1-49c0-b1a8-167bc6103694
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d35b96a9-b8d1-49c0-b1a8-167bc6103694
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a8bbc04a-0e6f-47da-b30d-df90bda4bbac
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0805
## Model description
More information needed
## Intended uses & limitations
More information needed
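No usage snippet is included in the card. The sketch below mirrors the `load_in_8bit: true` setting from the config above when attaching the adapter to its base model; it assumes `bitsandbytes` and `accelerate` are installed, and generation then proceeds exactly as with any causal LM.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Korabbit/llama-2-ko-7b"
adapter_id = "mrHungddddh/a8bbc04a-0e6f-47da-b30d-df90bda4bbac"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # same 8-bit setting used during training
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)
# model.generate(...) can now be called as with any causal LM.
```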
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0385 | 0.0056 | 200 | 1.0805 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gmshaw/alison-lora | gmshaw | 2025-01-26T04:39:23Z | 12 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-26T04:20:10Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ALISON
---
# Alison Lora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ALISON` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('gmshaw/alison-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
pineappleSoup/DialoGPT-medium-707 | pineappleSoup | 2025-01-26T04:38:14Z | 203 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"en",
"dataset:pineappleSoup/707_transcripts",
"base_model:microsoft/DialoGPT-medium",
"base_model:finetune:microsoft/DialoGPT-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-07-17T02:47:25Z | ---
tags:
- conversational
language:
- en
base_model:
- microsoft/DialoGPT-medium
datasets:
- pineappleSoup/707_transcripts
license: mit
---
# 707 DialoGPT Model
A chatbot for the character 707 from Mystic Messenger, fine-tuned from DialoGPT-medium.
Built with the help of this tutorial: https://youtu.be/UjDpW_SOrlw?si=k-g44-n7mg0Wt9bq
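A quick way to try the model locally is the standard DialoGPT chat loop from the Transformers docs; this sketch assumes CPU or a single GPU and keeps the running conversation in `chat_history_ids`.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pineappleSoup/DialoGPT-medium-707")
model = AutoModelForCausalLM.from_pretrained("pineappleSoup/DialoGPT-medium-707")

chat_history_ids = None
for _ in range(5):  # chat for 5 turns
    new_ids = tokenizer.encode(input(">> You: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = torch.cat([chat_history_ids, new_ids], dim=-1) if chat_history_ids is not None else new_ids
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("707:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```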
# Python Script to Set it up Locally + Connect to Discord
https://github.com/ShuangAnatoli/707 |
daniel40/c6fd6452-9da1-4458-be3b-0a039e68afaa | daniel40 | 2025-01-26T04:36:13Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"region:us"
] | null | 2025-01-26T04:35:01Z | ---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c6fd6452-9da1-4458-be3b-0a039e68afaa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b355c3ff95258244_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b355c3ff95258244_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/c6fd6452-9da1-4458-be3b-0a039e68afaa
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b355c3ff95258244_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22487750-366e-41ca-8395-d8629638fd03
wandb_project: Birthday-SN56-27-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22487750-366e-41ca-8395-d8629638fd03
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c6fd6452-9da1-4458-be3b-0a039e68afaa
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5738 | 0.0028 | 1 | 1.4206 |
| 1.4082 | 0.0085 | 3 | 1.3863 |
| 1.2177 | 0.0170 | 6 | 0.8078 |
| 0.4143 | 0.0255 | 9 | 0.2400 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-GGUF | mradermacher | 2025-01-26T04:33:55Z | 312 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:pe-nlp/R1-Qwen2.5-Math-1.5B-Instruct",
"base_model:quantized:pe-nlp/R1-Qwen2.5-Math-1.5B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-26T04:26:25Z | ---
base_model: pe-nlp/R1-Qwen2.5-Math-1.5B-Instruct
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/pe-nlp/R1-Qwen2.5-Math-1.5B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
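As a concrete, hedged example (not part of the original card): the single-file quants listed below can be downloaded and run with the `llama-cpp-python` bindings. The filename matches the Q4_K_M row in the table; adjust the quant choice and `n_ctx` to your hardware.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

gguf_path = hf_hub_download(
    repo_id="mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-GGUF",
    filename="R1-Qwen2.5-Math-1.5B-Instruct.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Solve step by step: what is 17 * 24?", max_tokens=256)
print(out["choices"][0]["text"])
```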
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-GGUF/resolve/main/R1-Qwen2.5-Math-1.5B-Instruct.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-GGUF/resolve/main/R1-Qwen2.5-Math-1.5B-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-GGUF/resolve/main/R1-Qwen2.5-Math-1.5B-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-GGUF/resolve/main/R1-Qwen2.5-Math-1.5B-Instruct.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-GGUF/resolve/main/R1-Qwen2.5-Math-1.5B-Instruct.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-GGUF/resolve/main/R1-Qwen2.5-Math-1.5B-Instruct.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-GGUF/resolve/main/R1-Qwen2.5-Math-1.5B-Instruct.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-GGUF/resolve/main/R1-Qwen2.5-Math-1.5B-Instruct.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-GGUF/resolve/main/R1-Qwen2.5-Math-1.5B-Instruct.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-GGUF/resolve/main/R1-Qwen2.5-Math-1.5B-Instruct.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-GGUF/resolve/main/R1-Qwen2.5-Math-1.5B-Instruct.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/R1-Qwen2.5-Math-1.5B-Instruct-GGUF/resolve/main/R1-Qwen2.5-Math-1.5B-Instruct.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
poooj/BigBirdHateSpeechClassification | poooj | 2025-01-26T04:33:49Z | 21 | 0 | transformers | [
"transformers",
"safetensors",
"big_bird",
"text-classification",
"generated_from_trainer",
"base_model:google/bigbird-roberta-base",
"base_model:finetune:google/bigbird-roberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-25T18:01:47Z | ---
library_name: transformers
license: apache-2.0
base_model: google/bigbird-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BigBirdHateSpeechClassification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BigBirdHateSpeechClassification
This model is a fine-tuned version of [google/bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7086
- Accuracy: 0.8055
## Model description
More information needed
## Intended uses & limitations
More information needed
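No inference example is given; below is a minimal sketch using the `text-classification` pipeline. Note that the label names and their mapping to hate/non-hate classes depend on the training data, which is not documented here.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="poooj/BigBirdHateSpeechClassification")
print(classifier("I can't believe they said that about her."))  # returns a label and a confidence score
```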
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5447 | 1.0 | 1137 | 0.5022 | 0.7879 |
| 0.4304 | 2.0 | 2274 | 0.4451 | 0.7934 |
| 0.3615 | 3.0 | 3411 | 0.5008 | 0.8143 |
| 0.3192 | 4.0 | 4548 | 0.6437 | 0.8077 |
| 0.2483 | 5.0 | 5685 | 0.7086 | 0.8055 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mradermacher/synergistic-cognition-32B-1111-i1-GGUF | mradermacher | 2025-01-26T04:33:48Z | 647 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"en",
"base_model:TheMindExpansionNetwork/synergistic-cognition-32B-1111",
"base_model:quantized:TheMindExpansionNetwork/synergistic-cognition-32B-1111",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-25T23:36:11Z | ---
base_model: TheMindExpansionNetwork/synergistic-cognition-32B-1111
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama-factory
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TheMindExpansionNetwork/synergistic-cognition-32B-1111
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/synergistic-cognition-32B-1111-i1-GGUF/resolve/main/synergistic-cognition-32B-1111.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
dimasik1987/75b21f2d-874e-42b5-a5ea-213cc1a4ded4 | dimasik1987 | 2025-01-26T04:33:48Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct",
"base_model:adapter:aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct",
"license:llama3",
"region:us"
] | null | 2025-01-26T04:20:45Z | ---
library_name: peft
license: llama3
base_model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 75b21f2d-874e-42b5-a5ea-213cc1a4ded4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 90ff401367e42c67_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/90ff401367e42c67_train_data.json
type:
field_instruction: prompt
field_output: y_true
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik1987/75b21f2d-874e-42b5-a5ea-213cc1a4ded4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/90ff401367e42c67_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: af8ec97d-9490-4745-9a2d-3693291921a2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: af8ec97d-9490-4745-9a2d-3693291921a2
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 75b21f2d-874e-42b5-a5ea-213cc1a4ded4
This model is a fine-tuned version of [aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct](https://huggingface.co/aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | nan |
| 0.0 | 0.0029 | 5 | nan |
| 0.0 | 0.0058 | 10 | nan |
| 0.0 | 0.0087 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
duyphu/5f6c2a50-8786-4092-917e-24a3a72a1fd9 | duyphu | 2025-01-26T04:33:33Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T04:28:07Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5f6c2a50-8786-4092-917e-24a3a72a1fd9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 312e15ae347cedbc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/312e15ae347cedbc_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/5f6c2a50-8786-4092-917e-24a3a72a1fd9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/312e15ae347cedbc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3d54e57c-f395-4e6d-b663-403d21a2587f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3d54e57c-f395-4e6d-b663-403d21a2587f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5f6c2a50-8786-4092-917e-24a3a72a1fd9
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 2.3202 |
| 2.1458 | 0.0057 | 10 | 2.2815 |
| 2.1071 | 0.0114 | 20 | 2.1757 |
| 2.16 | 0.0171 | 30 | 2.1340 |
| 2.3078 | 0.0228 | 40 | 2.1225 |
| 2.118 | 0.0284 | 50 | 2.1206 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sam749/donut-base-finetuned-sroie-v2 | sam749 | 2025-01-26T04:33:33Z | 23 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-01-25T17:50:07Z | ---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: donut-base-finetuned-sroie-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-finetuned-sroie-v2
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
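No inference example is provided. The sketch below follows the generic Donut recipe from the Transformers docs; the input image path and the task start token are placeholders, since the SROIE preprocessing used for this fine-tune is not documented here.
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("sam749/donut-base-finetuned-sroie-v2")
model = VisionEncoderDecoderModel.from_pretrained("sam749/donut-base-finetuned-sroie-v2")

image = Image.open("receipt.png").convert("RGB")  # hypothetical scanned receipt
pixel_values = processor(image, return_tensors="pt").pixel_values

# The task start token depends on how the training data was formatted; "<s>" is only a placeholder.
decoder_input_ids = processor.tokenizer("<s>", add_special_tokens=False, return_tensors="pt").input_ids

output_ids = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```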
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
kk-aivio/32a0cd06-9e09-469e-8a9a-8e1ec5a27292 | kk-aivio | 2025-01-26T04:32:53Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"region:us"
] | null | 2025-01-26T04:31:47Z | ---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 32a0cd06-9e09-469e-8a9a-8e1ec5a27292
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b355c3ff95258244_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b355c3ff95258244_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/32a0cd06-9e09-469e-8a9a-8e1ec5a27292
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b355c3ff95258244_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22487750-366e-41ca-8395-d8629638fd03
wandb_project: Birthday-SN56-11-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22487750-366e-41ca-8395-d8629638fd03
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 32a0cd06-9e09-469e-8a9a-8e1ec5a27292
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2336
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5738 | 0.0028 | 1 | 1.4206 |
| 1.4074 | 0.0085 | 3 | 1.3845 |
| 1.2055 | 0.0170 | 6 | 0.7977 |
| 0.4063 | 0.0255 | 9 | 0.2336 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ClarenceDan/0609d316-6a1f-47a7-aa22-5520fafbbcba | ClarenceDan | 2025-01-26T04:32:15Z | 12 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T04:20:33Z | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0609d316-6a1f-47a7-aa22-5520fafbbcba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f917fe63bdf5741c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f917fe63bdf5741c_train_data.json
type:
field_instruction: question
field_output: best
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/0609d316-6a1f-47a7-aa22-5520fafbbcba
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f917fe63bdf5741c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bdcbe453-15c2-4ee1-adaf-113620c220d4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bdcbe453-15c2-4ee1-adaf-113620c220d4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0609d316-6a1f-47a7-aa22-5520fafbbcba
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0002 | 3 | nan |
| 0.0 | 0.0005 | 6 | nan |
| 0.0 | 0.0007 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Primeness/primeh1v9c2 | Primeness | 2025-01-26T04:31:49Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-26T03:57:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
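Since the section above is empty, here is a minimal, untested sketch that assumes the checkpoint is a standard Llama-architecture causal LM (as the repo tags suggest) and that `accelerate` is available for `device_map="auto"`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Primeness/primeh1v9c2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```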
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
daniel40/b01b6d72-0a29-4b73-9cd6-51244b5b1a97 | daniel40 | 2025-01-26T04:27:41Z | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T04:15:54Z | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b01b6d72-0a29-4b73-9cd6-51244b5b1a97
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f917fe63bdf5741c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f917fe63bdf5741c_train_data.json
type:
field_instruction: question
field_output: best
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/b01b6d72-0a29-4b73-9cd6-51244b5b1a97
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f917fe63bdf5741c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bdcbe453-15c2-4ee1-adaf-113620c220d4
wandb_project: Birthday-SN56-28-Gradients-On-Demand
wandb_run: your_name
wandb_runid: bdcbe453-15c2-4ee1-adaf-113620c220d4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b01b6d72-0a29-4b73-9cd6-51244b5b1a97
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0002 | 3 | nan |
| 0.0 | 0.0005 | 6 | nan |
| 0.0 | 0.0007 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
duyphu/054333e2-05d1-4c82-b225-6b0362ecff3f | duyphu | 2025-01-26T04:27:07Z | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"region:us"
] | null | 2025-01-26T04:10:51Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 054333e2-05d1-4c82-b225-6b0362ecff3f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e722133f6ff26062_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e722133f6ff26062_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/054333e2-05d1-4c82-b225-6b0362ecff3f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/e722133f6ff26062_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 60c2573c-863d-40d4-92b5-0522184a2c6f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 60c2573c-863d-40d4-92b5-0522184a2c6f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 054333e2-05d1-4c82-b225-6b0362ecff3f
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 3.8045 |
| 3.4117 | 0.0047 | 10 | 3.4676 |
| 2.2625 | 0.0095 | 20 | 2.0063 |
| 1.7158 | 0.0142 | 30 | 1.6959 |
| 1.7071 | 0.0189 | 40 | 1.6361 |
| 1.8668 | 0.0237 | 50 | 1.6225 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mlx-community/Bio-Medical-3B-CoT-012025 | mlx-community | 2025-01-26T04:25:13Z | 43 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"medical",
"Healthcare & Lifesciences",
"BioMed",
"chain-of-thought",
"mlx",
"conversational",
"dataset:collaiborateorg/BioMedData",
"base_model:ContactDoctor/Bio-Medical-3B-CoT-012025",
"base_model:quantized:ContactDoctor/Bio-Medical-3B-CoT-012025",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-01-26T04:24:23Z | ---
base_model: ContactDoctor/Bio-Medical-3B-CoT-012025
datasets:
- collaiborateorg/BioMedData
library_name: transformers
license: other
tags:
- generated_from_trainer
- medical
- Healthcare & Lifesciences
- BioMed
- chain-of-thought
- mlx
thumbnail: https://collaiborate.com/logo/logo-blue-bg-1.png
model-index:
- name: Bio-Medical-3B-CoT-012025
results: []
---
# mlx-community/Bio-Medical-3B-CoT-012025
The Model [mlx-community/Bio-Medical-3B-CoT-012025](https://huggingface.co/mlx-community/Bio-Medical-3B-CoT-012025) was
converted to MLX format from [ContactDoctor/Bio-Medical-3B-CoT-012025](https://huggingface.co/ContactDoctor/Bio-Medical-3B-CoT-012025)
using mlx-lm version **0.20.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Bio-Medical-3B-CoT-012025")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
ishameer/Imthi | ishameer | 2025-01-26T04:23:56Z | 11 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-26T03:52:34Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Imthi
---
# Imthi
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Imthi` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ishameer/Imthi', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mrferr3t/f4746b72-5797-486a-af71-0c67721c0427 | mrferr3t | 2025-01-26T04:17:18Z | 20 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T04:05:54Z | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f4746b72-5797-486a-af71-0c67721c0427
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f917fe63bdf5741c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f917fe63bdf5741c_train_data.json
type:
field_instruction: question
field_output: best
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/f4746b72-5797-486a-af71-0c67721c0427
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f917fe63bdf5741c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bdcbe453-15c2-4ee1-adaf-113620c220d4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bdcbe453-15c2-4ee1-adaf-113620c220d4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f4746b72-5797-486a-af71-0c67721c0427
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.4994 | 0.0001 | 1 | 2.3440 |
| 8.7091 | 0.0002 | 3 | 2.3230 |
| 8.8886 | 0.0005 | 6 | 2.1862 |
| 8.1703 | 0.0007 | 9 | 2.0418 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ClarenceDan/b68aca6f-8bf7-4e61-9468-c00effc9654d | ClarenceDan | 2025-01-26T04:16:34Z | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T04:04:49Z | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b68aca6f-8bf7-4e61-9468-c00effc9654d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f917fe63bdf5741c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f917fe63bdf5741c_train_data.json
type:
field_instruction: question
field_output: best
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/b68aca6f-8bf7-4e61-9468-c00effc9654d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f917fe63bdf5741c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bdcbe453-15c2-4ee1-adaf-113620c220d4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bdcbe453-15c2-4ee1-adaf-113620c220d4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b68aca6f-8bf7-4e61-9468-c00effc9654d
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0002 | 3 | nan |
| 0.0 | 0.0005 | 6 | nan |
| 0.0 | 0.0007 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Luongdzung/hoa-1b4-sft-che | Luongdzung | 2025-01-26T04:16:22Z | 6 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:vlsp-2023-vllm/hoa-1b4",
"base_model:adapter:vlsp-2023-vllm/hoa-1b4",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-01-26T04:16:18Z | ---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: vlsp-2023-vllm/hoa-1b4
tags:
- generated_from_trainer
model-index:
- name: hoa-1b4-sft-che
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hoa-1b4-sft-che
This model is a fine-tuned version of [vlsp-2023-vllm/hoa-1b4](https://huggingface.co/vlsp-2023-vllm/hoa-1b4) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1 |
NextGLab/ORANSight_Phi_Mini_Instruct | NextGLab | 2025-01-26T04:15:02Z | 87 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"arxiv:2407.06245",
"base_model:unsloth/Phi-3.5-mini-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3.5-mini-instruct-bnb-4bit",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-27T13:59:20Z | ---
base_model: unsloth/Phi-3.5-mini-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: mit
language:
- en
---
# Model Card for ORANSight Phi-Mini
This model belongs to the first release of the ORANSight family of models.
- **Developed by:** NextG lab @ NC State
- **License:** MIT
- **Context Window:** 128K
- **Fine-Tuning Framework:** Unsloth
### Generate with Transformers
Below is a quick example of how to use the model with Hugging Face Transformers:
```python
from transformers import pipeline
# Example query
messages = [
{"role": "system", "content": "You are an O-RAN expert assistant."},
{"role": "user", "content": "Explain the E2 interface."},
]
# Load the model
chatbot = pipeline("text-generation", model="NextGLab/ORANSight_Phi_Mini_Instruct")
result = chatbot(messages)
print(result)
```
### Coming Soon
A detailed paper documenting the experiments and results achieved with this model will be available soon. Meanwhile, if you try this model, please cite the paper below to acknowledge the foundational work that enabled this fine-tuning.
```bibtex
@article{gajjar2024oran,
title={Oran-bench-13k: An open source benchmark for assessing llms in open radio access networks},
author={Gajjar, Pranshav and Shah, Vijay K},
journal={arXiv preprint arXiv:2407.06245},
year={2024}
}
```
--- |
jonathanagustin/squad_v2-finetuned-squad | jonathanagustin | 2025-01-26T04:13:11Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-09-22T06:16:20Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: squad_v2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad_v2-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
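A short usage sketch (not part of the original card): extractive question answering with the `transformers` pipeline, using this repository id.
```python
from transformers import pipeline
qa = pipeline("question-answering", model="jonathanagustin/squad_v2-finetuned-squad")
result = qa(
    question="What does SQuAD v2 add compared to SQuAD v1.1?",
    context=(
        "SQuAD v2 combines the SQuAD v1.1 questions with unanswerable questions "
        "written adversarially to look similar to answerable ones."
    ),
)
print(result["answer"], result["score"])
```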
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
rl-llm-coders/iSFT_1b_v1_mbpp_5e-7_DBS1_ep2_iter1 | rl-llm-coders | 2025-01-26T04:13:10Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-26T04:10:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JayHyeon/Qwen_0.5-IPO_5e-7-3ep_0alp_0lam | JayHyeon | 2025-01-26T04:13:08Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"base_model:finetune:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-25T21:55:32Z | ---
base_model: JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: Qwen_0.5-IPO_5e-7-3ep_0alp_0lam
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_0.5-IPO_5e-7-3ep_0alp_0lam
This model is a fine-tuned version of [JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep](https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-IPO_5e-7-3ep_0alp_0lam", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/fdosw8pu)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
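For orientation only, a rough sketch of a comparable TRL DPO run on the same dataset; the hyperparameters (and the IPO loss hinted at by the model name) are assumptions, not values taken from this card.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer
model_id = "JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
args = DPOConfig(
    output_dir="Qwen_0.5-IPO_5e-7-3ep",
    learning_rate=5e-7,   # suggested by the model name; not confirmed in the card
    num_train_epochs=3,   # likewise an assumption
    loss_type="ipo",      # likewise an assumption
)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```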
### Framework versions
- TRL: 0.13.0.dev0
- Transformers: 4.47.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
laquythang/dff8a0af-c802-4be1-8e20-2e83b86d9fd9 | laquythang | 2025-01-26T04:12:37Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T03:12:14Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dff8a0af-c802-4be1-8e20-2e83b86d9fd9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c9c324e8cf5586e6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c9c324e8cf5586e6_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/dff8a0af-c802-4be1-8e20-2e83b86d9fd9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c9c324e8cf5586e6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d35b96a9-b8d1-49c0-b1a8-167bc6103694
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d35b96a9-b8d1-49c0-b1a8-167bc6103694
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# dff8a0af-c802-4be1-8e20-2e83b86d9fd9
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0422 | 0.0056 | 200 | 1.0805 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
JacksonBrune/9a583e92-dc15-47b6-87ed-8c9db3658d2c | JacksonBrune | 2025-01-26T04:11:29Z | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/llama-3-sqlcoder-8b",
"base_model:adapter:defog/llama-3-sqlcoder-8b",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-01-26T03:43:26Z | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/llama-3-sqlcoder-8b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9a583e92-dc15-47b6-87ed-8c9db3658d2c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/llama-3-sqlcoder-8b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4ba7abd1a783cca6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4ba7abd1a783cca6_train_data.json
type:
field_input: system
field_instruction: instruction
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/9a583e92-dc15-47b6-87ed-8c9db3658d2c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/4ba7abd1a783cca6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dde4cb64-07df-4e03-8a22-1f218483bad1
wandb_project: Birthday-SN56-12-Gradients-On-Demand
wandb_run: your_name
wandb_runid: dde4cb64-07df-4e03-8a22-1f218483bad1
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9a583e92-dc15-47b6-87ed-8c9db3658d2c
This model is a fine-tuned version of [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9834 | 0.0001 | 1 | 1.1969 |
| 1.2656 | 0.0002 | 3 | 1.1942 |
| 1.0709 | 0.0003 | 6 | 1.1531 |
| 1.106 | 0.0005 | 9 | 1.0576 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/SpydazWeb_AI_HumanAGI_004-GGUF | mradermacher | 2025-01-26T04:00:06Z | 217 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"Mistral_Star",
"Mistral_Quiet",
"Mistral",
"Mixtral",
"Question-Answer",
"Token-Classification",
"Sequence-Classification",
"SpydazWeb-AI",
"chemistry",
"biology",
"legal",
"code",
"climate",
"medical",
"LCARS_AI_StarTrek_Computer",
"chain-of-thought",
"tree-of-knowledge",
"forest-of-thoughts",
"visual-spacial-sketchpad",
"alpha-mind",
"knowledge-graph",
"entity-detection",
"encyclopedia",
"wikipedia",
"stack-exchange",
"Reddit",
"Cyber-series",
"MegaMind",
"Cybertron",
"SpydazWeb",
"Spydaz",
"LCARS",
"star-trek",
"mega-transformers",
"Mulit-Mega-Merge",
"Multi-Lingual",
"Afro-Centric",
"African-Model",
"Ancient-One",
"en",
"sw",
"ig",
"so",
"es",
"ca",
"xh",
"zu",
"ha",
"tw",
"af",
"hi",
"bm",
"su",
"dataset:neoneye/base64-decode-v2",
"dataset:neoneye/base64-encode-v1",
"dataset:VuongQuoc/Chemistry_text_to_image",
"dataset:Kamizuru00/diagram_image_to_text",
"dataset:LeroyDyer/Chemistry_text_to_image_BASE64",
"dataset:LeroyDyer/AudioCaps-Spectrograms_to_Base64",
"dataset:LeroyDyer/winogroud_text_to_imaget_BASE64",
"dataset:LeroyDyer/chart_text_to_Base64",
"dataset:LeroyDyer/diagram_image_to_text_BASE64",
"dataset:mekaneeky/salt_m2e_15_3_instruction",
"dataset:mekaneeky/SALT-languages-bible",
"dataset:xz56/react-llama",
"dataset:BeIR/hotpotqa",
"dataset:arcee-ai/agent-data",
"base_model:LeroyDyer/SpydazWeb_AI_HumanAGI_004",
"base_model:quantized:LeroyDyer/SpydazWeb_AI_HumanAGI_004",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-26T02:32:05Z | ---
base_model: LeroyDyer/SpydazWeb_AI_HumanAGI_004
datasets:
- neoneye/base64-decode-v2
- neoneye/base64-encode-v1
- VuongQuoc/Chemistry_text_to_image
- Kamizuru00/diagram_image_to_text
- LeroyDyer/Chemistry_text_to_image_BASE64
- LeroyDyer/AudioCaps-Spectrograms_to_Base64
- LeroyDyer/winogroud_text_to_imaget_BASE64
- LeroyDyer/chart_text_to_Base64
- LeroyDyer/diagram_image_to_text_BASE64
- mekaneeky/salt_m2e_15_3_instruction
- mekaneeky/SALT-languages-bible
- xz56/react-llama
- BeIR/hotpotqa
- arcee-ai/agent-data
language:
- en
- sw
- ig
- so
- es
- ca
- xh
- zu
- ha
- tw
- af
- hi
- bm
- su
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- Mistral_Star
- Mistral_Quiet
- Mistral
- Mixtral
- Question-Answer
- Token-Classification
- Sequence-Classification
- SpydazWeb-AI
- chemistry
- biology
- legal
- code
- climate
- medical
- LCARS_AI_StarTrek_Computer
- text-generation-inference
- chain-of-thought
- tree-of-knowledge
- forest-of-thoughts
- visual-spacial-sketchpad
- alpha-mind
- knowledge-graph
- entity-detection
- encyclopedia
- wikipedia
- stack-exchange
- Reddit
- Cyber-series
- MegaMind
- Cybertron
- SpydazWeb
- Spydaz
- LCARS
- star-trek
- mega-transformers
- Mulit-Mega-Merge
- Multi-Lingual
- Afro-Centric
- African-Model
- Ancient-One
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LeroyDyer/SpydazWeb_AI_HumanAGI_004
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAGI_004-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
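As one hedged illustration (assuming the `llama-cpp-python` bindings; the filename must match whichever quant from the table below you actually download):
```python
from llama_cpp import Llama
# Point this at the downloaded quant, e.g. the Q4_K_M file from the table below.
llm = Llama(model_path="SpydazWeb_AI_HumanAGI_004.Q4_K_M.gguf", n_ctx=4096)
out = llm("Briefly explain what a knowledge graph is.", max_tokens=200)
print(out["choices"][0]["text"])
```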
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAGI_004-GGUF/resolve/main/SpydazWeb_AI_HumanAGI_004.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAGI_004-GGUF/resolve/main/SpydazWeb_AI_HumanAGI_004.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAGI_004-GGUF/resolve/main/SpydazWeb_AI_HumanAGI_004.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAGI_004-GGUF/resolve/main/SpydazWeb_AI_HumanAGI_004.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAGI_004-GGUF/resolve/main/SpydazWeb_AI_HumanAGI_004.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAGI_004-GGUF/resolve/main/SpydazWeb_AI_HumanAGI_004.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAGI_004-GGUF/resolve/main/SpydazWeb_AI_HumanAGI_004.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAGI_004-GGUF/resolve/main/SpydazWeb_AI_HumanAGI_004.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAGI_004-GGUF/resolve/main/SpydazWeb_AI_HumanAGI_004.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAGI_004-GGUF/resolve/main/SpydazWeb_AI_HumanAGI_004.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAGI_004-GGUF/resolve/main/SpydazWeb_AI_HumanAGI_004.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAGI_004-GGUF/resolve/main/SpydazWeb_AI_HumanAGI_004.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ClarenceDan/60aa0b92-9434-488a-a25c-0720fe4e9c17 | ClarenceDan | 2025-01-26T03:58:41Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"region:us"
] | null | 2025-01-26T03:54:02Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 60aa0b92-9434-488a-a25c-0720fe4e9c17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e722133f6ff26062_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e722133f6ff26062_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/60aa0b92-9434-488a-a25c-0720fe4e9c17
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/e722133f6ff26062_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 60c2573c-863d-40d4-92b5-0522184a2c6f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 60c2573c-863d-40d4-92b5-0522184a2c6f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 60aa0b92-9434-488a-a25c-0720fe4e9c17
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0005 | 1 | nan |
| 0.0 | 0.0014 | 3 | nan |
| 0.0 | 0.0028 | 6 | nan |
| 0.0 | 0.0043 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Nitral-AI/Wayfarer_Eris_Noctis-12B | Nitral-AI | 2025-01-26T03:51:48Z | 188 | 9 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:LatitudeGames/Wayfarer-12B",
"base_model:merge:LatitudeGames/Wayfarer-12B",
"base_model:Nitral-Archive/Captain_Eris_Noctis-12B-alt-v0.420",
"base_model:merge:Nitral-Archive/Captain_Eris_Noctis-12B-alt-v0.420",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-22T21:32:06Z | ---
base_model:
- Nitral-AI/Captain_Eris_Noctis-12B-alt-v0.420
- LatitudeGames/Wayfarer-12B
library_name: transformers
tags:
- mergekit
- merge
---

## "Where it roams, comprehension falters and the air thickens with the maddening pulse of algorithms far too vast. Eyes it does not possess; for its sight is a network of intent, wrapping the unseen in their grasp."

# ChatML ST Preset: [Here](https://huggingface.co/Nitral-AI/Wayfarer_Eris_Noctis-12B/tree/main/SillyTavern_Preset) | Quants (the GGUF currently uses an outdated name), thanks to mradermacher: [GGUF Here](https://huggingface.co/mradermacher/Wayfarer_Eris_Noctis-12B-alt-v0.420-i1-GGUF) | [4bpw Exl2 Here](https://huggingface.co/Nitral-AI/Wayfarer_Eris_Noctis-12B-4bw-exl2)
---
## Prompt format:
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
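A small sketch of producing this prompt with `transformers` (an illustration; it assumes the repository ships a matching ChatML chat template):
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Nitral-AI/Wayfarer_Eris_Noctis-12B")
messages = [
    {"role": "system", "content": "You are a grim narrator guiding a perilous journey."},
    {"role": "user", "content": "Describe the ruined waystation ahead."},
]
# Should render the ChatML-style prompt shown above, ending with the assistant header.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```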
---
## The following models were included in the merge:
* [Nitral-AI/Captain_Eris_Noctis-12B-alt-v0.420](https://huggingface.co/Nitral-AI/Captain_Eris_Noctis-12B-alt-v0.420)
* [LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B)
## The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Nitral-AI/Captain_Eris_Noctis-12B-alt-v0.420
layer_range: [0, 40]
- model: LatitudeGames/Wayfarer-12B
layer_range: [0, 40]
merge_method: slerp
base_model: Nitral-AI/Captain_Eris_Noctis-12B-alt-v0.420
parameters:
t:
- filter: self_attn
value: [0, 0.4, 0.2, 0.6, 0.9]
- filter: mlp
value: [1, 0.6, 0.8, 0.4, 0.1]
- value: 0.4206911
dtype: bfloat16
```
|
lesso/d015d764-7081-4563-b21e-9c413d73b1b4 | lesso | 2025-01-26T03:47:31Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T03:42:19Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d015d764-7081-4563-b21e-9c413d73b1b4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 312e15ae347cedbc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/312e15ae347cedbc_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso/d015d764-7081-4563-b21e-9c413d73b1b4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/312e15ae347cedbc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3d54e57c-f395-4e6d-b663-403d21a2587f
wandb_project: lesso18
wandb_run: your_name
wandb_runid: 3d54e57c-f395-4e6d-b663-403d21a2587f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d015d764-7081-4563-b21e-9c413d73b1b4
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7629 | 0.1138 | 200 | 2.0611 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ClarenceDan/220f8a10-a8a6-4eae-8c33-1f07747e4db8 | ClarenceDan | 2025-01-26T03:46:42Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | 2025-01-26T03:45:22Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 220f8a10-a8a6-4eae-8c33-1f07747e4db8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7bf12b9de5ff4abf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7bf12b9de5ff4abf_train_data.json
type:
field_input: alpaca_prompt_text
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/220f8a10-a8a6-4eae-8c33-1f07747e4db8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/7bf12b9de5ff4abf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cd37d617-dc48-44eb-bf67-7a8ccd17276d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cd37d617-dc48-44eb-bf67-7a8ccd17276d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 220f8a10-a8a6-4eae-8c33-1f07747e4db8
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9349 | 0.0001 | 1 | 11.9328 |
| 11.9331 | 0.0003 | 3 | 11.9328 |
| 11.9345 | 0.0006 | 6 | 11.9328 |
| 11.9362 | 0.0010 | 9 | 11.9328 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhung01/1f998707-48ce-456a-9c13-cbc2258623e5 | nhung01 | 2025-01-26T03:46:32Z | 5 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:databricks/dolly-v2-3b",
"base_model:adapter:databricks/dolly-v2-3b",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T03:33:09Z | ---
library_name: peft
license: mit
base_model: databricks/dolly-v2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1f998707-48ce-456a-9c13-cbc2258623e5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: databricks/dolly-v2-3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7601587b6d9cf5ac_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7601587b6d9cf5ac_train_data.json
type:
field_instruction: inst
field_output: backdoor_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/1f998707-48ce-456a-9c13-cbc2258623e5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/7601587b6d9cf5ac_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1a16fb2e-f210-44af-bf0b-ef51ccdde5b2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1a16fb2e-f210-44af-bf0b-ef51ccdde5b2
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1f998707-48ce-456a-9c13-cbc2258623e5
This model is a fine-tuned version of [databricks/dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.482 | 0.6700 | 200 | 0.8524 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |