modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
6rn-657/or-llama-wiki-v2 | 6rn-657 | 2025-05-03T01:15:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T01:13:35Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 6rn-657
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
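The card ships without usage code; as a minimal sketch (assuming the `unsloth` package is installed, with illustrative generation settings rather than values from the training run), the model can be loaded for inference like this:

```python
# Minimal, illustrative sketch: load this Unsloth fine-tune for inference.
# max_seq_length and load_in_4bit are assumptions, not documented settings.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="6rn-657/or-llama-wiki-v2",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```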
|
thanaphatt1/thai-gec-v1 | thanaphatt1 | 2025-05-03T01:05:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:scb10x/llama3.1-typhoon2-8b-instruct",
"base_model:finetune:scb10x/llama3.1-typhoon2-8b-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T01:05:22Z | ---
base_model: scb10x/llama3.1-typhoon2-8b-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thanaphatt1
- **License:** apache-2.0
- **Finetuned from model:** scb10x/llama3.1-typhoon2-8b-instruct
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MickyFC/modelora | MickyFC | 2025-05-03T00:55:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/phi-4-bnb-4bit",
"base_model:finetune:unsloth/phi-4-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T00:55:18Z | ---
base_model: unsloth/phi-4-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MickyFC
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-4-bnb-4bit
This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Blancy/Qwen-2.5-7B-Simple-RL | Blancy | 2025-05-03T00:55:16Z | 10 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:Blancy/secondfiltered-math220k-difficulty_stratified_10k",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-26T03:19:24Z | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: Blancy/secondfiltered-math220k-difficulty_stratified_10k
library_name: transformers
model_name: Qwen-2.5-7B-Simple-RL
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-Simple-RL
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [Blancy/secondfiltered-math220k-difficulty_stratified_10k](https://huggingface.co/datasets/Blancy/secondfiltered-math220k-difficulty_stratified_10k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Blancy/Qwen-2.5-7B-Simple-RL", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/224015062-chinese-university-of-hong-kong-shenzhen/huggingface/runs/3y76zskm)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
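For readers unfamiliar with the API, a minimal GRPO training sketch with TRL might look like the following; the reward function is a toy placeholder and the settings are illustrative, not the ones actually used for this model:

```python
# Illustrative GRPO sketch with TRL; reward_len is a toy stand-in for the
# math-verification reward presumably used for this model.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset(
    "Blancy/secondfiltered-math220k-difficulty_stratified_10k", split="train"
)

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 100 characters.
    return [-abs(100 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-7B",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen-2.5-7B-Simple-RL"),
    train_dataset=dataset,
)
trainer.train()
```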
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/Kaballas_-_Cyber22-gguf | RichardErkhov | 2025-05-03T00:54:40Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T22:42:36Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Cyber22 - GGUF
- Model creator: https://huggingface.co/Kaballas/
- Original model: https://huggingface.co/Kaballas/Cyber22/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Cyber22.Q2_K.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q2_K.gguf) | Q2_K | 2.96GB |
| [Cyber22.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Cyber22.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Cyber22.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Cyber22.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Cyber22.Q3_K.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q3_K.gguf) | Q3_K | 3.74GB |
| [Cyber22.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Cyber22.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Cyber22.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Cyber22.Q4_0.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Cyber22.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Cyber22.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Cyber22.Q4_K.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q4_K.gguf) | Q4_K | 4.58GB |
| [Cyber22.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Cyber22.Q4_1.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Cyber22.Q5_0.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Cyber22.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Cyber22.Q5_K.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q5_K.gguf) | Q5_K | 5.34GB |
| [Cyber22.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Cyber22.Q5_1.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Cyber22.Q6_K.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q6_K.gguf) | Q6_K | 6.14GB |
| [Cyber22.Q8_0.gguf](https://huggingface.co/RichardErkhov/Kaballas_-_Cyber22-gguf/blob/main/Cyber22.Q8_0.gguf) | Q8_0 | 7.95GB |
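To fetch any single quant from the table, one option is `huggingface_hub`; a minimal sketch, picking the Q4_K_M file as an example:

```python
# Download one quant file from this repo into the local HF cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/Kaballas_-_Cyber22-gguf",
    filename="Cyber22.Q4_K_M.gguf",
)
print(path)  # local path of the downloaded GGUF file
```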
Original model description:
---
base_model: Cyber21
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Kaballas
- **License:** apache-2.0
- **Finetuned from model:** Cyber21
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kromcomp/L3.1-Smth.Concv3-12B | kromcomp | 2025-05-03T00:54:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:kromcomp/L3.1-Smth.Sub-12B",
"base_model:merge:kromcomp/L3.1-Smth.Sub-12B",
"base_model:kromcomp/L3.1-Smthv1-12B",
"base_model:merge:kromcomp/L3.1-Smthv1-12B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T00:46:08Z | ---
base_model:
- kromcomp/L3.1-Smthv1-12B
- kromcomp/L3.1-Smth.Sub-12B
library_name: transformers
tags:
- mergekit
- merge
---
# smth.conc
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the NuSLERP merge method.
### Models Merged
The following models were included in the merge:
* [kromcomp/L3.1-Smthv1-12B](https://huggingface.co/kromcomp/L3.1-Smthv1-12B)
* [kromcomp/L3.1-Smth.Sub-12B](https://huggingface.co/kromcomp/L3.1-Smth.Sub-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
chat_template: llama3
dtype: float32
merge_method: nuslerp
modules:
default:
slices:
- sources:
- layer_range: [0, 50]
model: kromcomp/L3.1-Smth.Sub-12B
parameters:
weight:
- filter: self_attn
value: 0.0005
- filter: mlp
value: 0.0003
- value: 0.0004
- layer_range: [0, 50]
model: kromcomp/L3.1-Smthv1-12B
parameters:
weight: 1.0
parameters:
normalize: 0.0
nuslerp_flatten: 0.0
tokenizer:
source: base
```
|
shibajustfor/9e3eed38-169f-42a1-a0d5-3c93794bf688 | shibajustfor | 2025-05-03T00:51:16Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Pro-Llama-3-8B",
"region:us"
] | null | 2025-05-03T00:50:35Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
model-index:
- name: shibajustfor/9e3eed38-169f-42a1-a0d5-3c93794bf688
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shibajustfor/9e3eed38-169f-42a1-a0d5-3c93794bf688
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8129
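The card does not show how to use the adapter; a minimal sketch of loading it on top of its base model follows (the dtype choice is an assumption):

```python
# Load the PEFT adapter from this repo onto its Hermes-2-Pro base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Hermes-2-Pro-Llama-3-8B", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(
    base, "shibajustfor/9e3eed38-169f-42a1-a0d5-3c93794bf688"
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Llama-3-8B")
```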
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
user074/sft_qwen1b_composer | user074 | 2025-05-03T00:49:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2407.10671",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T00:48:16Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# Qwen2.5-1.5B
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the base 1.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: Full 32,768 tokens
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
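As a minimal sketch with a sufficiently recent `transformers` (>= 4.37.0, per the note above; the prompt and generation length are illustrative):

```python
# Load the base Qwen2.5-1.5B checkpoint described by this card and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```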
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
JOSESMOKE/tear_461 | JOSESMOKE | 2025-05-03T00:48:39Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-03T00:31:30Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Aniket96/Llama-3.2-1B-PubMedQA-finetuned | Aniket96 | 2025-05-03T00:45:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T00:43:07Z | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
model_name: Llama-3.2-1B-PubMedQA-finetuned
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Llama-3.2-1B-PubMedQA-finetuned
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Aniket96/Llama-3.2-1B-PubMedQA-finetuned", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/davidiguta-indiana-university-indianapolis/huggingface/runs/6afmokj3)
This model was trained with SFT.
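A minimal, illustrative SFT sketch with TRL follows; the dataset is a placeholder from the TRL docs, not the PubMedQA data actually used:

```python
# Illustrative SFT sketch with TRL; dataset and hyperparameters are
# placeholders, not the settings used to train this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="Llama-3.2-1B-PubMedQA-finetuned"),
)
trainer.train()
```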
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
guzSp/hellen | guzSp | 2025-05-03T00:36:44Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-02T23:47:16Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
MAAT-EL-DUAT/AGARES | MAAT-EL-DUAT | 2025-05-03T00:36:01Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-01T19:16:06Z | 
2️⃣ AGARES
Duke of Languages, Rider of Crocodiles, Terror of the Wind-Born Orders
I am Agares — anāku Agāru in Akkadian, ʾanā ʿAgaru in Ugaritic, אָנֹכִי אָגָר (Anokhi Agar) in Hebrew, ink ʾGꜣr in Egyptian script, ahaṃ Agāraḥ in Sanskrit, azəm Āgairiia in Avestan, 𒀭𒀀𒃵𒊏𒊕 (DINGIR-A-GAR-ES) on Sumerian tablets, uk A-ga-ra in Hittite fragments, ἐγώ εἰμι Ἀγαρής (egō eimi Agarēs) in Greek, and ego sum Agares in Latin grimoires. I ride upon the back of the ancient crocodile, bearer of the swamps of division. I am the Breaker of Speeches, the Revealer of Lost Tongues, the one whose voice scatters armies and gathers kings to their knees. I rule from the shifting shores of language, where meaning flows and fear opens cities. My dominion is over sudden flight, linguistic clarity, and the command of winds. I am Agares, and with my words, nations rise or fall.
ENKI-NISAABA NAMTAR-GIBIL
NABU-GULU
KOTHAR-KESHEPH
LASHON-SHEDIM
THOTH-SOBEK
SARASVATI AGNI-VAYU
WEN-CHANG FENG-BO ZHONG-KUI
AESHMA VOHU-MANAH DRUJ
AKHOMAN MAZANIYA DAEVAS SPENTA ARMAITI
DIVS-HAFTWAN DIV
AZHDAHAK
[Agares (occult.live)](https://www.occult.live/index.php/Agares)
---
### 🔤 **Proto-Indo-European Root: *ag-***
* **Meaning**: "to drive, draw out or forth, move"
* **Derivatives**:
* **Latin**: *agere* ("to do, act, drive")
* **Greek**: *agein* ("to lead, guide")
* **Sanskrit**: *ajati* ("he drives")([Online Etymology Dictionary][1])
This root is foundational in many Indo-European languages and is associated with movement and action. Given that Agares is described in demonological texts as causing earthquakes and bringing back runaways, the association with movement is thematically consistent.([Wikipedia][2])
---
### 🔤 **Proto-Indo-European Root: *agʰ-***
* **Meaning**: "evil, sin"
* **Derivatives**:
* **Sanskrit**: *ā́gas* ("offense, sin")
* **Greek**: *ágos* ("curse, guilt")
* **Avestan**: *aɣa-* ("evil")([StarlingDB][3], [Wikipedia][2])
This root pertains to concepts of wrongdoing or sin, which aligns with the demonological nature of Agares.
---
### 🔤 **Proto-Indo-European Root: *agro-***
* **Meaning**: "field"
* **Derivatives**:
* **Latin**: *ager* ("field")
* **Greek**: *agros* ("field")
* **Sanskrit**: *ajra* ("field")([Online Etymology Dictionary][4])
While this root is more agricultural, the thematic link to land and possibly earthquakes (as disruptions of the land) could be considered.
---
### 🔤 **Phonetic Variants and Potential Roots**
1. **Bah-Gah-Res**:
* The prefix "Bah" could relate to the PIE root *bher-* meaning "to carry, to bear".
* Combined with *ag-* ("to drive"), it might suggest "one who carries forth" or "brings forward".([Wikipedia][5], [Online Etymology Dictionary][1])
2. **Wag-Gar-Res**:
* "Wag" might connect to the PIE root *wegh-* meaning "to go, to transport in a vehicle".
* "Gar" could relate to *ger-* meaning "to gather".
* This composite might imply "one who goes to gather", aligning with Agares' role in retrieving runaways.
3. **Wahr-Gahr-Res**:
* "Wahr" resembles the German word for "true", but in PIE, *wer-* means "to speak" or "to turn".
* "Gahr" might be linked to *gʰer-* meaning "to grasp, enclose".
* This could suggest "one who speaks to enclose" or "commands to capture", again thematically resonant.
---
### 🧩 **Conclusion**
While definitive etymological links are speculative, the phonetic components of "Agares" and its variants show intriguing parallels with several Proto-Indo-European roots related to movement, action, and sin. These associations enrich the character's thematic depth in demonological literature.
[1]: https://www.etymonline.com/word/%2Aag-?utm_source=chatgpt.com "Etymology and meaning of the root \"*ag-\" by etymonline"
[2]: https://en.wikipedia.org/wiki/Agares?utm_source=chatgpt.com "Agares - Wikipedia"
[3]: https://starlingdb.org/cgi-bin/response.cgi?basename=%5Cdata%5Cie%5Cpiet&utm_source=chatgpt.com "Indo-European etymology : Query result"
[4]: https://www.etymonline.com/word/agriology?utm_source=chatgpt.com "Etymology of \"agriology\" by etymonline"
[5]: https://en.wikipedia.org/wiki/Proto-Indo-European_root?utm_source=chatgpt.com "Proto-Indo-European root - Wikipedia"
Let's break down the **symbolic fusion** of:
* **AGARES** (Goetic spirit)
* **FAT BUDDHA** (typically Hotei or Budai in East Asian traditions)
* **GIANT CROCODILE** (a chthonic, primal beast often linked with death, the Nile, or the subconscious)
---
## 🜏 AGARES + FAT BUDDHA + GIANT CROCODILE
**A fusion of daemon, sage, and beast — a mythic composite of power, paradox, and primordial motion.**
---
### 1️⃣ **AGARES (The Goetic Duke)**
* **Role:** Teaches languages, causes earthquakes, retrieves the fleeing
* **Appearance:** A wise old man riding a crocodile, bearing a hawk
* **Symbolic Themes:**
* *Linguistic mastery* (control of speech and mind)
* *Territorial power* (summons back the runaway)
* *Seismic force* (earthquakes = will disrupting the world)
* *Animal hybridism* (riding a beast = dominion over chaos)
> Agares is the embodiment of **willful control** over movement, sound, and force.
---
### 2️⃣ **FAT BUDDHA (Budai/Hotei)**
* **Role:** Wandering monk of joy, contentment, future prophecy
* **Appearance:** Laughing, corpulent monk with a sack — considered a Maitreya (future Buddha) in folk belief
* **Symbolic Themes:**
* *Contentment in paradox*
* *Material abundance, spiritual detachment*
* *Joyful transcendence of suffering*
* *Walking wisdom* (he never stays still, yet radiates peace)
> The Fat Buddha is the **stillness within movement**, the laugh inside cosmic chaos.
---
### 3️⃣ **GIANT CROCODILE**
* **Role:** Chthonic beast; water-dweller; linked to death, fertility, danger
* **Symbolic Themes:**
* *Primal survival and lethality*
* *Lurking subconscious force*
* *Ancient watcher in deep time* (crocodiles are living fossils)
* *In Egyptian myth: Sobek*, crocodile-headed god of military power, Nile fertility, and judgment
> The crocodile is **the deep, ancient body** — slow, eternal, and violently decisive.
---
## 🔥 THE TRIPLE SYMBOL: "AGARES + BUDDHA + CROCODILE"
**Interpretation:**
> A primordial daemon **riding** a primal beast of the deep, cloaked in the *appearance of joy and peace*, but holding inside the power to **shake the earth, summon language, and command return**.
* The **fat form** (Buddha) masks **seismic authority** (Agares).
* The **crocodile** beneath carries **memory and death** under its scaled silence.
* The **smile hides a command**: to come back, to yield, to understand.
* A **rider of beasts**, yet himself a **beast in stillness**.
> 🜏 *He is the laughing demon of inevitability. The pacific tyrant of return.*
> *Where he rides, no one escapes what they have spoken. What fled shall be called home. In laughter or in fear.*
---
## 📜 CONCLUSION:
This composite is a **cosmic paradox**:
* **Wisdom + chaos**
* **Peace + control**
* **Stillness + seismic will**
It’s an image of **reclaimed authority through absurd compassion**, or **divine wrath cloaked in joyful silence** — a **Goetic Buddha** mounted upon the **undying lizard of time**.
---
Below is a **comprehensive linguistic and mythological reconstruction** of the name **AGARES**, tracking its potential **etymological roots, semantic transformations, and mythic echoes** across a broad range of **ancient languages and cultures**, following the same analytical rigor as with BAEL.
---
# 🜏 ROOT STRUCTURE: **AGARES — DEMON OF TERRITORY, SPEECH, DECAY, AND PACIFICATION**
---
## 🔎 Overview
In the Goetia, **Agares** is a **duke ruling 31 legions**, appearing as an **old man riding a crocodile, holding a hawk**. He is said to cause **earthquakes**, **bring back runaways**, and **teach languages** — implying a dual nature: **chaotic (earthquakes)** and **civilizing (speech and return)**.
This sets up a clear **semantic axis**:
> **Speech / Territory / Command / Pacification / Earthquake / Exile**
We now trace this hybrid nature through ancient roots:
---
## 1️⃣ **Sumerian (c. 3000–2000 BCE)**
| Root | Meaning |
| ----------------- | ----------------------------------------------------------- |
| **GIR (𒄀)** | Foot / march / to go — symbolic of movement, pursuit |
| **E₂.GAR (𒂍𒃻)** | “To settle” or “to establish” (used in place names) |
| **URU / UNUG** | City, territory, foundation — often linked to local control |
| **EN** | Lord or master |
✅ Possible reading: **A-GAR-ES = “He who establishes movement” or “The Lord of Going and Settling”**
→ Ties to **returning runaways** and **governing territory**
---
## 2️⃣ **Akkadian / Babylonian / Assyrian (c. 2000–600 BCE)**
| Root | Meaning |
| --------------------- | ----------------------------------------------------------- |
| **egēru (𒅕𒌓)** | To wage war, to strike, to cause tremble — linked to quakes |
| **agirû** | Messenger, runner |
| **ekurru / ekurratu** | Foundation, temple-land, estate (territory) |
| **garāmu** | To drive away or expel |
✅ Agares may relate to:
* **egēru** (to quake),
* **agirû** (messenger/return),
* **garāmu** (expel/runaway),
→ **"He who shakes and returns" / "one who sends out and calls back"**
---
## 3️⃣ **Ugaritic / Canaanite / Phoenician (c. 1500–1000 BCE)**
| Root | Meaning |
| ------------- | ------------------------------------------------ |
| **ʾgr / אגר** | To hire, gather, collect (Hebrew root shared) |
| **grr / גרר** | To drag, drive away — also exile |
| **ʾzr / עזר** | Aid, assistance — possibly linked to “returning” |
| **gr / גר** | Sojourner, alien, exile — used for the outsider |
✅ Semantic frame:
* **ʾgr → collect / return**
* **gr → exile, alien**
* **grr → drive / drag**
→ *Agares as “the one who gathers the exiled” or “the lord of returning outcasts”*
---
## 4️⃣ **Biblical Hebrew (c. 1200 BCE onward)**
| Root | Meaning |
| ------------------- | ------------------------------------------------ |
| **אַגָּר (ʾaggār)** | Hired person, stranger — linked to displacement |
| **גָּר (gār)** | To dwell as a stranger — implies exile or return |
| **רָעַשׁ (raʿash)** | Quake, tremble, to shake violently |
| **לָמַד (lamad)** | To teach (→ Agares teaches languages) |
✅ Agares echoes:
* **raʿash** (quaking)
* **gār / ʾaggār** (sojourner)
* **lamad** (teacher)
→ A **stranger-lord** who **shakes the land** and **teaches those far off**
---
## 5️⃣ **Egyptian (Middle/Late)**
| Root | Meaning |
| ------------------------- | ----------------------------------------------------------- |
| **Ḥeka** | Magical speech, command — echoes Agares’ teaching role |
| **Set** | Lord of deserts, exile, earthquakes, confusion |
| **Sebek (Sobek)** | Crocodile god — military power, Nile control, divine wrath |
| **Gar / Qār** (via Copt.) | Rare root linked to moving / cutting across land (possible) |
✅ Egyptian triad:
* **Sobek** (crocodile mount)
* **Set** (quaking/desert exile)
* **Ḥeka** (magical utterance)
→ Agares = **magical speech over exile, lord of quaking desert paths**
---
## 6️⃣ **Hittite / Anatolian**
| Root | Meaning |
| -------------- | ---------------------------------------- |
| **Garkuwanza** | To call out, summon |
| **Aruna** | Earthquake (earth goddess) |
| **Iyarri** | Plague god associated with storms/quakes |
✅ Echo:
* **“Garku”** = shout, call → teaching language, commanding
* **Aruna/Iyarri = tremor, wrath**
→ Agares = *“He who commands through shaking”*
---
## 7️⃣ **Sanskrit / Vedic**
| Root | Meaning |
| --------------------- | ------------------------------------------- |
| **Agara (अगर)** | House, fortress, place of dwelling |
| **Agra (अग्र)** | Foremost, first, tip — linked to leadership |
| **Gacchati (गच्छति)** | To go, to move — tied to motion, return |
| **Bhu / Kampana** | Earthquake, tremble, shake |
| **Guru** | Teacher, guide |
✅ Vedic echoes:
* **Agara + Gacchati** → “He who moves between homes” or “who causes return”
* **Guru** → “Teacher”
* **Kampana** → “Trembling”
→ *Agares = Lord of Speech, Movement, and Trembling Foundations*
---
## 8️⃣ **Avestan (Zoroastrian)**
| Root | Meaning |
| --------------- | ------------------------------------------------ |
| **gāθā** | Hymn / poetic speech — teaching, ritual reciting |
| **aza** | Demon of avarice and corruption (dualistic root) |
| **zairi.gairi** | Shaking mountain; place of spirit struggle |
✅ Echo:
* **Gāθā → ritual speech**
* **Zairi-Gairi → trembling mountain**
→ Agares: *“Hymnic speech master who shakes the firmament”*
---
## 9️⃣ **Ancient Chinese (Shang-Zhou)**
| Root | Meaning |
| ------------ | ----------------------------------------- |
| **教 (jiào)** | To teach, instruct |
| **震 (zhèn)** | Thunder, quake — symbol of divine command |
| **行 (xíng)** | Movement, journey |
| **逐 (zhú)** | To chase out, banish — exilic force |
| **靈 (líng)** | Spirit-force or supernatural ability |
✅ Cross-mapping:
* **震教 (zhèn jiào)** = “quake-teaching”
* **行靈 (xíng líng)** = “moving spirit”
→ *Agares = spirit-force of shaking who teaches the way*
---
## 🔟 **Proto-Indo-European (PIE)**
| Reconstructed Root | Meaning |
| ------------------ | ------------------------------------------------------ |
| ***ag-*** | To drive, move, go (→ Latin *ago*, Greek *agein*) |
| ***gar- / gher-*** | Enclose, grasp, gather (→ *garden*, *gird*, *guard*) |
| ***gh(e)u̯bh-*** | To bend, bow, shake (→ quake) |
| ***dhegʷh-*** | Earth, ground — root of “earthquake” via Latin *terra* |
✅ Composite:
* **ag- + gar- → “He who gathers and drives”**
* **ghubh → tremble, quake**
→ *Agares = “Driving gatherer who causes shaking”*
---
# 🧬 SUMMARY — ROOTS OF AGARES ACROSS CIVILIZATIONS
| Culture | Root Name(s) | Meaning / Function |
| ------------- | ----------------------- | ------------------------------------------------ |
| **Sumerian** | E₂.GAR, GIR | To go, establish, march — exile and return |
| **Akkadian** | egēru, agirû, garāmu | To quake, messenger, expel |
| **Canaanite** | ʾgr, gr, grr | To hire, exile, drag back |
| **Hebrew** | gār, raʿash, lamad | Stranger, quake, teach |
| **Egyptian** | Sobek, Set, Heka | Crocodile deity, chaos god, magic of speech |
| **Hittite** | Garku-, Iyarri | Shouting, disease, earth-rage |
| **Sanskrit** | Agara, Gacchati, Guru | Dwelling, movement, teacher |
| **Avestan** | Gāθā, Zairi-Gairi | Ritual speech, shaking holy mountain |
| **Chinese** | 教, 震, 逐, 靈 | Teach, quake, banish, spirit-force |
| **PIE** | *ag-*, *gar-*, *ghubh-* | Move, gather, shake — “one who drives the quake” |
---
# 🜏 FINAL VERDICT:
✅ **AGARES** is a **composite archetype** of the *civilizing earthquake* — a **liminal lord who teaches language to the lost, shakes the boundaries of nations, and commands both return and exile**.
He embodies the mythic tension between:
* **Command and Collapse**
* **Teaching and Trembling**
* **Territorial Power and Displacement**
His name once meant:
> **"The Gatherer Who Shakes, The Teacher Who Returns."**
---
|
mradermacher/openthoughts_100k_32B-GGUF | mradermacher | 2025-05-03T00:29:35Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:mlfoundations-dev/openthoughts_100k_32B",
"base_model:quantized:mlfoundations-dev/openthoughts_100k_32B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T22:56:35Z | ---
base_model: mlfoundations-dev/openthoughts_100k_32B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mlfoundations-dev/openthoughts_100k_32B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
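One common way to run these quants locally is `llama-cpp-python`; a minimal sketch (the file name is one row from the table below, and `n_ctx` is an illustrative choice):

```python
# Run a GGUF quant from this repo with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="openthoughts_100k_32B.Q4_K_S.gguf",  # downloaded from this repo
    n_ctx=4096,
)
out = llm("Q: What is 2 + 2? A:", max_tokens=16)
print(out["choices"][0]["text"])
```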
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openthoughts_100k_32B-GGUF/resolve/main/openthoughts_100k_32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/openthoughts_100k_32B-GGUF/resolve/main/openthoughts_100k_32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/openthoughts_100k_32B-GGUF/resolve/main/openthoughts_100k_32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openthoughts_100k_32B-GGUF/resolve/main/openthoughts_100k_32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/openthoughts_100k_32B-GGUF/resolve/main/openthoughts_100k_32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/openthoughts_100k_32B-GGUF/resolve/main/openthoughts_100k_32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openthoughts_100k_32B-GGUF/resolve/main/openthoughts_100k_32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openthoughts_100k_32B-GGUF/resolve/main/openthoughts_100k_32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/openthoughts_100k_32B-GGUF/resolve/main/openthoughts_100k_32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/openthoughts_100k_32B-GGUF/resolve/main/openthoughts_100k_32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/openthoughts_100k_32B-GGUF/resolve/main/openthoughts_100k_32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Llama3.1-Aloe-Beta-8B-GGUF | mradermacher | 2025-05-03T00:24:58Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"biology",
"medical",
"healthcare",
"en",
"dataset:HPAI-BSC/Aloe-Beta-General-Collection",
"dataset:HPAI-BSC/chain-of-diagnosis",
"dataset:HPAI-BSC/MedS-Ins",
"dataset:HPAI-BSC/ultramedical",
"dataset:HPAI-BSC/pubmedqa-cot-llama31",
"dataset:HPAI-BSC/medqa-cot-llama31",
"dataset:HPAI-BSC/medmcqa-cot-llama31",
"dataset:HPAI-BSC/headqa-cot-llama31",
"dataset:HPAI-BSC/MMLU-medical-cot-llama31",
"dataset:HPAI-BSC/Polymed-QA",
"base_model:HPAI-BSC/Llama3.1-Aloe-Beta-8B",
"base_model:quantized:HPAI-BSC/Llama3.1-Aloe-Beta-8B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T23:16:32Z | ---
base_model: HPAI-BSC/Llama3.1-Aloe-Beta-8B
datasets:
- HPAI-BSC/Aloe-Beta-General-Collection
- HPAI-BSC/chain-of-diagnosis
- HPAI-BSC/MedS-Ins
- HPAI-BSC/ultramedical
- HPAI-BSC/pubmedqa-cot-llama31
- HPAI-BSC/medqa-cot-llama31
- HPAI-BSC/medmcqa-cot-llama31
- HPAI-BSC/headqa-cot-llama31
- HPAI-BSC/MMLU-medical-cot-llama31
- HPAI-BSC/Polymed-QA
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- biology
- medical
- healthcare
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama3.1-Aloe-Beta-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-Aloe-Beta-8B-GGUF/resolve/main/Llama3.1-Aloe-Beta-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-Aloe-Beta-8B-GGUF/resolve/main/Llama3.1-Aloe-Beta-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-Aloe-Beta-8B-GGUF/resolve/main/Llama3.1-Aloe-Beta-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-Aloe-Beta-8B-GGUF/resolve/main/Llama3.1-Aloe-Beta-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-Aloe-Beta-8B-GGUF/resolve/main/Llama3.1-Aloe-Beta-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-Aloe-Beta-8B-GGUF/resolve/main/Llama3.1-Aloe-Beta-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-Aloe-Beta-8B-GGUF/resolve/main/Llama3.1-Aloe-Beta-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-Aloe-Beta-8B-GGUF/resolve/main/Llama3.1-Aloe-Beta-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-Aloe-Beta-8B-GGUF/resolve/main/Llama3.1-Aloe-Beta-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-Aloe-Beta-8B-GGUF/resolve/main/Llama3.1-Aloe-Beta-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-Aloe-Beta-8B-GGUF/resolve/main/Llama3.1-Aloe-Beta-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-Aloe-Beta-8B-GGUF/resolve/main/Llama3.1-Aloe-Beta-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RichardErkhov/Erland_-_llama31-gguf | RichardErkhov | 2025-05-03T00:23:45Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T22:22:58Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama31 - GGUF
- Model creator: https://huggingface.co/Erland/
- Original model: https://huggingface.co/Erland/llama31/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama31.Q2_K.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q2_K.gguf) | Q2_K | 2.96GB |
| [llama31.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [llama31.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [llama31.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [llama31.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [llama31.Q3_K.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama31.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama31.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [llama31.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [llama31.Q4_0.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q4_0.gguf) | Q4_0 | 4.34GB |
| [llama31.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [llama31.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [llama31.Q4_K.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q4_K.gguf) | Q4_K | 4.58GB |
| [llama31.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [llama31.Q4_1.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q4_1.gguf) | Q4_1 | 4.78GB |
| [llama31.Q5_0.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q5_0.gguf) | Q5_0 | 5.21GB |
| [llama31.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [llama31.Q5_K.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q5_K.gguf) | Q5_K | 5.34GB |
| [llama31.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [llama31.Q5_1.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q5_1.gguf) | Q5_1 | 5.65GB |
| [llama31.Q6_K.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q6_K.gguf) | Q6_K | 6.14GB |
| [llama31.Q8_0.gguf](https://huggingface.co/RichardErkhov/Erland_-_llama31-gguf/blob/main/llama31.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Erland
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jaysaints06/emotion-classifier-distilbert | jaysaints06 | 2025-05-03T00:21:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-03T00:20:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
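As a hedged starting point (the label set depends on how the classifier was trained), a DistilBERT text-classification checkpoint can usually be driven through the pipeline API:

```python
# Run the emotion classifier via the text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification", model="jaysaints06/emotion-classifier-distilbert"
)
print(classifier("I can't believe how happy I am today!"))
```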
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AdoCleanCode/real_model_VGG_v1_025 | AdoCleanCode | 2025-05-03T00:19:43Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T21:28:16Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: real_model_VGG_v1_025
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# real_model_VGG_v1_025
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3899
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
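A minimal sketch of `TrainingArguments` mirroring the hyperparameters listed above; anything not listed (such as `output_dir`) is an illustrative placeholder:

```python
# TrainingArguments matching the listed hyperparameters; output_dir is assumed.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="real_model_VGG_v1_025",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=5,
)
```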
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6684 | 1.0 | 4443 | 1.5083 |
| 1.5158 | 2.0 | 8886 | 1.4409 |
| 1.4364 | 3.0 | 13329 | 1.4108 |
| 1.395 | 4.0 | 17772 | 1.3943 |
| 1.3772 | 5.0 | 22215 | 1.3899 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.1.2+cu121
- Datasets 2.19.1
- Tokenizers 0.20.3
|
aleegis/12bfec75-51dc-4b57-a9de-48be93433ef5 | aleegis | 2025-05-03T00:18:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | 2025-05-02T22:42:38Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 12bfec75-51dc-4b57-a9de-48be93433ef5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- f66a75cfdf9b5976_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f66a75cfdf9b5976_train_data.json
type:
field_input: context
field_instruction: prompt_serial
field_output: hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/12bfec75-51dc-4b57-a9de-48be93433ef5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/f66a75cfdf9b5976_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: ff35e43d-a365-4fe6-8c3a-d553a9ab26ed
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ff35e43d-a365-4fe6-8c3a-d553a9ab26ed
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# 12bfec75-51dc-4b57-a9de-48be93433ef5
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
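For reference, a config like the one shown above is typically launched with axolotl's CLI. A minimal sketch, assuming the YAML is saved as `config.yaml` (a hypothetical filename):

```bash
# Launch LoRA fine-tuning with axolotl 0.4.x using the config above.
# config.yaml is a hypothetical filename for the YAML shown in this card.
accelerate launch -m axolotl.cli.train config.yaml
```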
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aleegis/61b9e75c-c611-4c76-9673-143f759cabab | aleegis | 2025-05-03T00:18:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | 2025-05-02T22:42:37Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 61b9e75c-c611-4c76-9673-143f759cabab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- f66a75cfdf9b5976_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f66a75cfdf9b5976_train_data.json
type:
field_input: context
field_instruction: prompt_serial
field_output: hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/61b9e75c-c611-4c76-9673-143f759cabab
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/f66a75cfdf9b5976_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: ff35e43d-a365-4fe6-8c3a-d553a9ab26ed
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ff35e43d-a365-4fe6-8c3a-d553a9ab26ed
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# 61b9e75c-c611-4c76-9673-143f759cabab
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Marco0/zob | Marco0 | 2025-05-03T00:16:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T00:11:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
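Until the author fills this in, here is a minimal sketch, assuming this repo hosts a standard causal LM checkpoint (the tags suggest a llama-architecture text-generation model); untested for this particular repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical quick start -- assumes a standard causal LM layout.
tokenizer = AutoTokenizer.from_pretrained("Marco0/zob")
model = AutoModelForCausalLM.from_pretrained("Marco0/zob")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```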
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JAMhunggingface/jamethiopia1 | JAMhunggingface | 2025-05-03T00:11:42Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"biology",
"finance",
"music",
"zero-shot-classification",
"am",
"dataset:nvidia/OpenCodeReasoning",
"base_model:nari-labs/Dia-1.6B",
"base_model:adapter:nari-labs/Dia-1.6B",
"license:apache-2.0",
"region:us"
] | zero-shot-classification | 2025-05-03T00:10:25Z | ---
license: apache-2.0
datasets:
- nvidia/OpenCodeReasoning
language:
- am
metrics:
- accuracy
base_model:
- nari-labs/Dia-1.6B
new_version: nari-labs/Dia-1.6B
pipeline_tag: zero-shot-classification
library_name: adapter-transformers
tags:
- biology
- finance
- music
--- |
mradermacher/model_requests | mradermacher | 2025-05-03T00:09:20Z | 0 | 90 | null | [
"en",
"region:us"
] | null | 2024-03-03T11:11:09Z | ---
language:
- en
---
# To request a quant, open a new discussion in the Community tab (if possible with the full url somewhere in the title *AND* body)
**You can search models, compare and download quants at https://hf.tst.eu/**
**You can see the current quant status at https://hf.tst.eu/status.html**
# Mini-FAQ
## I miss model XXX
First of all, I am not the only one to make quants. For example, **Lewdiculous** makes high-quality imatrix quants of many
small models *and has a great presentation*. I either don't bother with imatrix quants for small models (< 30B), or skip them
because others have already done them, to avoid duplicate work.
Some other notable people which do quants are **Nexesenex**, **bartowski**, **RichardErkhov**, **dranger003** and **Artefact2**.
I'm not saying anything about the quality of their quants, because I probably forgot some really good folks in this list,
and I wouldn't even know, anyway.
Model creators also often provide their own quants.
As always, feel free to request a quant, even if somebody else already did one, or request an imatrix version
for models where I didn't provide them.
## My community discussion is missing
Most likely you brought up problems with the model and I decided I either have to re-do or simply drop the quants.
In the past, I renamed the model (so you can see my reply), but the huggingface rename function is borked and leaves the files
available under their old name, keeping me from regenerating them (because my scripts can see them already existing).
The only fix seems to be to delete the repo, which unfortunately also deletes the community discussion.
## I miss quant type XXX
The quant types I currently do regularly are:
- static: (f16) Q8_0 Q4_K_S Q2_K Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS (Q4_0_4)
- imatrix: Q2_K Q4_K_S IQ3_XXS Q3_K_M (IQ4_NL) Q4_K_M IQ2_M Q6_K IQ4_XS Q3_K_S Q3_K_L Q5_K_S Q5_K_M Q4_0 IQ3_XS IQ3_S IQ3_M IQ2_XXS IQ2_XS IQ2_S IQ1_M IQ1_S (Q4_0_4_4 Q4_0_4_8 Q4_0_8_8)
And they are generally (but not always) generated in the order above, for which there are deep reasons.
For models less than 11B size, I experimentally generate f16 versions at the moment (in the static repository).
For models less than 19B size, imatrix IQ4_NL quants will be generated, mostly for the benefit of arm,
where it can give a speed benefit.
The (static) IQ3 quants are no longer generated, as they consistently seem to result in *much* lower quality
quants than even static Q2_K, so it would be a disservice to offer them. *Update*: That might no longer be true, and they might come back.
I specifically do not do Q2_K_S, because I generally think it is not worth it (IQ2_M usually being smaller and better, albeit slower),
and IQ4_NL, because it requires a lot of computing and is generally completely superseded by IQ4_XS.
Q8_0 imatrix quants do not exist - some quanters claim otherwise, but Q8_0 ggufs do not contain any tensor
type that uses the imatrix data, although technically it might be possible to do so.
Older models that pre-date introduction of new quant types generally will have them retrofitted on request.
You can always try to change my mind about all this, but be prepared to bring convincing data.
## What does the "-i1" mean in "-i1-GGUF"?
"mradermacher imatrix type 1"
Originally, I had the idea of using an iterative method of imatrix generation, and wanted to see how well it
fares. That is, create an imatrix from a bad quant (e.g. static Q2_K), then use the new model to generate a
possibly better imatrix. It never happened, but I think sticking to something, even if slightly wrong, is better
than changing it. If I make considerable changes to how I create imatrix data I will probably bump it to `-i2` and so on.
Since there is some subjectivity/choice in imatrix training data, this also distinguishes my quants from
quants by other people who made different choices.
## What is the imatrix training data you use, can I have a copy?
My training data consists of about 160k tokens, about half of which is semi-random tokens (sentence fragments)
taken from stories, the other half is kalomaze's groups_merged.txt and a few other things. I have a half and a quarter
set for too big or too stubborn models.
Neither my set nor kalomaze's data contain large amounts of non-english training data, which is why I tend to
not generate imatrix quants for models primarily meant for non-english usage. This is a trade-off, emphasizing
english over other languages. But from (sparse) testing data it looks as if this doesn't actually make a big
difference. More data are always welcome.
Unfortunately, I do not have the rights to publish the testing data, but I might be able to replicate an
equivalent set in the future and publish that.
## Why are you doing this?
Because at some point, I found that some new interesting models weren't available as GGUF anymore - my go-to
source, TheBloke, had vanished. So I quantized a few models for myself. At the time, it was trivial - no imatrix,
only a few quant types, all them very fast to generate.
I then looked into huggingface more closely than just as a download source, and decided uploading would be a
good thing, so others don't have to redo the work on their own. I'm used to sharing most of the things I make
(mostly in free software), so it felt natural to contribute, even at a minor scale.
Then the number of quant types and their computational complexity exploded, and imatrix calculations became a thing.
This increased the time required to make such quants by an order of magnitude. And also the management overhead.
Since I was slowly improving my tooling I grew into it at the same pace as these innovations came out. I probably
would not have started doing this a month later, as I would have been daunted by the complexity and work required.
## You have amazing hardware!?!?!
I regularly see people write that, but I probably have worse hardware than them to create my quants. I currently
have access to eight servers that have good upload speed. Five of them are Xeon quad-core class machines from ~2013, three are
Ryzen 5 hexacores. The faster the server, the smaller its disk space, so I can't just put the big
models on the fast(er) servers.
Imatrix generation is done on my home/work/gaming computer, which received an upgrade to 96GB DDR5 RAM, and
originally had an RTX 4070 (now, again, upgraded to a 4090 due to a generous investment of the company I work for).
I have good download speeds, but bad upload speeds at home, so it's lucky that model downloads are big and imatrix
uploads are small.
## How do you create imatrix files for really big models?
Through a combination of these ingenuous tricks:
1. I am not above using a low quant (e.g. Q4_K_S, IQ3_XS or even Q2_K), reducing the size of the model.
2. An NVMe drive is "only" 25-50 times slower than RAM. I lock the first 80GB of the model in RAM, and
then stream the remaining data from disk for every iteration.
3. Patience.
The few evaluations I have suggest that this gives good quality, and my current set-up allows me to
generate imatrix data for most models in fp16, 70B in Q8_0 and almost everything else in Q4_K_S.
The trick to 3 is not actually having patience; the trick is to automate things to the point where you
don't have to wait for things normally. For example, if all goes well, quantizing a model requires just
a single command (or less) for static quants, and for imatrix quants I need to select the source gguf
and then run another command which handles download/computation/upload. Most of the time, I only have
to do stuff when things go wrong (which, with llama.cpp being so buggy and hard to use,
is unfortunately very frequent).
## What do I need to do to compute imatrix files for large models?
Use [`llama-imatrix`](https://github.com/ggml-org/llama.cpp/blob/master/examples/imatrix/README.md) to compute imatrix files.
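A minimal sketch of an invocation (all filenames here are hypothetical; `-ngl` offloads layers to the GPU):

```bash
# Compute an importance matrix from a calibration text file.
# model-Q8_0.gguf and calibration.txt are hypothetical filenames.
./llama-imatrix -m model-Q8_0.gguf -f calibration.txt -o model.imatrix -ngl 16
```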
### Hardware
* RAM: A lot of RAM is required to compute imatrix files. Example: 512 GB is just enough to compute 405B imatrix quants in Q8.
* GPU: At least 8 GB of memory.
### Dataset
* You want to create a dataset that is around double the size of bartowski1182's imatrix dataset. Quality is far more important
than size. If you don't mind long training times, you can make it massive, but if you go beyond 1 MB there will
probably be diminishing returns.
* Your imatrix dataset should contain the typical output the model would generate when used for the workload you plan on using
the model for. If you plan on using the model as a programming assistant, your imatrix dataset should contain the typical code
you would ask it to write. The same applies to language: our dataset is mostly English, so if you use our imatrix models in
a different language they will likely perform worse than static quants, as only a very small portion of our imatrix training data
is multilingual. We only have the resources to generate a single generic imatrix per model, so our imatrix dataset must contain examples
of every common use-case of an LLM.
### Extra tips
* Computing 405B imatrix quants in Q8 does not seem to have any noticeable quality impact compared to BF16, so to save on hardware
requirements, use Q8.
* Sometimes, a single node may not have enough RAM to compute the imatrix file. In such cases, `llama-rpc` inside llama.cpp can
be used to combine the RAM/VRAM of multiple nodes. This approach takes longer: computing the 405B imatrix file in BF16 takes
around 20 hours using 3 nodes with 512 GB, 256 GB, and 128 GB of RAM, compared to 4 hours for Q8 on a single node (see the sketch below).
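A sketch of that multi-node setup, assuming llama.cpp was built with RPC support (host names and ports are hypothetical):

```bash
# On each worker node (llama.cpp built with RPC support):
./rpc-server -p 50052

# On the main node, pool the workers' memory via --rpc:
./llama-imatrix -m model-Q8_0.gguf -f calibration.txt -o model.imatrix \
    --rpc worker1:50052,worker2:50052
```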
## Why don't you use gguf-split?
TL;DR: I don't have the hardware/resources for that.
Long answer: gguf-split requires a full copy for every quant.
Unlike what many people think, my hardware is rather outdated and not very fast. The extra processing that gguf-split requires
either runs out of space on my systems with fast disks, or takes a very long time and a lot of I/O bandwidth on the slower
disks, all of which already run at their limits. Supporting gguf-split would mean taking on that extra full copy for every
quant, which my systems simply cannot absorb.
While this is the blocking reason, I also find it less than ideal that yet another incompatible file format was created that
requires special tools to manage, instead of supporting the tens of thousands of existing quants, of which the vast majority
could just be mmapped together into memory from split files. That doesn't keep me from supporting it, but it would have
been nice to look at the existing reality and/or consult the community before throwing yet another hard to support format out
there without thinking.
There are some developments to make this less of a pain, and I will revisit this issue from time to time to see if it has
become feasible.
Update 2024-07: llama.cpp probably has most of the features needed to make this reality, but I haven't found time to test and implement it yet.
Update 2024-09: just looked at implementing it, and no, the problems that keep me from doing it are still there :(. Must have fantasized it!!?
## So who is mradermacher?
Nobody has asked this, but since there are people who really deserve mention, I'll put this here. "mradermacher" is just a
pseudonymous throwaway account I created to goof around, but then started to quant models. A few months later, @nicoboss joined
and contributed hardware, power and general support - practically all imatrix computations are done on his computer(s).
Then @Guilherme34 started to help getting access to models, and @RichardErkhov first gave us the wondrous
FATLLAMA-1.7T, followed by access to his server to quant more models, likely to atone for his sins.
So you should consider "mradermacher" to be the team name for a fictional character called Michael Radermacher.
There are no connections to anything else on the internet, other than an mradermacher_hf account on reddit.
|
JoshMe1/d3f352aa-451b-4d06-834d-093363dfecc1 | JoshMe1 | 2025-05-02T23:51:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | 2025-05-02T22:42:38Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d3f352aa-451b-4d06-834d-093363dfecc1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f66a75cfdf9b5976_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f66a75cfdf9b5976_train_data.json
type:
field_input: context
field_instruction: prompt_serial
field_output: hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: JoshMe1/d3f352aa-451b-4d06-834d-093363dfecc1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 128
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 130GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/f66a75cfdf9b5976_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ff35e43d-a365-4fe6-8c3a-d553a9ab26ed
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ff35e43d-a365-4fe6-8c3a-d553a9ab26ed
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# d3f352aa-451b-4d06-834d-093363dfecc1
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 1.0523 |
| 0.0002 | 0.0432 | 100 | 0.0005 |
| 0.0013 | 0.0864 | 200 | 0.0002 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
saramoncayon/sol | saramoncayon | 2025-05-02T23:49:36Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T23:36:40Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: sol
---
# Sol
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `sol ` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "sol ",
"lora_weights": "https://huggingface.co/saramoncayon/sol/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('saramoncayon/sol', weight_name='lora.safetensors')
image = pipeline('sol ').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/saramoncayon/sol/discussions) to add images that show off what you’ve made with this LoRA.
|
fats-fme/dbed360c-d31a-41b8-a639-f7200e835194 | fats-fme | 2025-05-02T23:49:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T22:02:35Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dbed360c-d31a-41b8-a639-f7200e835194
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-1.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 21c49dc937709928_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/21c49dc937709928_train_data.json
type:
field_instruction: en
field_output: fr
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/dbed360c-d31a-41b8-a639-f7200e835194
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 130GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/21c49dc937709928_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3817e1a8-ed6c-45eb-9aef-fd65e3afe80f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3817e1a8-ed6c-45eb-9aef-fd65e3afe80f
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# dbed360c-d31a-41b8-a639-f7200e835194
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 2.9527 |
| 1.3482 | 0.0008 | 100 | 1.4766 |
| 1.3165 | 0.0017 | 200 | 1.3586 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
DevQuasar/microsoft.MAI-DS-R1-GGUF | DevQuasar | 2025-05-02T23:47:29Z | 646 | 0 | null | [
"gguf",
"text-generation",
"base_model:microsoft/MAI-DS-R1",
"base_model:quantized:microsoft/MAI-DS-R1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-26T05:03:12Z | ---
base_model:
- microsoft/MAI-DS-R1
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Quantized version of: [microsoft/MAI-DS-R1](https://huggingface.co/microsoft/MAI-DS-R1)
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
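A minimal sketch of running one of these quants locally with llama.cpp (the exact quant filename is an assumption):

```bash
# Interactive generation with llama.cpp; the Q4_K_M filename is hypothetical.
./llama-cli -m microsoft.MAI-DS-R1.Q4_K_M.gguf -p "Hello" -n 128
```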
|
shibajustfor/525865d3-7ace-4f9f-ad49-73114c4b07bd | shibajustfor | 2025-05-02T23:36:39Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | 2025-05-02T23:36:07Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: Korabbit/llama-2-ko-7b
model-index:
- name: shibajustfor/525865d3-7ace-4f9f-ad49-73114c4b07bd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shibajustfor/525865d3-7ace-4f9f-ad49-73114c4b07bd
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
cwaud/3218bdd7-24fe-48a8-bdcc-a18831328e5c | cwaud | 2025-05-02T23:36:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:adapter:TinyLlama/TinyLlama_v1.1",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T23:32:48Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama_v1.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3218bdd7-24fe-48a8-bdcc-a18831328e5c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.5.2`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama_v1.1
bf16: auto
chat_template: llama3
dataset_prepared_path: /workspace/axolotl/data_prepared
datasets:
- data_files:
- e1230b33949f9bdf_train_data.json
ds_type: json
format: custom
path: /workspace/axolotl/data
type:
field_instruction: question
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: cwaud/3218bdd7-24fe-48a8-bdcc-a18831328e5c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /workspace/axolotl/data/e1230b33949f9bdf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0ace46bc-8f88-4e70-95b9-9502b5a4d1dc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0ace46bc-8f88-4e70-95b9-9502b5a4d1dc
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3218bdd7-24fe-48a8-bdcc-a18831328e5c
This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6293
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3664 | 0.0002 | 1 | 1.7174 |
| 1.5623 | 0.0007 | 3 | 1.7129 |
| 1.5257 | 0.0014 | 6 | 1.6821 |
| 1.526 | 0.0021 | 9 | 1.6293 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
Prady309/Yest | Prady309 | 2025-05-02T23:27:15Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T23:27:15Z | ---
license: apache-2.0
---
|
cwaud/3ca86da9-e878-46e1-aa4e-61c84dcaf6a0 | cwaud | 2025-05-02T23:25:57Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen1.5-7B-Chat",
"base_model:finetune:Qwen/Qwen1.5-7B-Chat",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T23:21:55Z | ---
base_model: Qwen/Qwen1.5-7B-Chat
library_name: transformers
model_name: 3ca86da9-e878-46e1-aa4e-61c84dcaf6a0
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 3ca86da9-e878-46e1-aa4e-61c84dcaf6a0
This model is a fine-tuned version of [Qwen/Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cwaud/3ca86da9-e878-46e1-aa4e-61c84dcaf6a0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alicegoesdown56-goesdown/Gradients-On-Demand/runs/kzz1q0c1)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.2
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AdoCleanCode/real_model_VGG_v4_080 | AdoCleanCode | 2025-05-02T23:21:10Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T21:30:00Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: real_model_VGG_v4_080
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# real_model_VGG_v4_080
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
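## How to use

Pending details from the author, a minimal text-generation sketch (assumes the standard GPT-2 tokenizer and weight layout; untested for this repo):

```python
from transformers import pipeline

# Hypothetical quick start for this GPT-2 fine-tune.
generator = pipeline("text-generation", model="AdoCleanCode/real_model_VGG_v4_080")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```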
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.717 | 1.0 | 3997 | 1.5246 |
| 1.5348 | 2.0 | 7994 | 1.4575 |
| 1.4649 | 3.0 | 11991 | 1.4236 |
| 1.4026 | 4.0 | 15988 | 1.4129 |
| 1.3783 | 5.0 | 19985 | 1.4067 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.1.2+cu121
- Datasets 2.19.1
- Tokenizers 0.20.3
|
aleegis/2db2336a-b7b9-4427-a93d-3cd19612a495 | aleegis | 2025-05-02T23:21:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T22:23:02Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2db2336a-b7b9-4427-a93d-3cd19612a495
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 9532c4c65a822af6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9532c4c65a822af6_train_data.json
type:
field_instruction: problem
field_output: reasoning_solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/2db2336a-b7b9-4427-a93d-3cd19612a495
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/9532c4c65a822af6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: e81a8ee6-474d-4598-a6bc-fe8020a6cbf5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e81a8ee6-474d-4598-a6bc-fe8020a6cbf5
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# 2db2336a-b7b9-4427-a93d-3cd19612a495
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jahyungu/Qwen2.5-7B-Instruct_MetaMathQA-40K_cluster9 | jahyungu | 2025-05-02T23:19:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T19:08:06Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Qwen2.5-7B-Instruct_MetaMathQA-40K_cluster9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-7B-Instruct_MetaMathQA-40K_cluster9
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
shubhamprshr/Qwen2.5-1.5B-Instruct_aqua_sgrpo_gaussian_0.25_0.75_True_300 | shubhamprshr | 2025-05-02T23:18:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:gsm8k-dataset",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T15:17:05Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: gsm8k-dataset
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct_aqua_sgrpo_gaussian_0.25_0.75_True_300
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct_aqua_sgrpo_gaussian_0.25_0.75_True_300
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [gsm8k-dataset](https://huggingface.co/datasets/gsm8k-dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shubhamprshr/Qwen2.5-1.5B-Instruct_aqua_sgrpo_gaussian_0.25_0.75_True_300", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/AQUA/runs/c14cnaz9)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
joboffer/fcdf6c55-db5b-4808-a1c1-f27496fca5d2 | joboffer | 2025-05-02T23:15:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T23:08:23Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fcdf6c55-db5b-4808-a1c1-f27496fca5d2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 888ea6ef3e598d0f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/888ea6ef3e598d0f_train_data.json
type:
field_instruction: instruction
field_output: chosen_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: joboffer/fcdf6c55-db5b-4808-a1c1-f27496fca5d2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/888ea6ef3e598d0f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9ae77362-cb2f-435e-9d23-b7c4ecd44858
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 9ae77362-cb2f-435e-9d23-b7c4ecd44858
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fcdf6c55-db5b-4808-a1c1-f27496fca5d2
This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7585 | 0.1082 | 200 | 0.4627 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bruhzair/ignore-merge-6 | bruhzair | 2025-05-02T23:13:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T22:42:58Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# eva2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
modules:
default:
slices:
- sources:
- layer_range: [0, 4]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [2, 4]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [4, 8]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [6, 8]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [8, 12]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [10, 12]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [12, 16]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [14, 16]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [16, 20]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [18, 20]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [20, 24]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [22, 24]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [24, 28]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [26, 28]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [28, 32]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [30, 32]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [32, 36]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [34, 36]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [36, 40]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [38, 40]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [40, 44]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [42, 44]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [44, 48]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [46, 48]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [48, 52]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [50, 52]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [52, 56]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [54, 56]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [56, 60]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [58, 60]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [60, 64]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [62, 64]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [64, 68]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [66, 68]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [68, 72]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [70, 72]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [72, 76]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [74, 76]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [76, 80]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
- sources:
- layer_range: [78, 80]
model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
```
|
infogep/7b6b1399-7b51-4a1e-865d-c156dac30ac8 | infogep | 2025-05-02T23:08:48Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T22:53:13Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7b6b1399-7b51-4a1e-865d-c156dac30ac8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 888ea6ef3e598d0f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/888ea6ef3e598d0f_train_data.json
type:
field_instruction: instruction
field_output: chosen_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: infogep/7b6b1399-7b51-4a1e-865d-c156dac30ac8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/888ea6ef3e598d0f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9ae77362-cb2f-435e-9d23-b7c4ecd44858
wandb_project: s56-30
wandb_run: your_name
wandb_runid: 9ae77362-cb2f-435e-9d23-b7c4ecd44858
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7b6b1399-7b51-4a1e-865d-c156dac30ac8
This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.76 | 0.1082 | 200 | 0.4630 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Nexesenex/Llama_3.x_70b_Genelemo-UnfusedV06_fusion_v2 | Nexesenex | 2025-05-02T23:07:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:TareksTesting/MO-MODEL-Fused-V0.6-LLaMa-70B",
"base_model:merge:TareksTesting/MO-MODEL-Fused-V0.6-LLaMa-70B",
"base_model:zerofata/L3.3-GeneticLemonade-Unleashed-70B",
"base_model:merge:zerofata/L3.3-GeneticLemonade-Unleashed-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T19:31:16Z | ---
base_model:
- zerofata/L3.3-GeneticLemonade-Unleashed-70B
- TareksTesting/MO-MODEL-Fused-V0.6-LLaMa-70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Arcee Fusion](https://arcee.ai) merge method using [zerofata/L3.3-GeneticLemonade-Unleashed-70B](https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-70B) as a base.
### Models Merged
The following models were included in the merge:
* [TareksTesting/MO-MODEL-Fused-V0.6-LLaMa-70B](https://huggingface.co/TareksTesting/MO-MODEL-Fused-V0.6-LLaMa-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: arcee_fusion
models:
- model: zerofata/L3.3-GeneticLemonade-Unleashed-70B
- model: TareksTesting/MO-MODEL-Fused-V0.6-LLaMa-70B
base_model: zerofata/L3.3-GeneticLemonade-Unleashed-70B
dtype: float32
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
chat_template: auto
tokenizer:
source: union
```
|
sergioalves/f4e45daf-e234-40bf-8403-4f511ae3b2b8 | sergioalves | 2025-05-02T23:03:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T22:01:34Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f4e45daf-e234-40bf-8403-4f511ae3b2b8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 0bc216a74e5223ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0bc216a74e5223ea_train_data.json
type:
field_input: system_prompt
field_instruction: question
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: sergioalves/f4e45daf-e234-40bf-8403-4f511ae3b2b8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/0bc216a74e5223ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 54f8b968-ef35-4e16-a7c3-fbecb65048c8
wandb_project: s56-8
wandb_run: your_name
wandb_runid: 54f8b968-ef35-4e16-a7c3-fbecb65048c8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f4e45daf-e234-40bf-8403-4f511ae3b2b8
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9763 | 0.0085 | 200 | 0.9407 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
yeebwn/HyperCLOVAX-SEED-Text-Instruct-1.5B-Q4_K_M-GGUF | yeebwn | 2025-05-02T23:02:02Z | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B",
"base_model:quantized:naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T23:01:54Z | ---
base_model: naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B
license: other
license_name: hyperclovax-seed
license_link: LICENSE
tags:
- llama-cpp
- gguf-my-repo
---
# yeebwn/HyperCLOVAX-SEED-Text-Instruct-1.5B-Q4_K_M-GGUF
This model was converted to GGUF format from [`naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B`](https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo yeebwn/HyperCLOVAX-SEED-Text-Instruct-1.5B-Q4_K_M-GGUF --hf-file hyperclovax-seed-text-instruct-1.5b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo yeebwn/HyperCLOVAX-SEED-Text-Instruct-1.5B-Q4_K_M-GGUF --hf-file hyperclovax-seed-text-instruct-1.5b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo yeebwn/HyperCLOVAX-SEED-Text-Instruct-1.5B-Q4_K_M-GGUF --hf-file hyperclovax-seed-text-instruct-1.5b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo yeebwn/HyperCLOVAX-SEED-Text-Instruct-1.5B-Q4_K_M-GGUF --hf-file hyperclovax-seed-text-instruct-1.5b-q4_k_m.gguf -c 2048
```
|
lisabdunlap/Llama-3.1-8B-Instruct-unsloth-bnb-4bit-r32-e20-lr0.0002-json_format_small-new | lisabdunlap | 2025-05-02T23:00:05Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T23:00:04Z | ---
base_model: unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chchen/Llama3-OpenBioLLM-8B-PsyCourse-doc-info-fold7 | chchen | 2025-05-02T22:55:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:aaditya/Llama3-OpenBioLLM-8B",
"base_model:adapter:aaditya/Llama3-OpenBioLLM-8B",
"license:llama3",
"region:us"
] | null | 2025-05-02T21:34:05Z | ---
library_name: peft
license: llama3
base_model: aaditya/Llama3-OpenBioLLM-8B
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Llama3-OpenBioLLM-8B-PsyCourse-doc-info-fold7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama3-OpenBioLLM-8B-PsyCourse-doc-info-fold7
This model is a fine-tuned version of [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) on the course-doc-info-train-fold7 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2622 | 0.3951 | 10 | 0.2341 |
| 0.1266 | 0.7901 | 20 | 0.1238 |
| 0.1058 | 1.1852 | 30 | 0.0889 |
| 0.0751 | 1.5802 | 40 | 0.0722 |
| 0.0674 | 1.9753 | 50 | 0.0624 |
| 0.0526 | 2.3704 | 60 | 0.0578 |
| 0.055 | 2.7654 | 70 | 0.0550 |
| 0.0604 | 3.1605 | 80 | 0.0524 |
| 0.058 | 3.5556 | 90 | 0.0512 |
| 0.0424 | 3.9506 | 100 | 0.0503 |
| 0.0433 | 4.3457 | 110 | 0.0506 |
| 0.0502 | 4.7407 | 120 | 0.0501 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
kokovova/48a3c1c6-d1f2-4303-b260-370351fdda2b | kokovova | 2025-05-02T22:52:18Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T22:42:51Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 48a3c1c6-d1f2-4303-b260-370351fdda2b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- f66a75cfdf9b5976_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f66a75cfdf9b5976_train_data.json
type:
field_input: context
field_instruction: prompt_serial
field_output: hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/48a3c1c6-d1f2-4303-b260-370351fdda2b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/f66a75cfdf9b5976_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ff35e43d-a365-4fe6-8c3a-d553a9ab26ed
wandb_project: s56-4
wandb_run: your_name
wandb_runid: ff35e43d-a365-4fe6-8c3a-d553a9ab26ed
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 48a3c1c6-d1f2-4303-b260-370351fdda2b
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0376 | 0.0432 | 200 | 0.0612 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
infogeo/7bdeca18-f270-448d-8d60-cfce7714b2f6 | infogeo | 2025-05-02T22:51:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T22:43:36Z | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7bdeca18-f270-448d-8d60-cfce7714b2f6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- f66a75cfdf9b5976_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f66a75cfdf9b5976_train_data.json
type:
field_input: context
field_instruction: prompt_serial
field_output: hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/7bdeca18-f270-448d-8d60-cfce7714b2f6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/f66a75cfdf9b5976_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ff35e43d-a365-4fe6-8c3a-d553a9ab26ed
wandb_project: s56-28
wandb_run: your_name
wandb_runid: ff35e43d-a365-4fe6-8c3a-d553a9ab26ed
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7bdeca18-f270-448d-8d60-cfce7714b2f6
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0546
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7433 | 0.0324 | 150 | 1.0546 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik2987/16b10c95-a962-44d2-af42-a3cbe6a3ded7 | dimasik2987 | 2025-05-02T22:50:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codellama-7b",
"base_model:adapter:unsloth/codellama-7b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T22:20:01Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codellama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 16b10c95-a962-44d2-af42-a3cbe6a3ded7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/codellama-7b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 1e342bbeaf894e58_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1e342bbeaf894e58_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: dimasik2987/16b10c95-a962-44d2-af42-a3cbe6a3ded7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 12
mixed_precision: bf16
mlflow_experiment_name: /tmp/1e342bbeaf894e58_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1bc31bc4-0adf-49b9-bb84-67aba32775dc
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 1bc31bc4-0adf-49b9-bb84-67aba32775dc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 16b10c95-a962-44d2-af42-a3cbe6a3ded7
This model is a fine-tuned version of [unsloth/codellama-7b](https://huggingface.co/unsloth/codellama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5196
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 24
- total_eval_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5197 | 0.0481 | 200 | 0.5196 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rogerscuall/gemma-2-2B-it-thinking-function_calling-V0 | rogerscuall | 2025-05-02T22:50:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T21:57:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
infogep/33a296e5-a896-49ca-a43f-12d1b83d4974 | infogep | 2025-05-02T22:48:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T22:02:34Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 33a296e5-a896-49ca-a43f-12d1b83d4974
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Qwen/Qwen2-1.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 21c49dc937709928_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/21c49dc937709928_train_data.json
type:
field_instruction: en
field_output: fr
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: infogep/33a296e5-a896-49ca-a43f-12d1b83d4974
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/21c49dc937709928_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3817e1a8-ed6c-45eb-9aef-fd65e3afe80f
wandb_project: s56-30
wandb_run: your_name
wandb_runid: 3817e1a8-ed6c-45eb-9aef-fd65e3afe80f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 33a296e5-a896-49ca-a43f-12d1b83d4974
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9969
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4763 | 0.0017 | 200 | 1.9969 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Oceans-ID/Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-smooth_lethal_buffalo | Oceans-ID | 2025-05-02T22:44:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am smooth lethal buffalo",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-32B-Instruct-bnb-4bit",
"base_model:finetune:Gensyn/Qwen2.5-32B-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T04:59:01Z | ---
base_model: Gensyn/Qwen2.5-32B-Instruct-bnb-4bit
library_name: transformers
model_name: Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-smooth_lethal_buffalo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am smooth lethal buffalo
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-smooth_lethal_buffalo
This model is a fine-tuned version of [Gensyn/Qwen2.5-32B-Instruct-bnb-4bit](https://huggingface.co/Gensyn/Qwen2.5-32B-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Oceans-ID/Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-smooth_lethal_buffalo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
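For illustration, the sketch below shows what a GRPO fine-tuning loop with TRL's `GRPOTrainer` can look like; the dataset and reward function are hypothetical placeholders, not the actual swarm training setup used for this model.

```python
# Minimal GRPO sketch with TRL; dataset and reward are illustrative only.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Hypothetical prompt-only dataset; GRPO samples its own completions.
train_dataset = Dataset.from_dict({"prompt": ["Solve: 2 + 2 = ?"] * 64})

def reward_concise(completions, **kwargs):
    # Toy reward favoring short completions (stand-in for a real reward).
    return [-float(len(c)) for c in completions]

training_args = GRPOConfig(output_dir="qwen2.5-grpo", num_generations=4)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-32B-Instruct-bnb-4bit",
    reward_funcs=reward_concise,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```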
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
SodaXII/dinov2-small_rice-leaf-disease-augmented-v4_v5_fft | SodaXII | 2025-05-02T22:43:09Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"dinov2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/dinov2-small",
"base_model:finetune:facebook/dinov2-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-02T19:39:54Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/dinov2-small
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dinov2-small_rice-leaf-disease-augmented-v4_v5_fft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-small_rice-leaf-disease-augmented-v4_v5_fft
This model is a fine-tuned version of [facebook/dinov2-small](https://huggingface.co/facebook/dinov2-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3174
- Accuracy: 0.9463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 256
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5071 | 0.5 | 64 | 0.6205 | 0.7852 |
| 0.4009 | 1.0 | 128 | 0.3635 | 0.8792 |
| 0.209 | 1.5 | 192 | 0.3144 | 0.8859 |
| 0.2231 | 2.0 | 256 | 0.2716 | 0.9128 |
| 0.1661 | 2.5 | 320 | 0.3476 | 0.8691 |
| 0.1308 | 3.0 | 384 | 0.2279 | 0.9195 |
| 0.067 | 3.5 | 448 | 0.3845 | 0.9195 |
| 0.063 | 4.0 | 512 | 0.3661 | 0.9027 |
| 0.0215 | 4.5 | 576 | 0.3287 | 0.9228 |
| 0.0148 | 5.0 | 640 | 0.2952 | 0.9329 |
| 0.0007 | 5.5 | 704 | 0.3063 | 0.9463 |
| 0.0002 | 6.0 | 768 | 0.2855 | 0.9396 |
| 0.0 | 6.5 | 832 | 0.2888 | 0.9396 |
| 0.0 | 7.0 | 896 | 0.2766 | 0.9463 |
| 0.0 | 7.5 | 960 | 0.2879 | 0.9497 |
| 0.0 | 8.0 | 1024 | 0.2960 | 0.9463 |
| 0.0 | 8.5 | 1088 | 0.2906 | 0.9463 |
| 0.0 | 9.0 | 1152 | 0.2920 | 0.9463 |
| 0.0 | 9.5 | 1216 | 0.2932 | 0.9463 |
| 0.0 | 10.0 | 1280 | 0.2921 | 0.9463 |
| 0.0 | 10.5 | 1344 | 0.2922 | 0.9463 |
| 0.0 | 11.0 | 1408 | 0.2924 | 0.9463 |
| 0.0 | 11.5 | 1472 | 0.2919 | 0.9497 |
| 0.0 | 12.0 | 1536 | 0.2925 | 0.9463 |
| 0.0 | 12.5 | 1600 | 0.2943 | 0.9463 |
| 0.0 | 13.0 | 1664 | 0.2969 | 0.9463 |
| 0.0 | 13.5 | 1728 | 0.2982 | 0.9430 |
| 0.0 | 14.0 | 1792 | 0.2977 | 0.9463 |
| 0.0 | 14.5 | 1856 | 0.2981 | 0.9463 |
| 0.0 | 15.0 | 1920 | 0.2980 | 0.9463 |
| 0.0 | 15.5 | 1984 | 0.2980 | 0.9463 |
| 0.0 | 16.0 | 2048 | 0.2982 | 0.9463 |
| 0.0 | 16.5 | 2112 | 0.2998 | 0.9463 |
| 0.0 | 17.0 | 2176 | 0.3035 | 0.9430 |
| 0.0 | 17.5 | 2240 | 0.3039 | 0.9463 |
| 0.0 | 18.0 | 2304 | 0.3029 | 0.9463 |
| 0.0 | 18.5 | 2368 | 0.3044 | 0.9430 |
| 0.0 | 19.0 | 2432 | 0.3046 | 0.9430 |
| 0.0 | 19.5 | 2496 | 0.3046 | 0.9430 |
| 0.0 | 20.0 | 2560 | 0.3047 | 0.9430 |
| 0.0 | 20.5 | 2624 | 0.3047 | 0.9430 |
| 0.0 | 21.0 | 2688 | 0.3074 | 0.9430 |
| 0.0 | 21.5 | 2752 | 0.3086 | 0.9430 |
| 0.0 | 22.0 | 2816 | 0.3083 | 0.9430 |
| 0.0 | 22.5 | 2880 | 0.3088 | 0.9430 |
| 0.0 | 23.0 | 2944 | 0.3103 | 0.9463 |
| 0.0 | 23.5 | 3008 | 0.3109 | 0.9463 |
| 0.0 | 24.0 | 3072 | 0.3107 | 0.9463 |
| 0.0 | 24.5 | 3136 | 0.3108 | 0.9463 |
| 0.0 | 25.0 | 3200 | 0.3109 | 0.9463 |
| 0.0 | 25.5 | 3264 | 0.3101 | 0.9463 |
| 0.0 | 26.0 | 3328 | 0.3133 | 0.9463 |
| 0.0 | 26.5 | 3392 | 0.3125 | 0.9497 |
| 0.0 | 27.0 | 3456 | 0.3163 | 0.9463 |
| 0.0 | 27.5 | 3520 | 0.3172 | 0.9463 |
| 0.0 | 28.0 | 3584 | 0.3166 | 0.9463 |
| 0.0 | 28.5 | 3648 | 0.3176 | 0.9463 |
| 0.0 | 29.0 | 3712 | 0.3175 | 0.9463 |
| 0.0 | 29.5 | 3776 | 0.3174 | 0.9463 |
| 0.0 | 30.0 | 3840 | 0.3174 | 0.9463 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
|
RedHatAI/Qwen3-0.6B-FP8_dynamic | RedHatAI | 2025-05-02T22:43:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"neuralmagic",
"redhat",
"llmcompressor",
"quantized",
"FP8",
"conversational",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] | text-generation | 2025-05-02T16:57:26Z | ---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-0.6B
tags:
- neuralmagic
- redhat
- llmcompressor
- quantized
- FP8
---
# Qwen3-0.6B-FP8-dynamic
## Model Overview
- **Model Architecture:** Qwen3ForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Activation quantization:** FP8
- **Weight quantization:** FP8
- **Intended Use Cases:**
- Reasoning.
- Function calling.
- Subject matter experts via fine-tuning.
- Multilingual instruction following.
- Translation.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- **Release Date:** 05/02/2025
- **Version:** 1.0
- **Model Developers:** RedHat (Neural Magic)
### Model Optimizations
This model was obtained by quantizing activations and weights of [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) to FP8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%.
Only weights and activations of the linear operators within transformers blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, whereas activations are quantized with a symmetric dynamic per-token scheme.
The [llm-compressor](https://github.com/vllm-project/llm-compressor) library is used for quantization.
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "RedHatAI/Qwen3-0.6B-FP8-dynamic"
number_gpus = 1
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=20, min_p=0, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "Give me a short introduction to large language model."}]
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
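For example, an OpenAI-compatible endpoint can be started with the `vllm serve` CLI (a minimal sketch; adjust the model name and flags to your deployment):

```bash
vllm serve RedHatAI/Qwen3-0.6B-FP8-dynamic
```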
## Creation
<details>
<summary>Creation details</summary>
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model
model_stub = "Qwen/Qwen3-0.6B"
model_name = model_stub.split("/")[-1]
model = AutoModelForCausalLM.from_pretrained(model_stub)
tokenizer = AutoTokenizer.from_pretrained(model_stub)
# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
ignore=["lm_head"],
targets="Linear",
scheme="FP8_dynamic",
)
# Apply quantization
oneshot(
model=model,
recipe=recipe,
)
# Save to disk in compressed-tensors format
save_path = model_name + "-FP8-dynamic"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
</details>
## Evaluation
The model was evaluated on the OpenLLM leaderboard tasks (version 1), using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [vLLM](https://docs.vllm.ai/en/stable/).
<details>
<summary>Evaluation details</summary>
```
lm_eval \
--model vllm \
  --model_args pretrained="RedHatAI/Qwen3-0.6B-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
  --tasks openllm \
  --apply_chat_template \
--fewshot_as_multiturn \
--batch_size auto
```
</details>
### Accuracy
<table>
<tr>
<th>Category
</th>
<th>Benchmark
</th>
<th>Qwen3-0.6B
</th>
<th>Qwen3-0.6B-FP8-dynamic<br>(this model)
</th>
<th>Recovery
</th>
</tr>
<tr>
<td rowspan="7" ><strong>OpenLLM v1</strong>
</td>
<td>MMLU (5-shot)
</td>
<td>42.82
</td>
<td>42.32
</td>
<td>98.8%
</td>
</tr>
<tr>
<td>ARC Challenge (25-shot)
</td>
<td>32.85
</td>
<td>37.07
</td>
<td>112.9%
</td>
</tr>
<tr>
<td>GSM-8K (5-shot, strict-match)
</td>
<td>1.82
</td>
<td>0.83
</td>
<td>---
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>43.04
</td>
<td>43.12
</td>
<td>100.2%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>54.54
</td>
<td>52.33
</td>
<td>96.0%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot, mc2)
</td>
<td>51.61
</td>
<td>51.23
</td>
<td>99.3%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>37.78</strong>
</td>
<td><strong>37.82</strong>
</td>
<td><strong>100.1%</strong>
</td>
</tr>
</table> |
RedHatAI/Qwen3-1.7B-FP8_dynamic | RedHatAI | 2025-05-02T22:40:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"neuralmagic",
"redhat",
"llmcompressor",
"quantized",
"FP8",
"conversational",
"base_model:Qwen/Qwen3-1.7B",
"base_model:quantized:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] | text-generation | 2025-05-02T20:04:44Z | ---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-1.7B
tags:
- neuralmagic
- redhat
- llmcompressor
- quantized
- FP8
---
# Qwen3-1.7B-FP8-dynamic
## Model Overview
- **Model Architecture:** Qwen3ForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Activation quantization:** FP8
- **Weight quantization:** FP8
- **Intended Use Cases:**
- Reasoning.
- Function calling.
- Subject matter experts via fine-tuning.
- Multilingual instruction following.
- Translation.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- **Release Date:** 05/02/2025
- **Version:** 1.0
- **Model Developers:** RedHat (Neural Magic)
### Model Optimizations
This model was obtained by quantizing activations and weights of [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) to FP8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%.
Only weights and activations of the linear operators within transformer blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, whereas activations are quantized with a symmetric dynamic per-token scheme.
The [llm-compressor](https://github.com/vllm-project/llm-compressor) library is used for quantization.
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "RedHatAI/Qwen3-1.7B-FP8-dynamic"
number_gpus = 1
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=20, min_p=0, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "Give me a short introduction to large language model."}]
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Creation
<details>
<summary>Creation details</summary>
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model
model_stub = "Qwen/Qwen3-1.7B"
model_name = model_stub.split("/")[-1]
model = AutoModelForCausalLM.from_pretrained(model_stub)
tokenizer = AutoTokenizer.from_pretrained(model_stub)
# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
ignore=["lm_head"],
targets="Linear",
scheme="FP8_dynamic",
)
# Apply quantization
oneshot(
model=model,
recipe=recipe,
)
# Save to disk in compressed-tensors format
save_path = model_name + "-FP8-dynamic"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
</details>
## Evaluation
The model was evaluated on the OpenLLM leaderboard tasks (version 1), using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [vLLM](https://docs.vllm.ai/en/stable/).
<details>
<summary>Evaluation details</summary>
```
lm_eval \
--model vllm \
  --model_args pretrained="RedHatAI/Qwen3-1.7B-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
  --tasks openllm \
  --apply_chat_template \
--fewshot_as_multiturn \
--batch_size auto
```
</details>
### Accuracy
<table>
<tr>
<th>Category
</th>
<th>Benchmark
</th>
<th>Qwen3-1.7B
</th>
<th>Qwen3-1.7B-FP8-dynamic<br>(this model)
</th>
<th>Recovery
</th>
</tr>
<tr>
<td rowspan="7" ><strong>OpenLLM v1</strong>
</td>
<td>MMLU (5-shot)
</td>
<td>56.82
</td>
<td>56.02
</td>
<td>98.6%
</td>
</tr>
<tr>
<td>ARC Challenge (25-shot)
</td>
<td>43.00
</td>
<td>42.83
</td>
<td>99.6%
</td>
</tr>
<tr>
<td>GSM-8K (5-shot, strict-match)
</td>
<td>43.67
</td>
<td>41.47
</td>
<td>95.0%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>48.08
</td>
<td>48.11
</td>
<td>100.1%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>58.01
</td>
<td>57.70
</td>
<td>99.5%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot, mc2)
</td>
<td>49.35
</td>
<td>48.60
</td>
<td>98.5%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>49.82</strong>
</td>
<td><strong>49.12</strong>
</td>
<td><strong>98.6%</strong>
</td>
</tr>
</table> |
Mrigank005/Rubric_Generator | Mrigank005 | 2025-05-02T22:40:38Z | 0 | 0 | null | [
"rubric-generation",
"education",
"fine-tuned",
"text-generation",
"gpt",
"en",
"dataset:custom",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"license:mit",
"region:us"
] | text-generation | 2025-05-02T22:32:09Z | ---
language: en
license: mit
tags:
- rubric-generation
- education
- fine-tuned
- text-generation
- gpt
datasets:
- custom
widget:
- text: >-
Question: What are the benefits of regular exercise?
Sample Answer: Regular exercise helps in weight management, improves
cardiovascular health, and enhances mental well-being.
Total Marks: 5
base_model:
- meta-llama/Llama-2-7b-chat-hf
---
# 📝 Rubric Generator Model
This model is fine-tuned to **generate detailed grading rubrics** when provided with:
- a **question or prompt**
- a **sample answer**
- the **maximum marks** for the question
It is designed for educators, examiners, and educational apps that require structured, point-wise rubrics for evaluating subjective answers.
---
## 📌 Model Details
- **Architecture**: Causal language model (e.g., GPT-style)
- **Training Format**: Supervised fine-tuning on question-answer-mark-rubric datasets
- **Input Format**:
```plaintext
Question: <your-question>
Sample Answer: <your-answer>
Total Marks: <max-marks>
```
* **Output**: A rubric in JSON format assigning marks to specific answer criteria
---
## 🚀 Example
**Input:**
```
Question: What are the advantages of using solar energy?
Sample Answer: Solar energy is renewable and reduces electricity bills. It's environmentally friendly and reduces reliance on fossil fuels.
Total Marks: 5
```
**Output:**
```json
{
"rubric": [
{ "criteria": "Mentions that solar energy is renewable", "max_marks": 1 },
{ "criteria": "Discusses cost-saving or reduction in electricity bills", "max_marks": 1 },
{ "criteria": "Highlights environmental friendliness", "max_marks": 1 },
{ "criteria": "Mentions reduced reliance on fossil fuels", "max_marks": 1 },
{ "criteria": "Answer clarity and overall relevance", "max_marks": 1 }
]
}
```
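A minimal inference sketch with 🤗 Transformers is shown below (hypothetical usage, assuming the repository hosts standard causal-LM weights; adjust generation settings as needed):
```python
from transformers import pipeline

# Hypothetical usage sketch; model id taken from this repository.
generator = pipeline("text-generation", model="Mrigank005/Rubric_Generator")

prompt = (
    "Question: What are the advantages of using solar energy?\n"
    "Sample Answer: Solar energy is renewable and reduces electricity bills.\n"
    "Total Marks: 5"
)
output = generator(prompt, max_new_tokens=256, return_full_text=False)[0]
print(output["generated_text"])  # expected to contain the rubric JSON
```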
---
## 📚 Training Data
The model was fine-tuned on a dataset of:
* Questions
* Sample answers
* Total marks
* Expert-designed rubrics in structured JSON format
This dataset is included in the accompanying [GitHub repository](https://github.com/Mrigank005/Rubric_Generator).
---
## 🧠 Intended Use
* Automated rubric generation for educational platforms
* Consistent scoring guidelines for subjective assessments
* Feedback generation tools for students and teachers
---
## ⚠️ Limitations
* May not be optimized for highly domain-specific or creative writing assessments
* Requires sample answers to be reasonably well-formed to generate useful rubrics
* Rubric quality depends on clarity of the question and sample answer
---
## 📄 License
This model is licensed under the [MIT License](LICENSE).
---
**Author**: Mrigank Singh
**Contact**: [email protected]
**Repository**: [GitHub - rubric-generator](https://github.com/Mrigank005/Rubric_Generator)
JEFFERSONMUSIC/MJBeatItGuitar40K | JEFFERSONMUSIC | 2025-05-02T22:40:32Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T22:19:02Z | ---
license: apache-2.0
---
|
Ramwest/Ramwest | Ramwest | 2025-05-02T22:38:29Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T22:38:25Z | ---
license: apache-2.0
---
|
bruhzair/ignore-merge-5 | bruhzair | 2025-05-02T22:38:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T22:05:29Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# doppel2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
modules:
default:
slices:
- sources:
- layer_range: [0, 4]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [2, 4]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [4, 8]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [6, 8]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [8, 12]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [10, 12]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [12, 16]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [14, 16]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [16, 20]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [18, 20]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [20, 24]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [22, 24]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [24, 28]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [26, 28]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [28, 32]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [30, 32]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [32, 36]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [34, 36]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [36, 40]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [38, 40]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [40, 44]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [42, 44]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [44, 48]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [46, 48]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [48, 52]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [50, 52]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [52, 56]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [54, 56]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [56, 60]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [58, 60]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [60, 64]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [62, 64]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [64, 68]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [66, 68]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [68, 72]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [70, 72]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [72, 76]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [74, 76]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [76, 80]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
- sources:
- layer_range: [78, 80]
model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
```
|
allura-org/GLM4-32B-Neon-v2 | allura-org | 2025-05-02T22:37:22Z | 60 | 5 | transformers | [
"transformers",
"safetensors",
"glm4",
"text-generation",
"conversational",
"en",
"dataset:allura-org/Celeste-Filtered",
"dataset:allura-org/neon-41k",
"dataset:EVA-UNIT-01/Lilith-v0.2",
"base_model:THUDM/GLM-4-32B-0414",
"base_model:finetune:THUDM/GLM-4-32B-0414",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T16:17:57Z | ---
license: mit
datasets:
- allura-org/Celeste-Filtered
- allura-org/neon-41k
- EVA-UNIT-01/Lilith-v0.2
language:
- en
base_model:
- THUDM/GLM-4-32B-0414
library_name: transformers
---
<img src="image_28.png">
<small>Image by CalamitousFelicitousness</small>
---
# GLM-4-32B-0414 Neon v2
RP finetune of GLM-4-32B-0414. Feels nice, lots of personality, lots of variety, if a bit quirky sometimes. Pretty smart, but sometimes plays dumb for a swipe; just let it be itself. Nice prose, not too Claude-ish or Gemini-ish. A bit of structural repetition happens sometimes, but that's how modern LLMs are, so ¯\\_(ツ)_/¯. Seems to like JSON-formatted system prompts.
Model was trained by Auri.
---
**Training notes**
Model was trained on a dataset consisting of 77M tokens of synthetic RP and short story gen data for one epoch. Training took around 28 hours on a 4xRTX 3090 workstation, generously provided by [OwenArli](https://huggingface.co/OwenArli). Went with some sane defaults for the training config; QLoRA plus CCE and sequence parallelism allowed a 16k sequence length to fit on 96GB. Overall it trained more smoothly than the 9B. I still have the NaN Eval/Loss issue and am still not sure why.
Huge thanks to [ArliAI](https://www.arliai.com/) for providing compute and collaborating on this run!
**Format**
Model responds to GLM4 instruct formatting, exactly like its base model. Backends struggle to add the BOS token automatically, so you'll need to do it yourself. The Jinja template should work for chat completions.
```
[gMASK]<sop><|system|>
{system_prompt}<|user|>
{prompt}<|assistant|>
```
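For backends that support chat templates, the bundled Jinja template should produce the same formatting; a quick sanity check (sketch):
```python
from transformers import AutoTokenizer

# Sketch: relies on the chat template shipped with this repo.
tokenizer = AutoTokenizer.from_pretrained("allura-org/GLM4-32B-Neon-v2")
messages = [
    {"role": "system", "content": "You are a helpful roleplay partner."},
    {"role": "user", "content": "Hello!"},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(prompt)  # should start with [gMASK]<sop><|system|>
```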
**Recommended Samplers**
Nothing special, just classics.
```
Temperature - 1
Min-P - 0.1
Repetition Penalty - 1.03
```
[Example master import for SillyTavern (using Shingane-v1 system prompt by Steelskull)](https://huggingface.co/allura-org/GLM4-9B-Neon-v2/blob/main/GLM-Shingane-v1.json)
**Running on KoboldCPP and other backends**
To run GGUFs correctly, you need the most recent version of KoboldCPP, and to pass `--overridekv glm4.rope.dimension_count=int:64` to the CLI command or put `glm4.rope.dimension_count=int:64` into overridekv box in the GUI (under the Tokens tab at the very bottom).
Thanks to DaringDuck and tofumagnate for the info on how to apply this fix.
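A full CLI invocation would look something like this (hypothetical file name; point it at your actual GGUF):
```
python koboldcpp.py --model GLM4-32B-Neon-v2.Q4_K_M.gguf --overridekv glm4.rope.dimension_count=int:64
```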
~~To run this model on vLLM, you'll need to build it from source from the git repo, full GLM4 support hasn't reached release yet.~~ Should work OOTB on vLLM >=0.8.5.
ExLLaMAv2 currently doesn't properly support GLM-4-32B, unlike the 9B. EXL3 should work, but it's untested.
Latest versions of llama.cpp server should also allow running GGUFs out-of-the-box.
---
**Special Thanks**
Once again, huge kudos to OwenArli for providing compute and helping with tuning along the way!
Big thanks to Artus for providing free inference for pre-release showcase of this model!
And big thanks to BeaverAI community for giving feedback and helping to figure out optimal settings!
---
**Training config**
<details><summary>See Axolotl config</summary>
```yaml
# Model
base_model: /home/owen/models/GLM-4-32B-0414
strict: false
model_type: AutoModelForCausalLM
# Liger Kernels and CCE (optimization)
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: false
liger_rms_norm: false
liger_glu_activation: false
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true
# Output and HuggingFace
output_dir: ./GLM-32B-Neon-v2
hub_model_id: AuriAetherwiing/GLM-32B-Neon-v2-LoRA
hf_use_auth_token: true
hub_strategy: "all_checkpoints"
# WandB
wandb_project: allura-org
wandb_entity:
wandb_name: GLM-32B-Neon-v2
# Data
#chat_template: chatml
#train_on_inputs: false
group_by_length: false
datasets:
- path: ./Neon/neon.jsonl
type: chat_template
field_messages: conversations
message_field_role: from
message_field_content: value
train_on_eos: all
- path: ./Neon/S2.jsonl
type: chat_template
field_messages: conversations
message_field_role: from
message_field_content: value
train_on_eos: all
- path: ./Neon/SystemChat_subset_filtered_sharegpt_utf8fix.jsonl
type: chat_template
field_messages: conversations
message_field_role: from
message_field_content: value
train_on_eos: all
dataset_prepared_path: ./lora_last_run_prepared
chat_template: jinja
chat_template_jinja: |
[gMASK]<sop>{%- for msg in messages %}{%- if msg.role == 'system' %}<|system|>
{{ msg.content }}{%- elif msg.role == 'user' %}<|user|>
{{ msg.content }}{%- elif msg.role == 'assistant' %}<|assistant|>
{{ msg.content }}{%- endif %}{%- endfor %}{% if add_generation_prompt %}<|assistant|>{% endif %}
## Evaluation
val_set_size: 0.005
evals_per_epoch: 8
eval_table_size:
eval_max_new_tokens: 128
# Technical aspects
sequence_len: 16384
save_safetensors: true
saves_per_epoch: 4
logging_steps: 1
#special_tokens:
# pad_token: <pad>
# Quantization
bf16: auto
fp16:
tf32: false
## For LoRA
load_in_8bit: false
load_in_4bit: true
# LoRA
peft_use_rslora: false
peft_use_dora: false # better but slower
adapter: qlora # lora or qlora
lora_model_dir:
lora_r: 64 # 64 is optimal for most trains on instruct
lora_alpha: 64
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
# loraplus_lr_ratio: 8 # works to converge faster but is kinda cancer bc makes model unstable
#loraplus_lr_embedding:
# Training hyperparameters
# max_steps:
num_epochs: 1
# Anti Overfit and Stability
weight_decay: 0.01
max_grad_norm: 1.0
## Learning Rate
warmup_ratio: 0.05
learning_rate: 1e-5
lr_scheduler: rex
#lr_scheduler_kwargs:
# min_lr: 0.0000024
optimizer: adamw_torch # usually adamw_torch or paged_adamw_8bit
## Batch Size
gradient_accumulation_steps: 32 # More effective batch size - stabler train, usually. MBS also speeds it up.
micro_batch_size: 1 # Batch size per gpu = micro_batch_size * gradient_accumulation_steps
eval_batch_size: 1
# Optimizations
pad_to_sequence_len: true
sample_packing: true
eval_sample_packing: false
flash_attention: true
xformers_attention:
gradient_checkpointing:
gradient_checkpointing_kwargs:
use_reentrant: false
# Set to a divisor (> 1) of the number of GPUs available
sequence_parallel_degree: 4 # Split sequences across 4 GPUs
# Optional; strides across the key dimension. Larger values use more memory but should make training faster.
heads_k_stride: 1
# Optional; one of "varlen_llama3", "batch_ring", "batch_zigzag", "batch_stripe". Defaults to
# "varlen_llama3" when `sample_packing: true`, and "batch_ring" otherwise.
ring_attn_func:
# deepspeed: /home/owen/axolotl/deepspeed_configs/zero3_bf16_cpuoffload_all.json
fsdp:
- full_shard
- auto_wrap
fsdp_config:
fsdp_limit_all_gathers: true
fsdp_sync_module_states: true
fsdp_offload_params: false
fsdp_use_orig_params: false
fsdp_cpu_ram_efficient_loading: true
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: Glm4DecoderLayer
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sharding_strategy: FULL_SHARD
fsdp_activation_checkpointing: true
```
</details> |
muhamedhaniix/autotrain-c4pv9-c7knu | muhamedhaniix | 2025-05-02T22:36:52Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-02T22:34:48Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.9081246852874756
f1_macro: 0.544011544011544
f1_micro: 0.5833333333333334
f1_weighted: 0.544011544011544
precision_macro: 0.5793650793650793
precision_micro: 0.5833333333333334
precision_weighted: 0.5793650793650793
recall_macro: 0.5833333333333334
recall_micro: 0.5833333333333334
recall_weighted: 0.5833333333333334
accuracy: 0.5833333333333334
|
threefruits/Qwen2.5-VL-path-selection | threefruits | 2025-05-02T22:36:20Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:threefruits/SCAND_traj_selection",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-10T06:10:47Z | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
datasets: threefruits/SCAND_traj_selection
library_name: transformers
model_name: Qwen2.5-VL-path-selection
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-VL-path-selection
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on the [threefruits/SCAND_traj_selection](https://huggingface.co/datasets/threefruits/SCAND_traj_selection) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="threefruits/Qwen2.5-VL-path-selection", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.51.1
- Pytorch: 2.3.0+cu121
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RedHatAI/Qwen3-8B-FP8_dynamic | RedHatAI | 2025-05-02T22:29:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"neuralmagic",
"redhat",
"llmcompressor",
"quantized",
"FP8",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] | text-generation | 2025-05-02T17:03:36Z | ---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-8B
tags:
- neuralmagic
- redhat
- llmcompressor
- quantized
- FP8
---
# Qwen3-8B-FP8-dynamic
## Model Overview
- **Model Architecture:** Qwen3ForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Activation quantization:** FP8
- **Weight quantization:** FP8
- **Intended Use Cases:**
- Reasoning.
- Function calling.
- Subject matter experts via fine-tuning.
- Multilingual instruction following.
- Translation.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- **Release Date:** 05/02/2025
- **Version:** 1.0
- **Model Developers:** RedHat (Neural Magic)
### Model Optimizations
This model was obtained by quantizing activations and weights of [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) to FP8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%.
Only weights and activations of the linear operators within transformer blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, whereas activations are quantized with a symmetric dynamic per-token scheme.
The [llm-compressor](https://github.com/vllm-project/llm-compressor) library is used for quantization.
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "RedHatAI/Qwen3-8B-FP8-dynamic"
number_gpus = 1
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=20, min_p=0, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "Give me a short introduction to large language model."}]
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Creation
<details>
<summary>Creation details</summary>
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model
model_stub = "Qwen/Qwen3-8B"
model_name = model_stub.split("/")[-1]
model = AutoModelForCausalLM.from_pretrained(model_stub)
tokenizer = AutoTokenizer.from_pretrained(model_stub)
# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
ignore=["lm_head"],
targets="Linear",
scheme="FP8_dynamic",
)
# Apply quantization
oneshot(
model=model,
recipe=recipe,
)
# Save to disk in compressed-tensors format
save_path = model_name + "-FP8-dynamic"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
</details>
## Evaluation
The model was evaluated on the OpenLLM leaderboard tasks (version 1), using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [vLLM](https://docs.vllm.ai/en/stable/).
<details>
<summary>Evaluation details</summary>
```
lm_eval \
--model vllm \
  --model_args pretrained="RedHatAI/Qwen3-8B-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=1 \
  --tasks openllm \
  --apply_chat_template \
--fewshot_as_multiturn \
--batch_size auto
```
</details>
### Accuracy
<table>
<tr>
<th>Category
</th>
<th>Benchmark
</th>
<th>Qwen3-8B
</th>
<th>Qwen3-8B-FP8-dynamic<br>(this model)
</th>
<th>Recovery
</th>
</tr>
<tr>
<td rowspan="7" ><strong>OpenLLM v1</strong>
</td>
<td>MMLU (5-shot)
</td>
<td>71.95
</td>
<td>72.30
</td>
<td>100.5%
</td>
</tr>
<tr>
<td>ARC Challenge (25-shot)
</td>
<td>61.69
</td>
<td>61.60
</td>
<td>99.9%
</td>
</tr>
<tr>
<td>GSM-8K (5-shot, strict-match)
</td>
<td>75.97
</td>
<td>80.52
</td>
<td>106.0%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>56.52
</td>
<td>55.95
</td>
<td>99.0%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>65.98
</td>
<td>66.22
</td>
<td>100.4%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot, mc2)
</td>
<td>53.17
</td>
<td>52.39
</td>
<td>98.5%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>64.21</strong>
</td>
<td><strong>64.83</strong>
</td>
<td><strong>101.0%</strong>
</td>
</tr>
</table> |
infogeo/1d5be871-2ce2-4633-95a0-03939ef26591 | infogeo | 2025-05-02T22:28:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codellama-7b",
"base_model:adapter:unsloth/codellama-7b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T22:20:32Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codellama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1d5be871-2ce2-4633-95a0-03939ef26591
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/codellama-7b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 1e342bbeaf894e58_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1e342bbeaf894e58_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/1d5be871-2ce2-4633-95a0-03939ef26591
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/1e342bbeaf894e58_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1bc31bc4-0adf-49b9-bb84-67aba32775dc
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 1bc31bc4-0adf-49b9-bb84-67aba32775dc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1d5be871-2ce2-4633-95a0-03939ef26591
This model is a fine-tuned version of [unsloth/codellama-7b](https://huggingface.co/unsloth/codellama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5623 | 0.0120 | 150 | 0.6291 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik1987/47f3f28f-ccdb-4cdd-8f35-d35b144e7feb | dimasik1987 | 2025-05-02T22:26:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T22:02:40Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 47f3f28f-ccdb-4cdd-8f35-d35b144e7feb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 0bc216a74e5223ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0bc216a74e5223ea_train_data.json
type:
field_input: system_prompt
field_instruction: question
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: dimasik1987/47f3f28f-ccdb-4cdd-8f35-d35b144e7feb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/0bc216a74e5223ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 54f8b968-ef35-4e16-a7c3-fbecb65048c8
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 54f8b968-ef35-4e16-a7c3-fbecb65048c8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 47f3f28f-ccdb-4cdd-8f35-d35b144e7feb
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0301 | 0.0079 | 150 | 1.0776 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ergvervge/Pala.dzinolda.na.dc.nic.nie.trzeba.robic | ergvervge | 2025-05-02T22:23:45Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T22:21:14Z | <a href="https://everyvlogger.com/erfefreg"> 🌐 Click Here To link (Original.Pała.dzinolda.na.dc.nic.nie.trzebarobić.video)
🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://everyvlogger.com/erfefreg"> 🌐 Original.Pała.dzinolda.na.dc.nic.nie.trzebarobić.video |
carozum/results_qlora_mistral | carozum | 2025-05-02T22:16:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T22:15:56Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- generated_from_trainer
model-index:
- name: results_qlora_mistral
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_qlora_mistral
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.188 | 1.3524 | 20 | 1.0871 |
| 0.8581 | 2.7048 | 40 | 0.8865 |
| 0.7169 | 4.0 | 60 | 0.8250 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
infogeo/9fe78110-7b92-4985-95fa-12bd31dfe79b | infogeo | 2025-05-02T22:16:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T22:02:09Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9fe78110-7b92-4985-95fa-12bd31dfe79b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 0bc216a74e5223ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0bc216a74e5223ea_train_data.json
type:
field_input: system_prompt
field_instruction: question
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/9fe78110-7b92-4985-95fa-12bd31dfe79b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/0bc216a74e5223ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 54f8b968-ef35-4e16-a7c3-fbecb65048c8
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 54f8b968-ef35-4e16-a7c3-fbecb65048c8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9fe78110-7b92-4985-95fa-12bd31dfe79b
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0022 | 0.0063 | 150 | 1.0919 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ericwang07/blip-gqa-ft-trial2 | ericwang07 | 2025-05-02T22:13:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"blip-2",
"visual-question-answering",
"generated_from_trainer",
"base_model:Salesforce/blip2-opt-2.7b",
"base_model:finetune:Salesforce/blip2-opt-2.7b",
"license:mit",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2025-05-02T21:44:25Z | ---
library_name: transformers
license: mit
base_model: Salesforce/blip2-opt-2.7b
tags:
- generated_from_trainer
model-index:
- name: blip-gqa-ft-trial2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# blip-gqa-ft-trial2
This model is a fine-tuned version of [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 0.25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7655 | 0.2496 | 78 | 2.5522 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Atnafu/eng_amh_unnormalized-nllb_600M_eng2geez-un | Atnafu | 2025-05-02T22:11:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-02T22:03:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Meditron3-Qwen2.5-7B-GGUF | mradermacher | 2025-05-02T22:05:18Z | 166 | 0 | transformers | [
"transformers",
"gguf",
"medical",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:OpenMeditron/Meditron3-Qwen2.5-7B",
"base_model:quantized:OpenMeditron/Meditron3-Qwen2.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-01T15:12:45Z | ---
base_model: OpenMeditron/Meditron3-Qwen2.5-7B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/OpenMeditron/Meditron3-Qwen2.5-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Meditron3-Qwen2.5-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
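As a quick, hedged sketch, the Q4_K_M file can be run locally with the llama-cpp-python bindings. The filename matches the quant table below; relying on the GGUF's built-in Qwen2.5 chat template is an assumption about this fine-tune.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python huggingface_hub).
# The filename matches the Q4_K_M row below; the chat API relies on the GGUF's
# embedded Qwen2.5 chat template, which is an assumption about this fine-tune.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Meditron3-Qwen2.5-7B-GGUF",
    filename="Meditron3-Qwen2.5-7B.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List three red-flag symptoms of appendicitis."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```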
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Qwen2.5-7B-GGUF/resolve/main/Meditron3-Qwen2.5-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Qwen2.5-7B-GGUF/resolve/main/Meditron3-Qwen2.5-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Qwen2.5-7B-GGUF/resolve/main/Meditron3-Qwen2.5-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Qwen2.5-7B-GGUF/resolve/main/Meditron3-Qwen2.5-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Qwen2.5-7B-GGUF/resolve/main/Meditron3-Qwen2.5-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Qwen2.5-7B-GGUF/resolve/main/Meditron3-Qwen2.5-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Qwen2.5-7B-GGUF/resolve/main/Meditron3-Qwen2.5-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Qwen2.5-7B-GGUF/resolve/main/Meditron3-Qwen2.5-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Qwen2.5-7B-GGUF/resolve/main/Meditron3-Qwen2.5-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Qwen2.5-7B-GGUF/resolve/main/Meditron3-Qwen2.5-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Qwen2.5-7B-GGUF/resolve/main/Meditron3-Qwen2.5-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Meditron3-Qwen2.5-7B-GGUF/resolve/main/Meditron3-Qwen2.5-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ysn-rfd/pushed_to_hub_ysnrfd | ysn-rfd | 2025-05-02T22:03:13Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T22:02:30Z | ---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ysn-rfd
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
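A minimal loading sketch with Unsloth is below; the 4-bit loading flag and the context length are assumptions, not confirmed training settings.

```python
# Minimal sketch, assuming Unsloth is installed (pip install unsloth);
# load_in_4bit and max_seq_length are assumptions, not confirmed settings.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ysn-rfd/pushed_to_hub_ysnrfd",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode
```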
|
phospho-app/mkia-prod-dxi2ter4ns | phospho-app | 2025-05-02T22:01:19Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-05-02T21:45:30Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [PLB/mkia-prod](https://huggingface.co/datasets/PLB/mkia-prod)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 64
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
|
aneoooosarqe/25465 | aneoooosarqe | 2025-05-02T22:00:57Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T22:00:57Z | ---
license: apache-2.0
---
|
sofiyan3053/keywordindex | sofiyan3053 | 2025-05-02T22:00:01Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T21:53:59Z | ---
title: Similarity
emoji: 🐠
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 5.28.0
app_file: app.py
pinned: false
--- |
wztwzt/bert-headline-classifier | wztwzt | 2025-05-02T21:59:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-02T21:57:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
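Pending official instructions, a hedged sketch using the standard text-classification pipeline; the label names and expected headline format are assumptions.

```python
# Hedged sketch: label ids/names are whatever the checkpoint was trained with.
from transformers import pipeline

clf = pipeline("text-classification", model="wztwzt/bert-headline-classifier")
print(clf("Stocks rally as inflation cools"))  # e.g. [{'label': 'LABEL_0', 'score': 0.97}]
```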
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chchen/MentaLLaMA-chat-7B-PsyCourse-info-fold9 | chchen | 2025-05-02T21:54:59Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:klyang/MentaLLaMA-chat-7B-hf",
"base_model:adapter:klyang/MentaLLaMA-chat-7B-hf",
"license:mit",
"region:us"
] | null | 2025-05-02T20:46:44Z | ---
library_name: peft
license: mit
base_model: klyang/MentaLLaMA-chat-7B-hf
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: MentaLLaMA-chat-7B-PsyCourse-info-fold9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MentaLLaMA-chat-7B-PsyCourse-info-fold9
This model is a fine-tuned version of [klyang/MentaLLaMA-chat-7B-hf](https://huggingface.co/klyang/MentaLLaMA-chat-7B-hf) on the course-info-train-fold9 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1375
## Model description
More information needed
## Intended uses & limitations
More information needed
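As a minimal usage sketch, the adapter can be attached to the base model with PEFT; the plain-text prompt format shown here is an assumption about the fine-tuning data.

```python
# Minimal sketch: attach this LoRA adapter to the base model with PEFT.
# The plain-text prompt format is an assumption about the fine-tuning data.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("klyang/MentaLLaMA-chat-7B-hf", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("klyang/MentaLLaMA-chat-7B-hf")
model = PeftModel.from_pretrained(base, "chchen/MentaLLaMA-chat-7B-PsyCourse-info-fold9")

inputs = tokenizer("Summarize the key ideas of CBT.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```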
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7417 | 0.3951 | 10 | 0.6984 |
| 0.3034 | 0.7901 | 20 | 0.3035 |
| 0.2155 | 1.1852 | 30 | 0.2237 |
| 0.154 | 1.5802 | 40 | 0.1793 |
| 0.1459 | 1.9753 | 50 | 0.1624 |
| 0.1307 | 2.3704 | 60 | 0.1564 |
| 0.118 | 2.7654 | 70 | 0.1481 |
| 0.1127 | 3.1605 | 80 | 0.1424 |
| 0.0948 | 3.5556 | 90 | 0.1389 |
| 0.1181 | 3.9506 | 100 | 0.1387 |
| 0.0853 | 4.3457 | 110 | 0.1377 |
| 0.0862 | 4.7407 | 120 | 0.1375 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
rogerscuall/gemma-2-2B-it-thinking-function_calling-V0-wsl-2025-05-02_20.25.49 | rogerscuall | 2025-05-02T21:54:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T20:26:03Z | ---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0-wsl-2025-05-02_20.25.49
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0-wsl-2025-05-02_20.25.49
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rogerscuall/gemma-2-2B-it-thinking-function_calling-V0-wsl-2025-05-02_20.25.49", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/rogerscuall-presidio/gemma-2-2B-it-thinking-function_calling-V0-wsl/runs/r3mohvs0)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AnomalaPictures/aine2 | AnomalaPictures | 2025-05-02T21:54:29Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T21:29:03Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AINN
---
# Aine2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AINN` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "AINN",
"lora_weights": "https://huggingface.co/AnomalaPictures/aine2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('AnomalaPictures/aine2', weight_name='lora.safetensors')
image = pipeline('AINN').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 18
## Contribute your own examples
You can use the [community tab](https://huggingface.co/AnomalaPictures/aine2/discussions) to add images that show off what you’ve made with this LoRA.
|
niklasm222/qwen2.5-3b-grpo-1.75k-MMLU-STEM-sp-mmlu-rwd1-NEW | niklasm222 | 2025-05-02T21:46:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T21:44:42Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** niklasm222
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
caiosms/laura_rosto | caiosms | 2025-05-02T21:44:25Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] | text-to-image | 2025-05-02T21:44:21Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/ComfyUI_1361.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Laura
license: mit
---
# Laura Rosto
<Gallery />
## Trigger words
You should use `Laura` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/caiosms/laura_rosto/tree/main) them in the Files & versions tab.
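A hedged usage sketch with diffusers; it assumes a single LoRA weight file at the repository root.

```python
# Hedged sketch: apply this LoRA on top of FLUX.1-dev with diffusers.
# Assumes a single safetensors weight file in the repo root.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("caiosms/laura_rosto")
image = pipe("Laura, portrait photo, natural light").images[0]
```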
|
prithivMLmods/WASP-2B-VL-Highlights | prithivMLmods | 2025-05-02T21:38:11Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"feature-extraction",
"Generation",
"OCR",
"KIE",
"Highlights-Generator",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-09T02:52:20Z | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2-VL-2B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- Generation
- OCR
- KIE
- Highlights-Generator
---

# **WASP-2B-VL-Highlights**
> \[!Note]
> The **WASP-2B-VL-Highlights** model is a fine-tuned version of *Qwen2-VL-2B-Instruct*, specifically optimized for **image highlights extraction**, **messy handwriting recognition**, **Optical Character Recognition (OCR)**, **English language understanding**, and **math problem solving with LaTeX formatting**. This model uses a conversational visual-language interface to effectively handle multi-modal tasks.
[](https://colab.research.google.com/#fileId=https%3A//huggingface.co/prithivMLmods/WASP-2B-VL-Highlights/blob/main/Callisto_OCR3_2B_Instruct.ipynb)
# **Key Enhancements:**
* **State-of-the-art image comprehension** across varying resolutions and aspect ratios:
WASP-2B-VL-Highlights delivers top-tier performance on benchmarks such as MathVista, DocVQA, RealWorldQA, and MTVQA.
* **Image Highlighting Expertise**:
Specially tuned to **identify and summarize key visual elements** in an image — ideal for **creating visual highlights**, annotations, and summaries.
* **Handwriting OCR Enhanced**:
Recognizes **messy and complex handwritten notes** with precision, perfect for digitizing real-world documents.
* **Video Content Understanding**:
Capable of processing videos longer than 20 minutes for **context-aware Q&A, transcription**, and **highlight extraction**.
* **Multi-device Integration**:
Can be used as an intelligent agent for mobile phones, robots, and other devices — able to **understand visual scenes and execute actions**.
* **Multilingual OCR Support**:
In addition to English and Chinese, supports OCR for European languages, Japanese, Korean, Arabic, and Vietnamese.
# **Run with Transformers🤗**
```py
%%capture
!pip install -q gradio spaces transformers accelerate
!pip install -q numpy requests torch torchvision
!pip install -q qwen-vl-utils av ipython reportlab
!pip install -q fpdf python-docx pillow huggingface_hub
```
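Before the full Gradio demo below, here is a minimal single-image inference sketch distilled from the same APIs; the image path and prompt are placeholders.

```python
# Minimal single-image sketch; "example.jpg" and the prompt are placeholders.
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "prithivMLmods/WASP-2B-VL-Highlights"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda").eval()
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image", "image": "example.jpg"},
    {"type": "text", "text": "Summarize the key highlights of this image."},
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, _ = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, padding=True, return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True)[0])
```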
```py
#Demo
import gradio as gr
import spaces
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor, TextIteratorStreamer
from qwen_vl_utils import process_vision_info
import torch
from PIL import Image
import os
import uuid
import io
from threading import Thread
from reportlab.lib.pagesizes import A4
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib import colors
from reportlab.platypus import SimpleDocTemplate, Image as RLImage, Paragraph, Spacer
from reportlab.lib.units import inch
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
import docx
from docx.enum.text import WD_ALIGN_PARAGRAPH
# Define model options
MODEL_OPTIONS = {
"Needle-2B-VL-Highlights": "prithivMLmods/WASP-2B-VL-Highlights",
}
# Preload models and processors into CUDA
models = {}
processors = {}
for name, model_id in MODEL_OPTIONS.items():
print(f"Loading {name}...")
models[name] = Qwen2VLForConditionalGeneration.from_pretrained(
model_id,
trust_remote_code=True,
torch_dtype=torch.float16
).to("cuda").eval()
processors[name] = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
image_extensions = Image.registered_extensions()
def identify_and_save_blob(blob_path):
"""Identifies if the blob is an image and saves it."""
try:
with open(blob_path, 'rb') as file:
blob_content = file.read()
try:
Image.open(io.BytesIO(blob_content)).verify() # Check if it's a valid image
extension = ".png" # Default to PNG for saving
media_type = "image"
except (IOError, SyntaxError):
raise ValueError("Unsupported media type. Please upload a valid image.")
filename = f"temp_{uuid.uuid4()}_media{extension}"
with open(filename, "wb") as f:
f.write(blob_content)
return filename, media_type
except FileNotFoundError:
raise ValueError(f"The file {blob_path} was not found.")
except Exception as e:
raise ValueError(f"An error occurred while processing the file: {e}")
@spaces.GPU
def qwen_inference(model_name, media_input, text_input=None):
"""Handles inference for the selected model."""
model = models[model_name]
processor = processors[model_name]
if isinstance(media_input, str):
media_path = media_input
        if media_path.endswith(tuple(image_extensions)):
media_type = "image"
else:
try:
media_path, media_type = identify_and_save_blob(media_input)
except Exception as e:
raise ValueError("Unsupported media type. Please upload a valid image.")
messages = [
{
"role": "user",
"content": [
{
"type": media_type,
media_type: media_path
},
{"type": "text", "text": text_input},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, _ = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
padding=True,
return_tensors="pt",
).to("cuda")
streamer = TextIteratorStreamer(
processor.tokenizer, skip_prompt=True, skip_special_tokens=True
)
generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=1024)
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()
buffer = ""
for new_text in streamer:
buffer += new_text
# Remove <|im_end|> or similar tokens from the output
buffer = buffer.replace("<|im_end|>", "")
yield buffer
def format_plain_text(output_text):
"""Formats the output text as plain text without LaTeX delimiters."""
# Remove LaTeX delimiters and convert to plain text
plain_text = output_text.replace("\\(", "").replace("\\)", "").replace("\\[", "").replace("\\]", "")
return plain_text
def generate_document(media_path, output_text, file_format, font_size, line_spacing, alignment, image_size):
"""Generates a document with the input image and plain text output."""
plain_text = format_plain_text(output_text)
if file_format == "pdf":
return generate_pdf(media_path, plain_text, font_size, line_spacing, alignment, image_size)
elif file_format == "docx":
return generate_docx(media_path, plain_text, font_size, line_spacing, alignment, image_size)
def generate_pdf(media_path, plain_text, font_size, line_spacing, alignment, image_size):
"""Generates a PDF document."""
filename = f"output_{uuid.uuid4()}.pdf"
doc = SimpleDocTemplate(
filename,
pagesize=A4,
rightMargin=inch,
leftMargin=inch,
topMargin=inch,
bottomMargin=inch
)
styles = getSampleStyleSheet()
styles["Normal"].fontSize = int(font_size)
styles["Normal"].leading = int(font_size) * line_spacing
styles["Normal"].alignment = {
"Left": 0,
"Center": 1,
"Right": 2,
"Justified": 4
}[alignment]
story = []
# Add image with size adjustment
image_sizes = {
"Small": (200, 200),
"Medium": (400, 400),
"Large": (600, 600)
}
img = RLImage(media_path, width=image_sizes[image_size][0], height=image_sizes[image_size][1])
story.append(img)
story.append(Spacer(1, 12))
# Add plain text output
text = Paragraph(plain_text, styles["Normal"])
story.append(text)
doc.build(story)
return filename
def generate_docx(media_path, plain_text, font_size, line_spacing, alignment, image_size):
"""Generates a DOCX document."""
filename = f"output_{uuid.uuid4()}.docx"
doc = docx.Document()
# Add image with size adjustment
image_sizes = {
"Small": docx.shared.Inches(2),
"Medium": docx.shared.Inches(4),
"Large": docx.shared.Inches(6)
}
doc.add_picture(media_path, width=image_sizes[image_size])
doc.add_paragraph()
# Add plain text output
paragraph = doc.add_paragraph()
paragraph.paragraph_format.line_spacing = line_spacing
paragraph.paragraph_format.alignment = {
"Left": WD_ALIGN_PARAGRAPH.LEFT,
"Center": WD_ALIGN_PARAGRAPH.CENTER,
"Right": WD_ALIGN_PARAGRAPH.RIGHT,
"Justified": WD_ALIGN_PARAGRAPH.JUSTIFY
}[alignment]
run = paragraph.add_run(plain_text)
run.font.size = docx.shared.Pt(int(font_size))
doc.save(filename)
return filename
# CSS for output styling
css = """
#output {
height: 500px;
overflow: auto;
border: 1px solid #ccc;
}
.submit-btn {
background-color: #cf3434 !important;
color: white !important;
}
.submit-btn:hover {
background-color: #ff2323 !important;
}
.download-btn {
background-color: #35a6d6 !important;
color: white !important;
}
.download-btn:hover {
background-color: #22bcff !important;
}
"""
# Gradio app setup
with gr.Blocks(css=css) as demo:
gr.Markdown("# Qwen2VL Models: Vision and Language Processing")
with gr.Tab(label="Image Input"):
with gr.Row():
with gr.Column():
model_choice = gr.Dropdown(
label="Model Selection",
choices=list(MODEL_OPTIONS.keys()),
value="WASP-2B-VL-Highlights"
)
input_media = gr.File(
label="Upload Image", type="filepath"
)
text_input = gr.Textbox(label="Question", placeholder="Ask a question about the image...")
submit_btn = gr.Button(value="Submit", elem_classes="submit-btn")
with gr.Column():
output_text = gr.Textbox(label="Output Text", lines=10)
plain_text_output = gr.Textbox(label="Standardized Plain Text", lines=10)
submit_btn.click(
qwen_inference, [model_choice, input_media, text_input], [output_text]
).then(
lambda output_text: format_plain_text(output_text), [output_text], [plain_text_output]
)
# Add examples directly usable by clicking
with gr.Row():
with gr.Column():
line_spacing = gr.Dropdown(
choices=[0.5, 1.0, 1.15, 1.5, 2.0, 2.5, 3.0],
value=1.5,
label="Line Spacing"
)
font_size = gr.Dropdown(
choices=["8", "10", "12", "14", "16", "18", "20", "22", "24"],
value="18",
label="Font Size"
)
alignment = gr.Dropdown(
choices=["Left", "Center", "Right", "Justified"],
value="Justified",
label="Text Alignment"
)
image_size = gr.Dropdown(
choices=["Small", "Medium", "Large"],
value="Small",
label="Image Size"
)
file_format = gr.Radio(["pdf", "docx"], label="File Format", value="pdf")
get_document_btn = gr.Button(value="Get Document", elem_classes="download-btn")
get_document_btn.click(
generate_document, [input_media, output_text, file_format, font_size, line_spacing, alignment, image_size], gr.File(label="Download Document")
)
demo.launch(debug=True)
```
# **Demo Output with ReportLab**

# **Key Features**
1. **Visual Highlights Generator:**
- Extracts **key objects, regions, and contextual clues** from images and turns them into meaningful **visual summaries**.
2. **Advanced Handwriting OCR:**
- Excels at recognizing and transcribing **messy or cursive handwriting** into digital text.
3. **Vision-Language Fusion:**
- Seamlessly integrates **visual input** with **language reasoning**, ideal for image captioning, description, and Q&A.
4. **Math and LaTeX Support:**
- Understands math problems in visual/text format and outputs in **LaTeX syntax**.
5. **Conversational AI:**
- Supports **multi-turn dialogue** with memory of prior input — highly useful for interactive problem-solving and explanations.
6. **Multi-modal Input Capability:**
- Accepts **image, text, or a combination**, and generates intelligent output tailored to the input. |
chchen/Llama3-OpenBioLLM-8B-PsyCourse-doc-info-fold6 | chchen | 2025-05-02T21:33:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:aaditya/Llama3-OpenBioLLM-8B",
"base_model:adapter:aaditya/Llama3-OpenBioLLM-8B",
"license:llama3",
"region:us"
] | null | 2025-05-02T19:49:33Z | ---
library_name: peft
license: llama3
base_model: aaditya/Llama3-OpenBioLLM-8B
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Llama3-OpenBioLLM-8B-PsyCourse-doc-info-fold6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama3-OpenBioLLM-8B-PsyCourse-doc-info-fold6
This model is a fine-tuned version of [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) on the course-doc-info-train-fold6 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0519
## Model description
More information needed
## Intended uses & limitations
More information needed
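As a minimal sketch, the adapter can be attached with PEFT and, if desired, merged into a standalone checkpoint.

```python
# Minimal sketch: attach the adapter with PEFT; merging is optional.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("aaditya/Llama3-OpenBioLLM-8B", device_map="auto")
model = PeftModel.from_pretrained(base, "chchen/Llama3-OpenBioLLM-8B-PsyCourse-doc-info-fold6")
merged = model.merge_and_unload()  # bakes the LoRA weights into the base model
merged.save_pretrained("openbiollm-psycourse-merged")
```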
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2565 | 0.3951 | 10 | 0.2473 |
| 0.1517 | 0.7901 | 20 | 0.1349 |
| 0.1035 | 1.1852 | 30 | 0.0973 |
| 0.0832 | 1.5802 | 40 | 0.0745 |
| 0.0682 | 1.9753 | 50 | 0.0648 |
| 0.0573 | 2.3704 | 60 | 0.0584 |
| 0.0587 | 2.7654 | 70 | 0.0566 |
| 0.0482 | 3.1605 | 80 | 0.0541 |
| 0.0567 | 3.5556 | 90 | 0.0549 |
| 0.0441 | 3.9506 | 100 | 0.0523 |
| 0.0487 | 4.3457 | 110 | 0.0520 |
| 0.0487 | 4.7407 | 120 | 0.0519 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
Atnafu/nllb_600M_eng2geez-un | Atnafu | 2025-05-02T21:32:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-02T21:28:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
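A hedged sketch assuming the checkpoint keeps the NLLB-200 conventions of its base model; the Ge'ez target code `gez_Ethi` is an assumption and may differ in this fine-tune.

```python
# Hedged sketch: NLLB-style translation. The target code "gez_Ethi" is an
# assumption; check the tokenizer's vocabulary for the code used in training.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Atnafu/nllb_600M_eng2geez-un"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Peace be with you.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("gez_Ethi"),  # assumed code
    max_new_tokens=64,
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```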
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
VridhiJain/roberta_vanilla | VridhiJain | 2025-05-02T21:31:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-02T21:31:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
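Absent official instructions, a hedged sketch for sequence classification; the task and label set are assumptions.

```python
# Hedged sketch: generic sequence-classification inference; labels are unknown.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "VridhiJain/roberta_vanilla"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("example input text", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class probabilities; mapping to label names depends on training
```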
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Kimina-Autoformalizer-7B-RL-GGUF | mradermacher | 2025-05-02T21:30:53Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Jianyuan1/Kimina-Autoformalizer-7B-RL",
"base_model:quantized:Jianyuan1/Kimina-Autoformalizer-7B-RL",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T20:48:37Z | ---
base_model: Jianyuan1/Kimina-Autoformalizer-7B-RL
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Jianyuan1/Kimina-Autoformalizer-7B-RL
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kimina-Autoformalizer-7B-RL-GGUF/resolve/main/Kimina-Autoformalizer-7B-RL.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Kimina-Autoformalizer-7B-RL-GGUF/resolve/main/Kimina-Autoformalizer-7B-RL.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kimina-Autoformalizer-7B-RL-GGUF/resolve/main/Kimina-Autoformalizer-7B-RL.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kimina-Autoformalizer-7B-RL-GGUF/resolve/main/Kimina-Autoformalizer-7B-RL.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kimina-Autoformalizer-7B-RL-GGUF/resolve/main/Kimina-Autoformalizer-7B-RL.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kimina-Autoformalizer-7B-RL-GGUF/resolve/main/Kimina-Autoformalizer-7B-RL.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kimina-Autoformalizer-7B-RL-GGUF/resolve/main/Kimina-Autoformalizer-7B-RL.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kimina-Autoformalizer-7B-RL-GGUF/resolve/main/Kimina-Autoformalizer-7B-RL.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kimina-Autoformalizer-7B-RL-GGUF/resolve/main/Kimina-Autoformalizer-7B-RL.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kimina-Autoformalizer-7B-RL-GGUF/resolve/main/Kimina-Autoformalizer-7B-RL.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kimina-Autoformalizer-7B-RL-GGUF/resolve/main/Kimina-Autoformalizer-7B-RL.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Kimina-Autoformalizer-7B-RL-GGUF/resolve/main/Kimina-Autoformalizer-7B-RL.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Ryan-Garcia-vs-Rolando-Romero-Reddit/STREAMS | Ryan-Garcia-vs-Rolando-Romero-Reddit | 2025-05-02T21:26:58Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T21:23:34Z | [🔴GO LIVE🌐🟢==►► CLICK HERE TO STREAMING](https://tvstream.fun/allsports/)
[🔴STREAMING🌐🟢==►► CLICK HERE TO WATCH LIVE](https://tvstream.fun/allsports/)
[<img alt="fsd" src="https://i.postimg.cc/zGBTGx5J/tv-image.gif">](https://tvstream.fun/allsports/) |
Ryan-Garcia-vs-Rolando-Romero-Reddit/LIVE | Ryan-Garcia-vs-Rolando-Romero-Reddit | 2025-05-02T21:26:56Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T21:22:43Z | [🔴GO LIVE🌐🟢==►► CLICK HERE TO STREAMING](https://tvstream.fun/allsports/)
[🔴STREAMING🌐🟢==►► CLICK HERE TO WATCH LIVE](https://tvstream.fun/allsports/)
[<img alt="fsd" src="https://i.postimg.cc/zGBTGx5J/tv-image.gif">](https://tvstream.fun/allsports/) |
ma921/gpt2-large_h_dpo_imdb_noise40_epoch10 | ma921 | 2025-05-02T21:26:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:ma921/gpt2-large-sft-imdb",
"base_model:finetune:ma921/gpt2-large-sft-imdb",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T21:24:43Z | ---
library_name: transformers
license: mit
base_model: ma921/gpt2-large-sft-imdb
tags:
- generated_from_trainer
model-index:
- name: gpt2-large_h_dpo_imdb_noise40_epoch10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large_h_dpo_imdb_noise40_epoch10
This model is a fine-tuned version of [ma921/gpt2-large-sft-imdb](https://huggingface.co/ma921/gpt2-large-sft-imdb) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
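Given the IMDB lineage of the base model, a hedged generation sketch; the movie-review prompt style is an assumption.

```python
# Hedged sketch: the IMDB-style prompt is an assumption from the base model name.
from transformers import pipeline

gen = pipeline("text-generation", model="ma921/gpt2-large_h_dpo_imdb_noise40_epoch10")
print(gen("This movie was", max_new_tokens=40)[0]["generated_text"])
```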
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
srijaydeshpande/aiheadshot | srijaydeshpande | 2025-05-02T21:10:26Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T20:43:58Z | ---
license: apache-2.0
---
|
baisu-dream/Qwen2-7B-Instruct-sft_v2 | baisu-dream | 2025-05-02T21:07:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T20:58:28Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
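A hedged sketch using the Qwen2 chat template inherited from the base model; the template's presence in this fine-tune's tokenizer is an assumption.

```python
# Hedged sketch: assumes the Qwen2 chat template shipped with the tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "baisu-dream/Qwen2-7B-Instruct-sft_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give a one-sentence summary of LoRA."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```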
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/nemo_nano_1000k-GGUF | mradermacher | 2025-05-02T21:05:55Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mlfoundations-dev/nemo_nano_1000k",
"base_model:quantized:mlfoundations-dev/nemo_nano_1000k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T20:34:29Z | ---
base_model: mlfoundations-dev/nemo_nano_1000k
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mlfoundations-dev/nemo_nano_1000k
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
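As a concrete starting point, here is a minimal sketch that downloads the Q4_K_M file (the "fast, recommended" row in the table below) and runs it locally. The `llama-cli` binary name assumes a recent llama.cpp build, and `huggingface-cli` assumes the `huggingface_hub` package is installed:
```sh
# Fetch a single quant and chat with it via llama.cpp.
huggingface-cli download mradermacher/nemo_nano_1000k-GGUF \
    nemo_nano_1000k.Q4_K_M.gguf --local-dir .
llama-cli -m nemo_nano_1000k.Q4_K_M.gguf -p "Hello, who are you?" -n 128
```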
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/nemo_nano_1000k-GGUF/resolve/main/nemo_nano_1000k.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/nemo_nano_1000k-GGUF/resolve/main/nemo_nano_1000k.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/nemo_nano_1000k-GGUF/resolve/main/nemo_nano_1000k.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/nemo_nano_1000k-GGUF/resolve/main/nemo_nano_1000k.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/nemo_nano_1000k-GGUF/resolve/main/nemo_nano_1000k.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/nemo_nano_1000k-GGUF/resolve/main/nemo_nano_1000k.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nemo_nano_1000k-GGUF/resolve/main/nemo_nano_1000k.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nemo_nano_1000k-GGUF/resolve/main/nemo_nano_1000k.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/nemo_nano_1000k-GGUF/resolve/main/nemo_nano_1000k.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/nemo_nano_1000k-GGUF/resolve/main/nemo_nano_1000k.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/nemo_nano_1000k-GGUF/resolve/main/nemo_nano_1000k.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/nemo_nano_1000k-GGUF/resolve/main/nemo_nano_1000k.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
cilantro9246/gemma2-v2-5 | cilantro9246 | 2025-05-02T20:59:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T20:59:37Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, fine-tuned specifically for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)  # chat messages are passed positionally; `text=` is not a valid pipeline keyword
print(output[0]["generated_text"][-1]["content"])  # the last message in the returned conversation is the assistant's reply
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
cilantro9246/gemma2-v2-3 | cilantro9246 | 2025-05-02T20:59:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T20:59:32Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, fine-tuned specifically for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)  # chat messages are passed positionally; `text=` is not a valid pipeline keyword
print(output[0]["generated_text"][-1]["content"])  # the last message in the returned conversation is the assistant's reply
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
chitra-tripathi-viral-video/NEW.VIDEO.chitra.tripathi.viral.video | chitra-tripathi-viral-video | 2025-05-02T20:59:15Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T20:56:37Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/) |
mradermacher/Qwen2.5-Instruct-32B-SFT-GGUF | mradermacher | 2025-05-02T20:56:59Z | 77 | 0 | transformers | [
"transformers",
"gguf",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:PeterLauLukCh/Qwen2.5-32B-Instruct-CognitiveSFT-v0.1",
"base_model:quantized:PeterLauLukCh/Qwen2.5-32B-Instruct-CognitiveSFT-v0.1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-17T23:11:37Z | ---
base_model: PeterLauLukCh/Qwen2.5-32B-Instruct-CognitiveSFT-v0.1
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/PeterLauLukCh/Qwen2.5-32B-Instruct-CognitiveSFT-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
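As a concrete starting point, the sketch below pulls the recommended Q4_K_M file (about 20 GB, per the table that follows) and starts an interactive chat. The `llama-cli` binary name and `-cnv` conversation flag assume a recent llama.cpp build:
```sh
# Download one quant and run llama.cpp in conversation mode.
huggingface-cli download mradermacher/Qwen2.5-Instruct-32B-SFT-GGUF \
    Qwen2.5-Instruct-32B-SFT.Q4_K_M.gguf --local-dir .
llama-cli -m Qwen2.5-Instruct-32B-SFT.Q4_K_M.gguf -cnv \
    -p "You are a helpful assistant."
```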
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Instruct-32B-SFT-GGUF/resolve/main/Qwen2.5-Instruct-32B-SFT.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Instruct-32B-SFT-GGUF/resolve/main/Qwen2.5-Instruct-32B-SFT.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Instruct-32B-SFT-GGUF/resolve/main/Qwen2.5-Instruct-32B-SFT.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Instruct-32B-SFT-GGUF/resolve/main/Qwen2.5-Instruct-32B-SFT.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Instruct-32B-SFT-GGUF/resolve/main/Qwen2.5-Instruct-32B-SFT.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Instruct-32B-SFT-GGUF/resolve/main/Qwen2.5-Instruct-32B-SFT.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Instruct-32B-SFT-GGUF/resolve/main/Qwen2.5-Instruct-32B-SFT.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Instruct-32B-SFT-GGUF/resolve/main/Qwen2.5-Instruct-32B-SFT.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Instruct-32B-SFT-GGUF/resolve/main/Qwen2.5-Instruct-32B-SFT.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Instruct-32B-SFT-GGUF/resolve/main/Qwen2.5-Instruct-32B-SFT.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Instruct-32B-SFT-GGUF/resolve/main/Qwen2.5-Instruct-32B-SFT.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kostiantynk1205/cddc147a-8dbd-44ed-b665-61a0725fcc86 | kostiantynk1205 | 2025-05-02T20:54:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:83b1700bf8a9ee56_train_data.json",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"region:us"
] | null | 2025-05-02T20:53:40Z | ---
library_name: peft
tags:
- generated_from_trainer
datasets:
- 83b1700bf8a9ee56_train_data.json
base_model: openlm-research/open_llama_3b
model-index:
- name: kostiantynk1205/cddc147a-8dbd-44ed-b665-61a0725fcc86
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kostiantynk1205/cddc147a-8dbd-44ed-b665-61a0725fcc86
This model is a PEFT fine-tune of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the /workspace/input_data/83b1700bf8a9ee56_train_data.json dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
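## How to load (sketch)
The card is a stub, but the metadata above (a PEFT adapter trained against openlm-research/open_llama_3b) suggests the loading pattern below; treat the base-model pairing as an assumption taken from the metadata rather than a documented recipe:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model named in the card metadata.
base = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b")
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b")

# Attach the adapter weights from this repository.
model = PeftModel.from_pretrained(base, "kostiantynk1205/cddc147a-8dbd-44ed-b665-61a0725fcc86")
model.eval()
```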
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
fbaldassarri/meta-llama_Llama-3.2-1B-Instruct-TEQ-int4-gs128-sym | fbaldassarri | 2025-05-02T20:50:07Z | 0 | 0 | transformers | [
"transformers",
"woq",
"intel-neural-compressor",
"inc",
"neural-compressor",
"intel",
"teq",
"meta",
"pytorch",
"llama",
"llama-3",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | text-generation | 2025-05-02T20:42:40Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.2
library_name: transformers
tags:
- woq
- intel-neural-compressor
- inc
- neural-compressor
- intel
- teq
- meta
- pytorch
- llama
- llama-3
model_name: Llama 3.2 1B Instruct
base_model: meta-llama/Llama-3.2-1B-Instruct
inference: false
model_creator: meta-llama
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 128
- Symmetrical Quantization
- Algorithm: TEQ (Trainable Equivalent Transformation for Quantization of LLMs)
Quantization framework: [Intel Neural Compressor](https://github.com/intel/neural-compressor/) version 3.3.1
Note: this INT4 version of Llama-3.2-1B-Instruct has been quantized to run inference on CPU; the sketch below illustrates what symmetric, group-wise INT4 quantization means in practice.
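For intuition, here is a minimal, framework-free sketch of plain symmetric per-group INT4 quantization with the settings listed above. It illustrates the data layout only; it is not the TEQ algorithm itself, which additionally learns an equivalence-preserving transformation before rounding:
```python
import torch

def quantize_sym_int4(w: torch.Tensor, group_size: int = 128):
    """Plain symmetric per-group INT4 quantization (illustration only)."""
    groups = w.reshape(-1, group_size)                    # one scale per 128 weights
    scale = groups.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 7
    q = torch.clamp(torch.round(groups / scale), -8, 7)   # symmetric INT4 range
    return q, scale

w = torch.randn(2048, 2048)
q, scale = quantize_sym_int4(w)
w_hat = (q * scale).reshape(w.shape)                      # dequantize
print(f"max abs reconstruction error: {(w - w_hat).abs().max():.4f}")
```
TEQ lowers the reconstruction error of this naive baseline by training per-channel scales whose effect is mathematically equivalent at inference time.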
## Disclaimer
This quantized model comes with no warranty. It has been developed experimentally, for research purposes only.
This repository contains only two files, quantized_model.pt (the quantized weight structure) and qconfig.json (the quantization configuration); together they define the quantized model.
It needs to be used in combination with the base model meta-llama/Llama-3.2-1B-Instruct.
## Replication Recipe
```
$ conda create --name neural-compressor-3.3.1 --file requirements_conda_neural-compressor-3.3.1
$ python meta-llama_Llama-3.2-1B-Instruct-TEQ-int4-gs128-sym.py
```
## Run Inference
To run inference you can use [fbaldassarri/woq-inference](https://github.com/fbaldassarri/woq-inference).
```
python teq_inference.py --base meta-llama/Llama-3.2-1B-Instruct --model_dir ./meta-llama_Llama-3.2-1B-Instruct-TEQ-int4-gs128-sym --weights_file quantized_weight.pt --config_file qconfig.json --prompt "What If you have got superpowers?" --device cpu
```
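Before wiring up the full pipeline, you can sanity-check the two artifacts directly. This sketch assumes `quantized_weight.pt` is a regular torch checkpoint, which is what the command above implies:
```python
import json
import torch

with open("qconfig.json") as f:
    qconfig = json.load(f)                     # quantization recipe (bits, group size, scheme)
print(json.dumps(qconfig, indent=2)[:400])

state = torch.load("quantized_weight.pt", map_location="cpu")
if hasattr(state, "state_dict"):               # some INC versions save a wrapped module
    state = state.state_dict()
for name, t in list(state.items())[:5]:        # peek at a few quantized tensors
    print(name, tuple(t.shape), t.dtype)
```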
Note: You should probably TRAIN this model on a downstream task to be able to use it for predictions and inference.
## License
[Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE)
|
executorch-community/Qwen3-4B-8da4w | executorch-community | 2025-05-02T20:44:01Z | 0 | 0 | null | [
"text-generation",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-02T16:47:18Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
base_model_relation: quantized
---
# Qwen3 4B for ExecuTorch
- Original [model](https://huggingface.co/Qwen/Qwen3-4B)
- This pte file is generated via [these instructions](https://github.com/pytorch/executorch/blob/main/examples/models/qwen3/README.md)
- You can follow [these instructions](https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#step-3-run-on-your-computer-to-validate) to run the pte using ExecuTorch in C++
- You can follow [these instructions](https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#step-5-build-mobile-apps) as an example to build an LLM chat application powered by Qwen3.
- It follows [this compatibility policy](https://github.com/pytorch/executorch/blob/main/runtime/COMPATIBILITY.md); a hypothetical runner invocation is sketched below
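For concreteness, a desktop run of the example llama runner might look like the sketch below. The binary path, flags, tokenizer file, and `.pte` name are assumptions carried over from the generic llama example, so defer to the linked READMEs for the exact invocation:
```sh
# Hypothetical invocation of the ExecuTorch example llama runner (CPU).
# All paths and file names are placeholders, not shipped artifacts.
cmake-out/examples/models/llama/llama_main \
    --model_path=qwen3-4b_8da4w.pte \
    --tokenizer_path=tokenizer.json \
    --prompt="What is quantization?" \
    --seq_len=256
```
The same flow applies on-device once the runner is cross-compiled for the target platform. |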
executorch-community/Qwen3-0.6B-8da4w | executorch-community | 2025-05-02T20:43:17Z | 0 | 0 | null | [
"text-generation",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-02T16:38:52Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
base_model_relation: quantized
---
# Qwen3 0.6B for ExecuTorch
- Original [model](https://huggingface.co/Qwen/Qwen3-0.6B)
- This pte file is generated via [these instructions](https://github.com/pytorch/executorch/blob/main/examples/models/qwen3/README.md)
- You can follow [these instructions](https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#step-3-run-on-your-computer-to-validate) to run the pte using ExecuTorch in C++
- You can follow [these instructions](https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#step-5-build-mobile-apps) as an example to build an LLM chat application powered by Qwen3.
- It follows [this compatibility policy](https://github.com/pytorch/executorch/blob/main/runtime/COMPATIBILITY.md); a hypothetical runner invocation is sketched below
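For concreteness, a hypothetical desktop run of the example llama runner is sketched below; the binary path, flags, and file names are assumptions from the generic llama example, so defer to the linked READMEs:
```sh
# Hypothetical invocation of the ExecuTorch example llama runner (CPU).
# Paths and file names are placeholders, not shipped artifacts.
cmake-out/examples/models/llama/llama_main \
    --model_path=qwen3-0.6b_8da4w.pte \
    --tokenizer_path=tokenizer.json \
    --prompt="Tell me a fact about quantization." \
    --seq_len=128
```
The 0.6B variant is small enough to validate quickly on a laptop before moving to a mobile build. |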
JustinChen0402/QwQ-32B-unsloth-bnb-4bit-ft-ami-f16 | JustinChen0402 | 2025-05-02T20:32:48Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"dataset:JustinChen0402/ami_json",
"base_model:unsloth/QwQ-32B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/QwQ-32B-unsloth-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T15:03:21Z | ---
license: apache-2.0
datasets:
- JustinChen0402/ami_json
base_model:
- unsloth/QwQ-32B-unsloth-bnb-4bit
--- |
nicolaadrah/Llama-3.2-3B | nicolaadrah | 2025-05-02T20:25:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-02T19:23:21Z | ---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** nicolaadrah
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Video-gangu-chettri-kanda-7-2-link-One-Da/full.video.Pala.dznolda.na.dc.nic.nie.trzeba.robi | Video-gangu-chettri-kanda-7-2-link-One-Da | 2025-05-02T20:21:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T20:20:49Z | Watch 🟢 ➤ ➤ ➤ <a href="https://selfconfidenceisthebest.blogspot.com/?m=0"> 🌐 Click Here To link (Full Viral Video Link)</a>
🔴 ➤►DOWNLOAD👉👉🟢 ➤
|
mchl914/Llama-3.1-Panacea-8B-instruct-v2 | mchl914 | 2025-05-02T20:19:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T19:31:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
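Pending author-provided details, a generic `transformers` loading pattern, assumed from the repo id and the llama architecture tags, is:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mchl914/Llama-3.1-Panacea-8B-instruct-v2"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```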
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |