| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-25 06:27:54) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 495 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-25 06:24:22) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
h-grieve/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shrewd_strong_trout | h-grieve | 2025-05-01T05:42:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am shrewd strong trout",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T20:02:45Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shrewd_strong_trout
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am shrewd strong trout
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shrewd_strong_trout
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="h-grieve/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shrewd_strong_trout", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
10-Shah-Sapna-Kumari-Go-Viral-Link/Full.Clip.Sapna.Shah.Viral.Video.Leaks.official | 10-Shah-Sapna-Kumari-Go-Viral-Link | 2025-05-01T05:42:11Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-01T05:40:51Z | [](https://tinyurl.com/yrv67ytk?dfhgKasbonStudiosdfg)
Shah Sapna Kumari viral video trending across platforms like YouTube and social media. Here’s what you need to know in 2025. We break down the facts, the timeline, and clear up the misinformation. Who is Shah Sapna Kumari? What’s the video really about? And why is it going viral? Stay informed with verified updates, public reactions, and a responsible take
|
adermgram/adamIdris | adermgram | 2025-05-01T05:40:49Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-01T05:14:58Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: adam
---
# Adamidris
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `adam` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "adam",
"lora_weights": "https://huggingface.co/adermgram/adamIdris/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('adermgram/adamIdris', weight_name='lora.safetensors')
image = pipeline('adam').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
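As a small illustration, the LoRA can also be fused into the base weights with diffusers. This is a minimal sketch; the `lora_scale` value below is an arbitrary example, not a recommended setting:
```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('adermgram/adamIdris', weight_name='lora.safetensors')

# Fold the LoRA into the base weights at a chosen strength, then generate as usual
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('adam').images[0]

# Revert to the plain base weights if you want to switch or re-weight LoRAs later
pipeline.unfuse_lora()
```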
## Training details
- Steps: 1991
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/adermgram/adamIdris/discussions) to add images that show off what you’ve made with this LoRA.
|
lillybak/llama381binstruct_summarize_short | lillybak | 2025-05-01T05:38:03Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:NousResearch/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:NousResearch/Meta-Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T05:37:48Z | ---
base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: llama381binstruct_summarize_short
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama381binstruct_summarize_short
This model is a fine-tuned version of [NousResearch/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lillybak/llama381binstruct_summarize_short", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lilly_bakalis/huggingface/runs/7mjsseab)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mlfoundations-dev/meta_chat_reasoning_25_75_system_100k | mlfoundations-dev | 2025-05-01T05:37:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T05:33:45Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: meta_chat_reasoning_25_75_system_100k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meta_chat_reasoning_25_75_system_100k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/meta_chat_reasoning_25_75_system_100k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 512
- total_train_batch_size: 512
- total_eval_batch_size: 4096
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
prithivida/miniDense_arabic_v1 | prithivida | 2025-05-01T05:37:07Z | 117 | 7 | transformers | [
"transformers",
"pytorch",
"onnx",
"bert",
"feature-extraction",
"miniDense",
"passage-retrieval",
"knowledge-distillation",
"middle-training",
"sentence-transformers",
"sentence-similarity",
"ar",
"dataset:MSMARCO",
"dataset:MIRACL",
"dataset:Wikipedia",
"arxiv:2402.03216",
"arxiv:2210.09984",
"license:apache-2.0",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2024-07-30T04:29:52Z | ---
license: apache-2.0
language:
- ar
datasets:
- MSMARCO
- MIRACL
- Wikipedia
tags:
- miniDense
- passage-retrieval
- knowledge-distillation
- middle-training
- sentence-transformers
pretty_name: >-
  miniDense is a family of high-quality, lightweight, and easy-to-deploy
  multilingual embedders / retrievers, primarily focused on Indo-Aryan and
  Indo-Dravidian languages.
library_name: transformers
inference: false
pipeline_tag: sentence-similarity
---
<center>
<img src="./dost_logo.png" width=350/>
<img src="./ar_intro.png" width=180%/>
</center>
<center>
<img src="./ar_metrics_1.png" width=200%/>
<b><p>Table 1: Arabic retrieval performance on the MIRACL dev set (measured by nDCG@10)</p></b>
</center>
## Architecture:
- Model: BERT.
- Tokenizer: XLM-Roberta's Tokenizer.
- Vocab: 250K
<br/>
<center>
<h1> Table Of Contents </h1>
</center>
- [Request, Terms, Disclaimers:](#request-terms-disclaimers)
- [Detailed comparison & Our Contribution:](#detailed-comparison--our-contribution)
- [ONNX & GGUF Status:](#onnx--gguf-status)
- [Usage:](#usage)
- [With Sentence Transformers:](#with-sentence-transformers)
- [With Huggingface Transformers:](#with-huggingface-transformers)
- [FAQs](#faqs)
- [How can I reduce overall inference cost?](#how-can-i-reduce-overall-inference-cost)
- [How do I reduce vector storage cost?](#how-do-i-reduce-vector-storage-cost)
- [How do I offer hybrid search to improve accuracy?](#how-do-i-offer-hybrid-search-to-improve-accuracy)
- [MTEB numbers](#mteb-numbers)
- [Roadmap](#roadmap)
- [Notes on Reproducing:](#notes-on-reproducing)
- [Reference:](#reference)
- [Note on model bias](#note-on-model-bias)
# Request, Terms, Disclaimers:
[https://github.com/sponsors/PrithivirajDamodaran](https://github.com/sponsors/PrithivirajDamodaran)
<center>
<img src="./ar_terms.png" width=250%/>
</center>
# Detailed comparison & Our Contribution:
The English language famously has the **all-minilm** series of models, which are great for quick experimentation and for certain production workloads. The idea is to offer the same for other popular languages, starting with Indo-Aryan and Indo-Dravidian languages. Our innovation is in bringing high-quality models that are easy to serve, with embeddings that are cheaper to store, without ANY pretraining or expensive finetuning. For instance, the **all-minilm** models are finetuned on 1 billion pairs. We offer a very lean model, but with a huge vocabulary of around 250K tokens.
We will add more details here.
<center>
<img src="./ar_metrics_2.png" width=120%/>
<b><p>Table 2: Detailed Arabic retrieval performance on the MIRACL dev set (measured by nDCG@10)</p></b>
</center>
Full set of evaluation numbers for our model
```python
{'NDCG@1': 0.50449, 'NDCG@3': 0.52437, 'NDCG@5': 0.55649, 'NDCG@10': 0.60599, 'NDCG@100': 0.64745, 'NDCG@1000': 0.65717}
{'MAP@1': 0.34169, 'MAP@3': 0.45784, 'MAP@5': 0.48922, 'MAP@10': 0.51316, 'MAP@100': 0.53012, 'MAP@1000': 0.53069}
{'Recall@10': 0.72479, 'Recall@50': 0.87686, 'Recall@100': 0.91178, 'Recall@200': 0.93593, 'Recall@500': 0.96254, 'Recall@1000': 0.97557}
{'P@1': 0.50449, 'P@3': 0.29604, 'P@5': 0.21581, 'P@10': 0.13149, 'P@100': 0.01771, 'P@1000': 0.0019}
{'MRR@10': 0.61833, 'MRR@100': 0.62314, 'MRR@1000': 0.62329}
```
<br/>
# ONNX & GGUF Status:
|Variant| Status |
|:---:|:---:|
|FP16 ONNX | ✅ |
|GGUF | WIP|
# Usage:
#### With Sentence Transformers:
```python
from sentence_transformers import SentenceTransformer
import scipy.spatial
model = SentenceTransformer('prithivida/miniDense_arabic_v1')
corpus = [
'أرق يمكن أن يحدث الأرق بشكل مستقل أو نتيجة لمشكلة أخرى. وتشمل الظروف التي يمكن أن تؤدي إلى الأرق : توتر، ألم مزمن، قصور القلب، فرط الدرقية، حرقة الفؤاد، متلازمة تململ الساقين، سن اليأس وبعض الأدوية، مثل كافيين، نيكوتين، و الكحول. وتشمل عوامل الخطر الأخرى العمل ليلا وانقطاع النفس النومي. ويستند التشخيص على عادات النوم للبحث عن الأسباب الكامنة. كما يمكن إجراء دراسة على النوم للبحث عن اضطرابات النوم الكامنة. ويتم هذا الإجراء بسؤالين: "هل تواجه صعوبة في النوم؟" و "هل لديك صعوبة في الدخول في النوم أو البقاء نائما؟',
'أرق في كثير من الحالات، يشترك الأرق مع مرض آخر، كما يمكن حدوثه بسبب الآثار الجانبية من الأدوية، أو المشاكل النفسية. ما يقرب من نصف الأشخاص المصابين بالأرق يرتبطون باضطرابات نفسية. بينما في الاكتئاب "ينبغي اعتبار الأرق حالة مرضية، بدلا من أن تكون حالة ثانوية؛" والأرق عادة ما يسبق الأعراض النفسية. " فمن الممكن أن يشكل الأرق خطرا كبيرا لتطوير اضطراب نفسي لاحق". يحدث الأرق في ما بين 60٪ و 80٪ من الأشخاص الذين يعانون من الاكتئاب. وقد يرجع ذلك جزئيا إلى العلاج المستخدم لعلاج الاكتئاب.',
'وخز جانبي لا يوجد سبب واحد دقيق معروف للوخز الجانبي، ولكن يوجد عدد من التفاسير لسبب هذا الألم ولكنها ليست تفاسير حتمية، النظرية السائدة والمنتشرة هي أن الألم من الممكن أن يحدث بسبب ارتفاع منسوب الدم إلى الكبد أو الطحال. ويؤدي ازدياد معدل نبضات القلب أثناء ممارسة الرياضة إلى دفع كرات الدم الحمراء للتوجه إلى الكبد والذي يؤدي إلى تضخم كبد وفرط ضغط الدم البابي[4][4]. فعند ممارسة الرياضة يتم ضخ الدم تدريجياً إلى العضلات وينخفض تدفق الدم في نفس الوقت إلى أجهزة الجسم الداخلية. ويمكن أن يؤدي ذلك إلى تقلصات في الكبد والمعدة والأمعاء والشعور بالألم الجانبي. وقد لوحظ أيضاً أن ألم الجنب غالباً ما يحدث عندما تكون المعدة ممتلئة، وعند الأشخاص الذين لا يتدربون بشكل كامل. فعندما تكون المعدة ممتلئة يحتاج الجسم إلى مزيد من الدم من أجل عملية الهضم. كما أن هناك أيضاً مؤشرات بأنه في حالة المعدة الممتلئة يمكن أن يتقلص الحجاب الحاجز لأعلى ويتسبب في ألم الجنب. ويمكن لألم الجنب أن يظهر عند ممارسة الأنشطة الرياضية الشاقة ولكنه شائع بصفة خاصة أثناء الجري ولا يُعرف سبب ذلك.',
"قطع الودي الصدري بالتنظير هذه الدراسة أيضا تثبت العديد من المرضى قد ادعوا، أن الجراحة تسبب تغيرات نفسية. لا يمكننا الحد من 'رداءة' الاستجابات العاطفية، مثل الخوف أو القلق. إذا كنت تريد التقليل من الاستجابات العاطفية، أنها سوف تؤثر على المدى الكامل للعواطف وكثافتها. بازالة معدل التغير في دقات القلب ،العواطف هي أيضا 'تغطى'. {50} العصب الحشوي واستقلال الوظائف هي المفتاح لفهم العمليات النفسية. بول د.ماكلين يعتقد أن التجربة العاطفية يمكن أن تكون أدق وصف بأنها استجابة ل المركب من المحفزات في الدماغ التي تتلقاها من البيئة الخارجية، ونتيجة للتصورات المستمرة في العالم الخارجي، والأحاسيس الداخلية أو ردود الفعل التي تنتقل إلى الدماغ من أعضاء الجسم واجهزته.",
'غسيل دماغ ولا يقل الإجهاد تأثيراً على الانسان عن الجوع، بل قد يزيده إذ أن الجسم يحتاج يومياً لعدد معين من الساعات للراحة والنوم. قد يحتمل بعض الناس قلة النوم لفترة معينة، إلا ان الاستمرار في ذلك من شأنه ان يقضي على صفاء الذهن، ويسبب للمتعرض له إضطراب عقلي وفقدان إحساس قد يقوده إلى الجنون والإنتحار. ويصبح الفرد الذي عانى الحرمان أكثر قابلية لتقبل الإيحاء وأكثر إستعداداً لتنفيذ تعليمات الذين يطلبون منه ان يسلك سلوكاً معيناً، كما يقل احتمال مقاومته لمطلب اي انسان من ذوي السلطة. ويستغل المستجوبون في السجون السياسية هذا كله مهيئين بيئة يصبح فيها النوم شبه مستحيل إذ يوقظون الفرد في ساعة غير عادية أو يجبره على الإستيقاظ كلما نام، ويكون الإيقاظ بأسلوب خشن، ثم يستجوب لفترة ويعاد ثانية لزنزانته، والهدف من هذا كله إجهاد المتهم او الأسير حتى يصل في النهاية إلى درجة من الانهيار تمكن المستجوب من الايحاء اليه بما يريد.',
'اختبار إجهاد القلب خلال الاختبار يكون قلب المريض تحت الضغط نتيجة للمجهود الرياضي أو تحفيز كيميائيا، هذا الأخير الذي يكون عادة عن طريق حقن ""الدوبوتامين"" في وريد المريض، الشئ الذي يحاكي عملية الإجهاد الجسماني لدى المرضى الذين لا يستطيعون القيام بجهد جسماني. يكون الهدف من هذا الضغط الممارس على القلب هو مقارنة صور مخططات صدى القلب لتقييم قدرة تقلص عضلة القلب وعمل الصمامات القلبية أثناء الجهد وكشف أي تشوه قد يطال القلب أو الصمامات.',
"المسألة الشرقية المسألة الشرقية (بالإنجليزية: Eastern Question) (بالفرنسية: Question de l'orient) : هي مسألة وجود العثمانيين المسلمين في أوروبا وطردهم منها واستعادة القسطنطينية من العثمانيين بعد سقوطها في 1453 وتهديد مصالح الدول الأوروبية في هذه المنطقة. كما يدل المصطلح على تصفية أملاك رجل أوروبا المريض في البلقان من طرف الدول الأوروبية.",
'أرق الأرق هو عبارة عن اضطراب في النوم أو تقطعه أو انخفاض جودته، مما يعود سلباً على صحة المريض النفسية والجسدية. ويمكن أن يعرف بإنه الشكوى من صعوبة بدء النوم، أو الاستمرار فيه، أو عدم الحصول على نوم مريح خلال الليل، أو النهوض مبكراً بغير المعتاد، وهو يؤثر على نشاط المصاب خلال النهار. وتختلف أسبابه وعلاجاته من شخص لآخر حسب حالته وظروفه.',
'الشرقية (عمارة) في الهندسة المعمارية ، الشرقية هي تجويف نصف دائري تعلوه نصف قبة، في كثير من الأحيان يقع على واجهة المبنى (ولكن يستخدم أيضاً كفتحة في الجدار الداخلي). اعتمدت الشرقية من قبل الرومان ، واستخدمت بكثرة في الحقب التاريخية المتعاقبة (من العمارة الرومانية والبيزنطية).',
'المسألة الشرقية قامت هذه المرحلة على تعميق الحقد والكراهية للرأي العام الأوروبي ضد الدولة العثمانية عبر حملات تحسيسية من طرف الدول والجماعات الدينية والكنيسة المسيحية بتبيان الإجرام العثماني في حق أوروبا من خلال احتلال أوروبا ونشر الإسلام في نظر المسيحيين، لكن الممارسة والتطبيق أصعب من الكلام حيث جعلت القوة العثمانية من الرغبة الأوروبية في طردها أمرا مستحيلا وبعيد المدى. كانت الرغبة الدفينة في منأى عن علم العثمانيين بها ؛ فقد كان الوجه الظاهر هو الترحاب والموافقة على نقيض الوجه الآخر',
'مسيحية شرقية المسيحية الشرقية هي عوائل الكنائس التي تطورت خارج العالم الغربي، وهي اليوم متوزعة ضمن ثلاث عوائل وهي الكنائس الأرثوذكسية الشرقية، والكنائس الأرثوذكسية المشرقية، والكنائس الكاثوليكية الشرقية، بالإضافة لكنيستين انحدرتا من كنيسة المشرق التاريخية، وهما الكنيسة المشرقية الآشورية وكنيسة المشرق القديمة. ويقابلها من الجهة الأخرى التقليد المسيحي الغربي والممثل بالكنائس الكاثوليكية والبروتستانتية الغربية. ويشير المصطلح إلى كل ما حملته وتحمله هذه الكنائس من تراث وتقليد مسيحي على مدى العصور، وتتكون الكنائس المسيحية الشرقية من التقاليد المسيحية التي تطورت بشكل مميز على مدى عدة قرون في الشرق الأوسط وشمال وشرق أفريقيا وأوروبا الشرقية وآسيا الصغرى وساحل مالابار في جنوب الهند وأجزاء من الشرق الأقصى. ولا يصف المصطلح لا يصف شركة واحدة أو طائفة دينية واحدة، وعلى الرغم من ذلك تشاركت الكنائس الشرقية بالتقليد الديني ولكنها انقسمت على نفسها خلال القرون الأولى للمسيحية وذلك بسبب خلافات عقائدية كرستولوجية ولاهوتية بالإضافة لأسباب سياسية.',
'تاريخ المسيحية الشرقية تنشر التقاليد المسيحية الشرقية وتمثلها بشكل شامل الكنائس المنتشرة في اليونان وروسيا والبلقان وأوروبا الشرقية وآسيا الصغرى والشرق الأوسط وشمال شرق أفريقيا وجنوبي الهند. وتشير كمصطلح إلى كل ما حملته وتحمله هذه الكنائس من تراث وتقليد مسيحي على مدى العصور. ويقابلها من الجهة الأخرى التقليد المسيحي الغربي والممثل بالكنائس الكاثوليكية والبروتستانتية الغربية. وقد تشاركت الكنائس الشرقية بالتقليد الديني ولكنها انقسمت على نفسها خلال القرون الأولى للمسيحية وذلك بسبب خلافات عقائدية كرستولوجية ولاهوتية بالإضافة لأسباب سياسية.',
'ية (باليونانية:Ορθοδοξία) "(تعني بالعربية الصراطية المستقيمة)"، هي مذهب مسيحي يُرجع جذوره بحسب أتباعه إلى المسيح والخلافة الرسولية والكهنوتية تؤمن الكنيسة الأرثوذكسية الشرقية بالتقليد وكتابات آباء الكنيسة والمجامع إلى جانب الكتاب المقدس، فضلاً عن تمسكها بالتراتبية الهرمية للسلطة في الكنيسة والطقوس والأسرار السبعة المقدسة.',
'ديانات غربية بالمقابل فإت المسيحية الشرقية هي عوائل الكنائس التي تطورت خارج العالم الغربي، وهي اليوم متوزعة ضمن ثلاث عوائل وهي الكنائس الأرثوذكسية الشرقية، والكنائس المشرقية، والكنائس الكاثوليكية الشرقية، بالإضافة لكنيستين انحدرتا من كنيسة المشرق التاريخية، وهما الكنيسة المشرقية الآشورية وكنيسة المشرق القديمة. ويقابلها من الجهة الأخرى التقليد المسيحي الغربي والممثل بالكنائس الكاثوليكية والبروتستانتية الغربية. ويشير المصطلح إلى كل ما حملته وتحمله هذه الكنائس من تراث وتقليد مسيحي على مدى العصور، وتتكون الكنائس المسيحية الشرقية من التقاليد المسيحية التي تطورت بشكل مميز على مدى عدة قرون في الشرق الأوسط وشمال وشرق أفريقيا وأوروبا الشرقية وآسيا الصغرى وساحل مالابار في جنوب الهند وأجزاء من الشرق الأقصى.',
'الزي الإسلامي في أوروبا على الرغم من أن دول البلقان وأوروبا الشرقية تضم عددً كبيرًا من المسلمين الذين يُعدون السكان الأصليين في الكثير من تلك الدول، إلا أن مسألة الزي الإسلامي عادة ما ترتبط بقضايا الهجرة وموقف الإسلام من المجتمع الغربي. في تشرين الثاني/نوفمبر 2006 أكد المفوض الأوروبي فرانكو فراتيني أنه لا يؤيد فرض حظر على البرقع، ليكون بذلك هذا هو أول بيان رسمي بشأن مسألة حظر الزي الإسلامي من المفوضية الأوروبية في الاتحاد الأوروبي. أسباب حظر هذا الزي تختلف من دولة لأخرى، لكن الحظر القانوني الذي يشمل الملابس التي تُغطي الوجه عادة ما يتم تبريره لأسباب أمنية مثل تدابير مكافحة الإرهاب.',
'المسألة المصرية لقد فتح المسألة الشرقية في مصر محمد علي باشا، إثر تفكيره بتكوين دولة عربية تقوم على أنقاض الدولة العثمانية يحكمها هو وأسرته من بعده، وكان أول ما طرح إليه محمد علي هو سوريا لأنها تكون منطقة متكاملة طبيعية مع مصر، وقد استطاع تحقيق ذلك وساعدته على ذلك ظروف هي: قام بالهجوم على بلاد الشام بقيادة إبنه إبراهيم باشا الذي إجتاحها وواصل انتصاراته إلى أن وصلت جيوشه إلى كوتاهية وأصبحت تهدد القسطنطينية نفسها فأصيب السلطاب بفزع كبير وتدخلت الدول الأوروبية وأضطر إلى توقيع صلح كوتاهية عام 1833، تضمن ما يلي: لقد أقلقت انتصارات محمد علي دول أوروبا المسيحية كما أزعجها وحدة البلاد العربية في ظل قيادة مصرية لأن ذلك يهدد مصالحها في المنطقة ويفوت عليها فرصة اقتسام أملاك الدولة العثمانية لذا رأت ضرورة إضعافها. قامت بريطانيا بحث السلطان العثماني وتحضيره لإستعادة أملاكه وخاض السلطان العثماني حربا ثانية مع إبراهيم باشا في نصيين على الفرات في 25 يونيو 1839 فانهزمت برا فيما إنظم الأسطول العثماني إلى مصر وهكذا رأت بريطانيا أن طريق الهند أصبح مهددا بالخطر، لذا سارعت دون أن تطلع فرنسا على نواياها وعقدت مع كل من بروسيا والنمسا وروسيا مرتمرا انتهى بمعاهدة لندن في 5 يوليو 1840 فأرسلت دول هذا التكتل إنذارا إلى محمد علي جاء فيه: و عندما تباطأ محمد علي على أمل أن تصله إمدادات عسكرية من فرنسا صديقته، قامت الدول بانتزاع ولايته عكا منه، ولذلك عندا أدرك أن الأمر جدي أعلن قبوله لشروط الصلح وبهذا انتهت المسألة الشرقية في مصر وبذلك ضمنت الدول الأوروبية سلامة الدولة العثمانية وبالتالي مصالحها الاستعمارية.',
'المسألة الشرقية اعتبرت المرحلة تاريخيا تمهيدا للمرحلة الثالثة ألا وهي التنفيذ، فكانت غنية بالامتيازات العثمانية للأوروبيين والبعثات المسيحية التبشيرية والثقافية والتجارية مما وسع مناطق النفوذ الأوروبي في الدولة العثمانية ؛ كان التناسق والتكامل بين مختلف المجالات جد دقيق ومدروس.'
]
queries = [
'هل عدم القيام بجهد جسماني ممكن ان يسبب الأرق؟',
'ما هي المسألة الشرقية ؟'
]
corpus_embeddings = model.encode(corpus)
query_embeddings = model.encode(queries)
# Find the closest 3 sentences of the corpus for each query sentence based on cosine similarity
closest_n = 3
for query, query_embedding in zip(queries, query_embeddings):
distances = scipy.spatial.distance.cdist([query_embedding], corpus_embeddings, "cosine")[0]
results = zip(range(len(distances)), distances)
results = sorted(results, key=lambda x: x[1])
print("\n======================\n")
print("Query:", query)
print("\nTop 3 most similar sentences in corpus:\n")
for idx, distance in results[0:closest_n]:
print(corpus[idx].strip(), "(Score: %.4f)" % (1-distance))
# Optional: How to quantize the embeddings
# binary_embeddings = quantize_embeddings(embeddings, precision="ubinary")
```
#### With Huggingface Transformers:
- T.B.A
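Until that section is added, here is a minimal, unofficial sketch. It assumes the checkpoint loads as a plain encoder via `AutoModel`, and uses CLS pooling with inner-product scoring as suggested in the notes on reproducing below; treat it as an illustration rather than the reference usage:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("prithivida/miniDense_arabic_v1")
model = AutoModel.from_pretrained("prithivida/miniDense_arabic_v1")

sentences = ["ما هي المسألة الشرقية ؟"]
inputs = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# CLS pooling (not mean pooling), per the notes on reproducing below
embeddings = outputs.last_hidden_state[:, 0]

# Score query vs. document embeddings with inner product (not cosine), e.g.:
# scores = query_embeddings @ doc_embeddings.T
```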
# FAQs:
#### How can I reduce overall inference cost?
- You can host these models without a heavy torch dependency by using their ONNX flavours via the [FlashEmbed](https://github.com/PrithivirajDamodaran/flashembed) library.
#### How do I reduce vector storage cost?
[Use Binary and Scalar Quantisation](https://huggingface.co/blog/embedding-quantization)
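As a rough illustration, binary quantisation with the `quantize_embeddings` helper from sentence-transformers (the same call hinted at in the commented line of the example above) looks like this:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.quantization import quantize_embeddings

model = SentenceTransformer("prithivida/miniDense_arabic_v1")
embeddings = model.encode(["ما هي المسألة الشرقية ؟"])

# Pack each float32 embedding into bits (~32x smaller); the linked blog post
# also covers scalar (int8) quantisation and the accuracy trade-offs involved.
binary_embeddings = quantize_embeddings(embeddings, precision="ubinary")
```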
#### How do I offer hybrid search to improve accuracy?
The MIRACL paper shows that simply combining with BM25 is a good starting point for a hybrid option. The numbers below are with the mDPR model, but miniDense_arabic_v1 should give an even better hybrid performance; a minimal score-fusion sketch follows the table.
| Language | ISO | nDCG@10 BM25 | nDCG@10 mDPR | nDCG@10 Hybrid |
|-----------|-----|--------------|--------------|----------------|
| **Arabic** | **ar** | **0.395** | **0.499** | **0.673** |
*Note: The MIRACL paper shows a different (higher) value for BM25 Arabic, so we take that value from the BGE-M3 paper; all the rest are from the MIRACL paper.*
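A minimal sketch of one such hybrid option: retrieve with BM25 and with the dense model separately, min-max normalise both score lists, then combine them with a weighted sum. The weights below are placeholders, not tuned values:
```python
def min_max(scores):
    lo, hi = min(scores), max(scores)
    return [0.0 if hi == lo else (s - lo) / (hi - lo) for s in scores]

def hybrid_scores(bm25_scores, dense_scores, alpha=0.3):
    # alpha weights BM25; (1 - alpha) weights the dense (miniDense) scores
    bm25_n, dense_n = min_max(bm25_scores), min_max(dense_scores)
    return [alpha * b + (1 - alpha) * d for b, d in zip(bm25_n, dense_n)]

# Per-document scores for one query, in the same document order
bm25 = [12.1, 7.4, 0.3]
dense = [0.61, 0.58, 0.12]
print(hybrid_scores(bm25, dense))
```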
# MTEB Retrieval numbers:
MTEB is a general-purpose embedding evaluation benchmark covering a wide range of tasks, but miniDense models (like BGE-M3) are predominantly tuned for retrieval tasks aimed at search & IR use cases.
So it makes sense to evaluate our models on the retrieval slice of the MTEB benchmark.
#### MIRACL Retrieval
Refer to the tables above.
#### Sadeem Question Retrieval
<center>
<img src="./ar_metrics_6.png" width=150%/>
<b><p>Table 3: Detailed Arabic retrieval performance on the SadeemQA eval set (measured by nDCG@10)</p></b>
</center>
#### Long Document Retrieval
This is a very ambitious eval because we have not trained for long context: max_len was 512 for all the models below except BGE-M3, which had an 8192 context and was finetuned for long documents.
<center>
<img src="./ar_metrics_4.png" width=150%/>
<b><p>Table 4: Detailed Arabic retrieval performance on the MultiLongDoc dev set (measured by nDCG@10)</p></b>
</center>
#### X-lingual Retrieval
Except for BGE-M3, all are monolingual Arabic models, so they have no notion of any other language. But the table below shows how our model understands Arabic in context with other languages.
This explains its overall competitive performance when compared to models that are a LOT larger.
<center>
<img src="./ar_metrics_5.png" width=120%/>
<b><p>Table 5: Detailed Arabic retrieval performance on the 3 X-lingual test set (measured by nDCG@10)</p></b>
</center>
<br/>
# Roadmap
We will add miniDense series models for all popular languages, in phases, as we see fit or based on community requests. Some of the languages on our list are:
- Spanish
- Tamil
- German
- English ?
# Notes on reproducing:
We welcome anyone to reproduce our results. Here are some tips and observations:
- Use CLS Pooling (not mean) and Inner Product (not cosine).
- There *may be* minor differences in the numbers when reproducing; for instance, BGE-M3 reports an nDCG@10 of 59.3 for MIRACL Hindi and we observed only 58.9.
Here are our numbers for the full Hindi run on BGE-M3:
```python
{'NDCG@1': 0.49714, 'NDCG@3': 0.5115, 'NDCG@5': 0.53908, 'NDCG@10': 0.58936, 'NDCG@100': 0.6457, 'NDCG@1000': 0.65336}
{'MAP@1': 0.28845, 'MAP@3': 0.42424, 'MAP@5': 0.46455, 'MAP@10': 0.49955, 'MAP@100': 0.51886, 'MAP@1000': 0.51933}
{'Recall@10': 0.73032, 'Recall@50': 0.8987, 'Recall@100': 0.93974, 'Recall@200': 0.95763, 'Recall@500': 0.97813, 'Recall@1000': 0.9902}
{'P@1': 0.49714, 'P@3': 0.33048, 'P@5': 0.24629, 'P@10': 0.15543, 'P@100': 0.0202, 'P@1000': 0.00212}
{'MRR@10': 0.60893, 'MRR@100': 0.615, 'MRR@1000': 0.6151}
```
Fair warning: BGE-M3 is expensive ($) to evaluate, which is probably why it's not part of the retrieval slice of the MTEB benchmarks.
# Reference:
- [All Cohere numbers are copied from here](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12)
- [BGE M3-Embedding: Multi-Lingual, Multi-Functionality,
Multi-Granularity Text Embeddings Through Self-Knowledge Distillation](https://arxiv.org/pdf/2402.03216.pdf)
- [Making a MIRACL: Multilingual Information Retrieval
Across a Continuum of Languages](https://arxiv.org/pdf/2210.09984.pdf)
# Note on model bias:
- Like any model this model might carry inherent biases from the base models and the datasets it was pretrained and finetuned on. Please use responsibly.
# How to cite?
Damodaran, P. (2024). MiniDense: Family of Low footprint multilingual retrievers for search and RAG pipelines (Version 1.0.0) [Computer software]. |
paroaartix/Video-btswiki-com-paro-aarti-viral-video-link-original-twitter | paroaartix | 2025-05-01T05:35:43Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-01T05:33:14Z | [🌐 Click Here To link (paro-aarti-viral)](https://getthevid.com/erfwe)
🔴 ➤►DOWNLOAD 👉 [🌐 paro-aarti-viral](https://getthevid.com/erfwe) |
mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF | mradermacher | 2025-05-01T05:35:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:Guilherme34/uncensor",
"base_model:nicoboss/OpenThinker2-32B-Uncensored",
"base_model:quantized:nicoboss/OpenThinker2-32B-Uncensored",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-30T20:50:55Z | ---
base_model: nicoboss/OpenThinker2-32B-Uncensored
datasets:
- Guilherme34/uncensor
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-32B/blob/main/LICENSE
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nicoboss/OpenThinker2-32B-Uncensored
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
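As a quick illustration, a single quant from this repo (file name taken from the table below) can be fetched with `huggingface_hub` and then passed to any GGUF runtime such as llama.cpp; this is only an example, not a recommendation of a particular quant:
```python
from huggingface_hub import hf_hub_download

# Downloads one GGUF file from this repo and prints its local path
path = hf_hub_download(
    repo_id="mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF",
    filename="OpenThinker2-32B-Uncensored.i1-Q4_K_M.gguf",
)
print(path)
```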
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF/resolve/main/OpenThinker2-32B-Uncensored.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mlfoundations-dev/meta_chat_reasoning_100_0_system_100k | mlfoundations-dev | 2025-05-01T05:35:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T05:32:01Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: meta_chat_reasoning_100_0_system_100k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meta_chat_reasoning_100_0_system_100k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/meta_chat_reasoning_100_0_system_100k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 512
- total_train_batch_size: 512
- total_eval_batch_size: 4096
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mlfoundations-dev/meta_chat_reasoning_0_100_100k | mlfoundations-dev | 2025-05-01T05:33:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T05:30:35Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: meta_chat_reasoning_0_100_100k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meta_chat_reasoning_0_100_100k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/meta_chat_reasoning_0_100_100k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 512
- total_train_batch_size: 512
- total_eval_batch_size: 4096
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
0xagentai/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_poisonous_bear | 0xagentai | 2025-05-01T05:31:52Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am jumping poisonous bear",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-15T08:22:48Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_poisonous_bear
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am jumping poisonous bear
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_poisonous_bear
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="0xagentai/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_poisonous_bear", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
18-NEW-EXCLUSIVE-TRENDING-CLIP/FULL.VIDEO.LINK.Shah.Sapna.Kumari.Viral.Video.Leaks.official.tutorial | 18-NEW-EXCLUSIVE-TRENDING-CLIP | 2025-05-01T05:31:20Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-01T05:30:42Z | [](https://tinyurl.com/yrv67ytk?dfhgKasbonStudiosdfg)
Shah Sapna Kumari viral video trending across platforms like YouTube and social media. Here’s what you need to know in 2025. We break down the facts, the timeline, and clear up the misinformation. Who is Shah Sapna Kumari? What’s the video really about? And why is it going viral? Stay informed with verified updates, public reactions, and a responsible take
|
WiLSON08/qwen7b.10Qv2 | WiLSON08 | 2025-05-01T05:30:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T05:26:33Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** WiLSON08
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
microsoft/bitnet-b1.58-2B-4T-bf16 | microsoft | 2025-05-01T05:29:23Z | 3,236 | 24 | transformers | [
"transformers",
"safetensors",
"bitnet",
"text-generation",
"chat",
"large-language-model",
"conversational",
"custom_code",
"en",
"arxiv:2504.12285",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-15T04:23:53Z | ---
license: mit
license_link: https://huggingface.co/microsoft/bitnet-b1.58-2B-4T/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
- bitnet
- text-generation
- large-language-model
library_name: transformers
---
# BitNet b1.58 2B4T - Scaling Native 1-bit LLM
This repository contains the weights for **BitNet b1.58 2B4T**, the first open-source, native 1-bit Large Language Model (LLM) at the 2-billion parameter scale, developed by Microsoft Research.
Trained on a corpus of 4 trillion tokens, this model demonstrates that native 1-bit LLMs can achieve performance comparable to leading open-weight, full-precision models of similar size, while offering substantial advantages in computational efficiency (memory, energy, latency).
➡️ **Technical Report:** [BitNet b1.58 2B4T Technical Report](https://arxiv.org/abs/2504.12285)
➡️ **Official Inference Code:** [microsoft/BitNet (bitnet.cpp)](https://github.com/microsoft/BitNet)
## Model Variants
Several versions of the model weights are available on Hugging Face:
* [**`microsoft/bitnet-b1.58-2B-4T`**](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T): Contains the packed 1.58-bit weights optimized for efficient inference. **Use this for deployment.**
* [**`microsoft/bitnet-b1.58-2B-4T-bf16`**](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-bf16) (This repository): Contains the master weights in BF16 format. **Use this only for training or fine-tuning purposes.**
* [**`microsoft/bitnet-b1.58-2B-4T-gguf`**](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-gguf): Contains the model weights in GGUF format, compatible with the `bitnet.cpp` library for CPU inference.
## Model Details
* **Architecture:** Transformer-based, modified with `BitLinear` layers (BitNet framework).
* Uses Rotary Position Embeddings (RoPE).
* Uses squared ReLU (ReLU²) activation in FFN layers.
* Employs [`subln`](https://proceedings.mlr.press/v202/wang23u.html) normalization.
* No bias terms in linear or normalization layers.
* **Quantization:** Native 1.58-bit weights and 8-bit activations (W1.58A8).
* Weights are quantized to ternary values {-1, 0, +1} using absmean quantization during the forward pass.
* Activations are quantized to 8-bit integers using absmax quantization (per-token).
* **Crucially, the model was *trained from scratch* with this quantization scheme, not post-training quantized.**
* **Parameters:** ~2 Billion
* **Training Tokens:** 4 Trillion
* **Context Length:** Maximum sequence length of **4096 tokens**.
* *Recommendation:* For optimal performance on tasks requiring very long contexts (beyond the pre-training length or for specialized long-reasoning tasks), we recommend performing intermediate long-sequence adaptation/training before the final fine-tuning stage.
* **Training Stages:**
1. **Pre-training:** Large-scale training on public text/code and synthetic math data using a two-stage learning rate and weight decay schedule.
2. **Supervised Fine-tuning (SFT):** Fine-tuned on instruction-following and conversational datasets using sum loss aggregation and specific hyperparameter tuning.
3. **Direct Preference Optimization (DPO):** Aligned with human preferences using preference pairs.
* **Tokenizer:** LLaMA 3 Tokenizer (vocab size: 128,256).
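To make the W1.58A8 scheme above concrete, here is a minimal sketch of absmean weight quantization and per-token absmax activation quantization. It only illustrates the idea and is not the model's actual training or inference kernel:
```python
import torch

def absmean_weight_quant(w: torch.Tensor):
    # Ternarize weights to {-1, 0, +1}, scaled by the mean absolute value (absmean)
    scale = w.abs().mean().clamp(min=1e-5)
    return (w / scale).round().clamp(-1, 1), scale

def absmax_activation_quant(x: torch.Tensor):
    # Quantize each token (row) to the 8-bit range using its max absolute value (absmax)
    scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp(min=1e-5)
    return (x * scale).round().clamp(-128, 127), scale

w_q, w_scale = absmean_weight_quant(torch.randn(4, 8))
x_q, x_scale = absmax_activation_quant(torch.randn(2, 8))
print(w_q.unique())  # tensor([-1., 0., 1.])
```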
## How to Use (with `transformers`)
**VERY IMPORTANT NOTE ON EFFICIENCY**
> Please do NOT expect performance efficiency gains (in terms of speed, latency, or energy consumption) when using this model with the standard transformers library, even with the required fork.
>
> The current execution paths within transformers do not contain the specialized, highly optimized computational kernels required to leverage the advantages of the BitNet architecture. Running the model via transformers will likely result in inference speeds and energy usage comparable to, or potentially worse than, standard full-precision models within this framework on both CPU and GPU.
>
> While you might observe reduced memory usage due to the quantized weights, the primary computational efficiency benefits are not accessible through this standard transformers usage path.
>
> For achieving the efficiency benefits demonstrated in the technical paper, you MUST use the dedicated C++ implementation: [bitnet.cpp](https://github.com/microsoft/BitNet).
### Requirements
```bash
pip install git+https://github.com/huggingface/transformers.git@096f25ae1f501a084d8ff2dcaf25fbc2bd60eba4
```
### Example
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "microsoft/bitnet-b1.58-2B-4T"
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16
)
# Apply the chat template
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "How are you?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
chat_input = tokenizer(prompt, return_tensors="pt").to(model.device)
# Generate response
chat_outputs = model.generate(**chat_input, max_new_tokens=50)
response = tokenizer.decode(chat_outputs[0][chat_input['input_ids'].shape[-1]:], skip_special_tokens=True) # Decode only the response part
print("\nAssistant Response:", response)
```
## How to Use (with `bitnet.cpp`)
Please refer to the [bitnet.cpp](https://github.com/microsoft/BitNet) GitHub repository for detailed compilation steps, usage examples, and command-line options.
## Evaluation
BitNet b1.58 2B4T was evaluated against leading open-weight full-precision LLMs of similar size. Below are the key results (all models are instruction-tuned versions):
| Benchmark | LLaMA 3.2 1B | Gemma-3 1B | Qwen2.5 1.5B | SmolLM2 1.7B | MiniCPM 2B | **BitNet b1.58 2B** |
|--------------------------------|--------------|------------|--------------|--------------|------------|---------------------|
| **Memory (Non-emb)** | 2GB | 1.4GB | 2.6GB | 3.2GB | 4.8GB | **0.4GB** |
| **Latency (CPU Decoding)** | 48ms | 41ms | 65ms | 67ms | 124ms | **29ms** |
| **Energy (Estimated)** | 0.258J | 0.186J | 0.347J | 0.425J | 0.649J | **0.028J** |
| **Training Tokens (Pre-train)**| 9T* | 2T** | 18T | 11T | 1.1T | 4T |
| ARC-Challenge | 37.80 | 38.40 | 46.67 | 43.52 | 44.80 | **49.91** |
| ARC-Easy | 63.17 | 63.13 | **76.01** | 62.92 | 72.14 | 74.79 |
| OpenbookQA | 34.80 | 38.80 | 40.80 | **46.00** | 40.20 | 41.60 |
| BoolQ | 64.65 | 74.22 | 78.04 | 75.78 | **80.67** | 80.18 |
| HellaSwag | 60.80 | 57.69 | 68.28 | **71.71** | 70.81 | 68.44 |
| PIQA | 74.21 | 71.93 | 76.12 | 76.12 | 76.66 | **77.09** |
| WinoGrande | 59.51 | 58.48 | 62.83 | 68.98 | 61.80 | **71.90** |
| CommonsenseQA | 58.48 | 42.10 | **76.41** | 63.55 | 71.74 | 71.58 |
| TruthfulQA | 43.80 | 38.66 | **46.67** | 39.90 | 41.41 | 45.31 |
| TriviaQA | 37.60 | 23.49 | 38.37 | **45.97** | 34.13 | 33.57 |
| MMLU | 45.58 | 39.91 | **60.25** | 49.24 | 51.82 | 53.17 |
| HumanEval+ | 31.10 | 37.20 | **50.60** | 28.00 | 43.90 | 38.40 |
| GSM8K | 38.21 | 31.16 | 56.79 | 45.11 | 4.40 | **58.38** |
| MATH-500 | 23.00 | 42.00 | **53.00** | 17.60 | 14.80 | 43.40 |
| IFEval | 62.71 | **66.67** | 50.12 | 57.91 | 36.81 | 53.48 |
| MT-bench | 5.43 | 6.40 | 6.12 | 5.50 | **6.57** | 5.85 |
| **Average** | 44.90 | 43.74 | **55.23** | 48.70 | 42.05 | 54.19 |
*LLaMA 3.2 1B uses pruning & distillation.
**Gemma-3 1B uses distillation.
## License
The model weights and code are released under the [MIT License](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T/blob/main/LICENSE).
## Disclaimer
This model is intended for research and development purposes. While efforts have been made to align it using SFT and DPO, it may still produce outputs that are unexpected, biased, or inaccurate. Please use responsibly.
|
mradermacher/Qwen3-8B-Jailbroken-i1-GGUF | mradermacher | 2025-05-01T05:26:15Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:cooperleong00/Qwen3-8B-Jailbroken",
"base_model:quantized:cooperleong00/Qwen3-8B-Jailbroken",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-01T01:15:14Z | ---
base_model: cooperleong00/Qwen3-8B-Jailbroken
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/cooperleong00/Qwen3-8B-Jailbroken
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Jailbroken-i1-GGUF/resolve/main/Qwen3-8B-Jailbroken.i1-Q6_K.gguf) | i1-Q6_K | 6.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ChitrTripathi-viral-Video-Original-onlin/Chitra-Tripathi-viral-Video-Original-online | ChitrTripathi-viral-Video-Original-onlin | 2025-05-01T05:22:00Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-01T05:19:31Z | <a href="https://socialbrands.cfd/dtyuaiisk"> 🌐 (Chitra-Tripathi-viral-Video-Original-online)
🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://socialbrands.cfd/dtyuaiisk"> 🌐 Chitra-Tripathi-viral-Video-Original-online |
microsoft/bitnet-b1.58-2B-4T | microsoft | 2025-05-01T05:21:59Z | 40,984 | 897 | transformers | [
"transformers",
"safetensors",
"bitnet",
"text-generation",
"chat",
"large-language-model",
"conversational",
"custom_code",
"en",
"arxiv:2504.12285",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2025-04-15T04:25:13Z | ---
license: mit
license_link: https://huggingface.co/microsoft/bitnet-b1.58-2B-4T/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
- bitnet
- text-generation
- large-language-model
library_name: transformers
---
# BitNet b1.58 2B4T - Scaling Native 1-bit LLM
This repository contains the weights for **BitNet b1.58 2B4T**, the first open-source, native 1-bit Large Language Model (LLM) at the 2-billion parameter scale, developed by Microsoft Research.
Trained on a corpus of 4 trillion tokens, this model demonstrates that native 1-bit LLMs can achieve performance comparable to leading open-weight, full-precision models of similar size, while offering substantial advantages in computational efficiency (memory, energy, latency).
➡️ **Technical Report:** [BitNet b1.58 2B4T Technical Report](https://arxiv.org/abs/2504.12285)
➡️ **Official Inference Code:** [microsoft/BitNet (bitnet.cpp)](https://github.com/microsoft/BitNet)
## Model Variants
Several versions of the model weights are available on Hugging Face:
* [**`microsoft/bitnet-b1.58-2B-4T`**](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T) (This repository): Contains the packed 1.58-bit weights optimized for efficient inference. **Use this for deployment.**
* [**`microsoft/bitnet-b1.58-2B-4T-bf16`**](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-bf16): Contains the master weights in BF16 format. **Use this only for training or fine-tuning purposes.**
* [**`microsoft/bitnet-b1.58-2B-4T-gguf`**](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-gguf): Contains the model weights in GGUF format, compatible with the `bitnet.cpp` library for CPU inference.
## Model Details
* **Architecture:** Transformer-based, modified with `BitLinear` layers (BitNet framework).
* Uses Rotary Position Embeddings (RoPE).
* Uses squared ReLU (ReLU²) activation in FFN layers.
* Employs [`subln`](https://proceedings.mlr.press/v202/wang23u.html) normalization.
* No bias terms in linear or normalization layers.
* **Quantization:** Native 1.58-bit weights and 8-bit activations (W1.58A8).
* Weights are quantized to ternary values {-1, 0, +1} using absmean quantization during the forward pass.
* Activations are quantized to 8-bit integers using absmax quantization (per-token).
    * **Crucially, the model was *trained from scratch* with this quantization scheme, not post-training quantized.** A minimal numerical sketch of this scheme is given right after this list.
* **Parameters:** ~2 Billion
* **Training Tokens:** 4 Trillion
* **Context Length:** Maximum sequence length of **4096 tokens**.
* *Recommendation:* For optimal performance on tasks requiring very long contexts (beyond the pre-training length or for specialized long-reasoning tasks), we recommend performing intermediate long-sequence adaptation/training before the final fine-tuning stage.
* **Training Stages:**
1. **Pre-training:** Large-scale training on public text/code and synthetic math data using a two-stage learning rate and weight decay schedule.
2. **Supervised Fine-tuning (SFT):** Fine-tuned on instruction-following and conversational datasets using sum loss aggregation and specific hyperparameter tuning.
3. **Direct Preference Optimization (DPO):** Aligned with human preferences using preference pairs.
* **Tokenizer:** LLaMA 3 Tokenizer (vocab size: 128,256).
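To make the absmean/absmax scheme listed under **Quantization** above concrete, here is a minimal numerical sketch. It is an illustration distilled from the description in this card and the technical report, not the reference implementation:

```python
import torch

def absmean_ternary(w: torch.Tensor, eps: float = 1e-5):
    """Illustrative weight quantization to {-1, 0, +1} with an absmean scale."""
    scale = w.abs().mean().clamp(min=eps)        # gamma = mean(|W|)
    w_q = (w / scale).round().clamp(-1, 1)       # ternary weights
    return w_q, scale                            # dequantize as w_q * scale

def absmax_int8_per_token(x: torch.Tensor, eps: float = 1e-5):
    """Illustrative per-token activation quantization to 8-bit integers."""
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=eps) / 127.0
    x_q = (x / scale).round().clamp(-128, 127)
    return x_q, scale
```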
## How to Use (with `transformers`)
**VERY IMPORTANT NOTE ON EFFICIENCY**
> Please do NOT expect performance efficiency gains (in terms of speed, latency, or energy consumption) when using this model with the standard transformers library, even with the required fork.
>
> The current execution paths within transformers do not contain the specialized, highly optimized computational kernels required to leverage the advantages of the BitNet architecture. Running the model via transformers will likely result in inference speeds and energy usage comparable to, or potentially worse than, standard full-precision models within this framework on both CPU and GPU.
>
> While you might observe reduced memory usage due to the quantized weights, the primary computational efficiency benefits are not accessible through this standard transformers usage path.
>
> For achieving the efficiency benefits demonstrated in the technical paper, you MUST use the dedicated C++ implementation: [bitnet.cpp](https://github.com/microsoft/BitNet).
### Requirements
```bash
pip install git+https://github.com/huggingface/transformers.git@096f25ae1f501a084d8ff2dcaf25fbc2bd60eba4
```
### Example
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "microsoft/bitnet-b1.58-2B-4T"
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16
)
# Apply the chat template
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "How are you?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
chat_input = tokenizer(prompt, return_tensors="pt").to(model.device)
# Generate response
chat_outputs = model.generate(**chat_input, max_new_tokens=50)
response = tokenizer.decode(chat_outputs[0][chat_input['input_ids'].shape[-1]:], skip_special_tokens=True) # Decode only the response part
print("\nAssistant Response:", response)
```
## How to Use (with `bitnet.cpp`)
Please refer to the [bitnet.cpp](https://github.com/microsoft/BitNet) GitHub repository for detailed compilation steps, usage examples, and command-line options.
## Evaluation
BitNet b1.58 2B4T was evaluated against leading open-weight full-precision LLMs of similar size. Below are the key results (all models are instruction-tuned versions):
| Benchmark | LLaMA 3.2 1B | Gemma-3 1B | Qwen2.5 1.5B | SmolLM2 1.7B | MiniCPM 2B | **BitNet b1.58 2B** |
|--------------------------------|--------------|------------|--------------|--------------|------------|---------------------|
| **Memory (Non-emb)** | 2GB | 1.4GB | 2.6GB | 3.2GB | 4.8GB | **0.4GB** |
| **Latency (CPU Decoding)** | 48ms | 41ms | 65ms | 67ms | 124ms | **29ms** |
| **Energy (Estimated)** | 0.258J | 0.186J | 0.347J | 0.425J | 0.649J | **0.028J** |
| **Training Tokens (Pre-train)**| 9T* | 2T** | 18T | 11T | 1.1T | 4T |
| ARC-Challenge | 37.80 | 38.40 | 46.67 | 43.52 | 44.80 | **49.91** |
| ARC-Easy | 63.17 | 63.13 | **76.01** | 62.92 | 72.14 | 74.79 |
| OpenbookQA | 34.80 | 38.80 | 40.80 | **46.00** | 40.20 | 41.60 |
| BoolQ | 64.65 | 74.22 | 78.04 | 75.78 | **80.67** | 80.18 |
| HellaSwag | 60.80 | 57.69 | 68.28 | **71.71** | 70.81 | 68.44 |
| PIQA | 74.21 | 71.93 | 76.12 | 76.12 | 76.66 | **77.09** |
| WinoGrande | 59.51 | 58.48 | 62.83 | 68.98 | 61.80 | **71.90** |
| CommonsenseQA | 58.48 | 42.10 | **76.41** | 63.55 | 71.74 | 71.58 |
| TruthfulQA | 43.80 | 38.66 | **46.67** | 39.90 | 41.41 | 45.31 |
| TriviaQA | 37.60 | 23.49 | 38.37 | **45.97** | 34.13 | 33.57 |
| MMLU | 45.58 | 39.91 | **60.25** | 49.24 | 51.82 | 53.17 |
| HumanEval+ | 31.10 | 37.20 | **50.60** | 28.00 | 43.90 | 38.40 |
| GSM8K | 38.21 | 31.16 | 56.79 | 45.11 | 4.40 | **58.38** |
| MATH-500 | 23.00 | 42.00 | **53.00** | 17.60 | 14.80 | 43.40 |
| IFEval | 62.71 | **66.67** | 50.12 | 57.91 | 36.81 | 53.48 |
| MT-bench | 5.43 | 6.40 | 6.12 | 5.50 | **6.57** | 5.85 |
| **Average** | 44.90 | 43.74 | **55.23** | 48.70 | 42.05 | 54.19 |
*LLaMA 3.2 1B uses pruning & distillation.
**Gemma-3 1B uses distillation.
## License
The model weights and code are released under the [MIT License](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T/blob/main/LICENSE).
## Bias, Risks, and Limitations
Predictions may perpetuate biases present in the training data.
There is limited support for non-English languages and underrepresented domains.
There is a risk of generating inaccurate or harmful content.
The BitNet model has an elevated defect rate when responding to election-critical queries, which may result in incorrect or unauthoritative election-critical information being presented. We are working to improve the model's performance in this area. Users should verify information related to elections with the election authority in their region.
## Disclaimer
We do not recommend using BitNet b1.58 in commercial or real-world applications without further testing and development. This model is intended for research and development purposes. While efforts have been made to align it using SFT and DPO, it may still produce outputs that are unexpected, biased, or inaccurate. Please use responsibly.
|
Nexusflow/NexusRaven-V2-13B | Nexusflow | 2025-05-01T05:20:13Z | 3,822 | 465 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"function calling",
"arxiv:2308.12950",
"base_model:codellama/CodeLlama-13b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-13b-Instruct-hf",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-04T22:06:57Z | ---
license: other
base_model: codellama/CodeLlama-13b-Instruct-hf
model-index:
- name: NexusRaven-13B
results: []
tags:
- function calling
---
# NexusRaven-13B: Surpassing GPT-4 for Zero-shot Function Calling
<p align="center">
<a href="https://huggingface.co/Nexusflow" target="_blank">Nexusflow HF</a> - <a href="https://discord.gg/HDSVmNAs3y" target="_blank">Nexusflow Discord</a> - <a href="http://nexusflow.ai/blogs/ravenv2" target="_blank">NexusRaven-V2 blog post</a> - <a href="https://colab.research.google.com/drive/19JYixRPPlanmW5q49WYi_tU8rhHeCEKW?usp=sharing" target="_blank">Prompting Notebook CoLab</a> - <a href="https://huggingface.co/spaces/Nexusflow/Nexus_Function_Calling_Leaderboard" target="_blank">Leaderboard</a> - <a href="https://huggingface.co/spaces/Nexusflow/NexusRaven-V2-Demo" target="_blank">Real-World Demo</a> - <a href="https://github.com/nexusflowai/NexusRaven-V2" target="_blank">NexusRaven-V2-13B Github</a>
</p>
<p align="center" width="100%">
<a><img src="NexusRaven.png" alt="NexusRaven" style="width: 40%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Introducing NexusRaven-V2-13B
NexusRaven is an open-source and commercially viable function calling LLM that surpasses the state-of-the-art in function calling capabilities.
💪 **Versatile Function Calling Capability**: NexusRaven-V2 is capable of generating single function calls, nested calls, and parallel calls in many challenging cases.
🤓 **Fully Explainable**: NexusRaven-V2 is capable of generating very detailed explanations for the function calls it generates. This behavior can be turned off, to save tokens during inference.
📊 **Performance Highlights**: NexusRaven-V2 surpasses GPT-4 by 7% in function calling success rates in human-generated use cases involving nested and composite functions.
🔧 **Generalization to the Unseen**: NexusRaven-V2 has never been trained on the functions used in evaluation.
🔥 **Commercially Permissive**: The training of NexusRaven-V2 does not involve any data generated by proprietary LLMs such as GPT-4. You have full control of the model when deployed in commercial applications.
Please checkout the following links!
- [Prompting Notebook CoLab](https://colab.research.google.com/drive/19JYixRPPlanmW5q49WYi_tU8rhHeCEKW?usp=sharing)
- [Evaluation Leaderboard](https://huggingface.co/spaces/Nexusflow/Nexus_Function_Calling_Leaderboard)
- [NexusRaven-V2 Real-World Demo](https://huggingface.co/spaces/Nexusflow/NexusRaven-V2-Demo)
## NexusRaven-V2 model usage
NexusRaven-V2 accepts a list of python functions.
These python functions can do anything (including sending GET/POST requests to external APIs!).
The two requirements are the Python function signature and an appropriate docstring from which to generate the function call.
NexusRaven-V2 also does best on functions with arguments, so please only provide Raven with functions that take arguments.
### NexusRaven-V2's Capabilities
NexusRaven-V2 is capable of generating deeply nested function calls, parallel function calls, and simple single calls. It can also justify the function calls it generates. If you would like to generate the call only, please set a stopping criterion of \"\<bot\_end\>\". Otherwise, please allow NexusRaven-V2 to run until its stop token (i.e. "\<\/s\>").
### Quick Start Prompting Guide
Please refer to our notebook, [How-To-Prompt.ipynb](https://colab.research.google.com/drive/19JYixRPPlanmW5q49WYi_tU8rhHeCEKW?usp=sharing), for more advanced tutorials on using NexusRaven-V2!
1. When giving docstrings to Raven, please make them well-indented, detailed, and well-written, as this helps accuracy.
2. Raven does better when all functions provided to it have arguments, either required or optional (i.e. ```func(dummy_arg)``` is preferred over ```func()```), as this helps accuracy.
3. We strongly recommend setting sampling to False when prompting NexusRaven-V2.
4. We strongly recommend a very low temperature (~0.001).
5. We strongly recommend following the prompting style below.
When handling irrelevant user queries, users have noticed that specifying a "no-op" function with arguments works best. For example, something like this might work:
```python
def no_relevant_function(user_query : str):
"""
Call this when no other provided function can be called to answer the user query.
Args:
user_query: The user_query that cannot be answered by any other function calls.
"""
```
Please ensure to provide an argument to this function, as Raven works best on functions with arguments.
For parallel calls, because the model is targeted at industry use, you "enable" them explicitly by adding this line to the prompt:
```python
"Setting: Allowed to issue multiple calls with semicolon\n"
```
This can be added above the User Query to "allow" the model to use parallel calls; otherwise, the model will primarily focus on nested and single calls.
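As an illustrative sketch (following the prompt layout from the quickstart below, where `functions_block` is a placeholder for the "Function: ..." definitions), the setting line sits between the function definitions and the `User Query:` line:
```python
# Illustrative only: assemble a prompt with the parallel-call setting enabled.
# `functions_block` stands for the function definitions shown in the quickstart;
# the setting line goes directly above the user query.
prompt = (
    functions_block
    + "Setting: Allowed to issue multiple calls with semicolon\n"
    + "User Query: What's the weather in Seattle and in Boston right now?<human_end>"
)
```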
### Quickstart
You can run the model on a GPU using the following code.
```python
# Please `pip install transformers accelerate`
from transformers import pipeline
pipeline = pipeline(
"text-generation",
model="Nexusflow/NexusRaven-V2-13B",
torch_dtype="auto",
device_map="auto",
)
prompt_template = \
'''
Function:
def get_weather_data(coordinates):
"""
Fetches weather data from the Open-Meteo API for the given latitude and longitude.
Args:
coordinates (tuple): The latitude of the location.
Returns:
float: The current temperature in the coordinates you've asked for
"""
Function:
def get_coordinates_from_city(city_name):
"""
Fetches the latitude and longitude of a given city name using the Maps.co Geocoding API.
Args:
city_name (str): The name of the city.
Returns:
tuple: The latitude and longitude of the city.
"""
User Query: {query}<human_end>
'''
prompt = prompt_template.format(query="What's the weather like in Seattle right now?")
result = pipeline(prompt, max_new_tokens=2048, return_full_text=False, do_sample=False, temperature=0.001)[0]["generated_text"]
print (result)
```
This should generate the following:
```
Call: get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))<bot_end>
Thought: The function call `get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))` answers the question "What's the weather like in Seattle right now?" by following these steps:
1. `get_coordinates_from_city(city_name='Seattle')`: This function call fetches the latitude and longitude of the city "Seattle" using the Maps.co Geocoding API.
2. `get_weather_data(coordinates=...)`: This function call fetches the current weather data for the coordinates returned by the previous function call.
Therefore, the function call `get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))` answers the question "What's the weather like in Seattle right now?" by first fetching the coordinates of the city "Seattle" and then fetching the current weather data for those coordinates.
```
If you would like to prevent the generation of the explanation of the function call (for example, to save on inference tokens), please set a stopping criterion of \<bot_end\>.
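One way to do this with the `transformers` pipeline from the quickstart (an illustrative sketch, not an official snippet from this card) is a custom stopping criterion that halts once `<bot_end>` appears in the generated text:
```python
# Illustrative sketch: stop generation as soon as "<bot_end>" has been produced.
from transformers import AutoTokenizer, StoppingCriteria, StoppingCriteriaList

tokenizer = AutoTokenizer.from_pretrained("Nexusflow/NexusRaven-V2-13B")

class StopOnSubstring(StoppingCriteria):
    def __init__(self, tokenizer, substring, prompt_token_count):
        self.tokenizer = tokenizer
        self.substring = substring
        self.prompt_token_count = prompt_token_count  # number of tokens in the prompt

    def __call__(self, input_ids, scores, **kwargs):
        generated = self.tokenizer.decode(input_ids[0][self.prompt_token_count:])
        return self.substring in generated

# Then pass it through the pipeline call from the quickstart, e.g.:
# prompt_token_count = len(tokenizer(prompt)["input_ids"])
# criteria = StoppingCriteriaList([StopOnSubstring(tokenizer, "<bot_end>", prompt_token_count)])
# result = pipeline(prompt, max_new_tokens=2048, do_sample=False,
#                   return_full_text=False, stopping_criteria=criteria)
```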
Please follow this prompting template to maximize the performance of RavenV2.
### Using with OpenAI FC Schematics
[If you currently have a workflow that is built around OpenAI's function calling and you want to try NexusRaven-V2, we have a package that helps you drop in NexusRaven-V2.](https://github.com/nexusflowai/nexusraven-pip)
### Using With LangChain
We've also included a [small demo for using Raven with langchain](langdemo.py)!
## Evaluation
<p align="center" width="100%">
<a><img src="blog2-fc.png" alt="NexusRaven" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a>
<a><img src="radar-2.png" alt="NexusRaven" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For a deeper dive into the results, please see our [Github README](https://github.com/nexusflowai/NexusRaven).
# Limitations
1. The model works best when it is connected with a retriever when there are a multitude of functions, as a large number of functions will saturate the context window of this model.
2. The model can be prone to generating incorrect calls. Please ensure proper guardrails to capture errant behavior are in place.
3. The explanations generated by NexusRaven-V2 might be incorrect. Please ensure proper guardrails are present to capture errant behavior.
## License
This model was trained on commercially viable data and is licensed under the [Nexusflow community license](https://huggingface.co/Nexusflow/NexusRaven-V2-13B/blob/main/LICENSE.txt).
## References
We thank the CodeLlama team for their amazing models!
```
@misc{rozière2023code,
title={Code Llama: Open Foundation Models for Code},
author={Baptiste Rozière and Jonas Gehring and Fabian Gloeckle and Sten Sootla and Itai Gat and Xiaoqing Ellen Tan and Yossi Adi and Jingyu Liu and Tal Remez and Jérémy Rapin and Artyom Kozhevnikov and Ivan Evtimov and Joanna Bitton and Manish Bhatt and Cristian Canton Ferrer and Aaron Grattafiori and Wenhan Xiong and Alexandre Défossez and Jade Copet and Faisal Azhar and Hugo Touvron and Louis Martin and Nicolas Usunier and Thomas Scialom and Gabriel Synnaeve},
year={2023},
eprint={2308.12950},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Citation
```
@misc{nexusraven,
title={NexusRaven-V2: Surpassing GPT-4 for Zero-shot Function Calling},
author={Nexusflow.ai team},
year={2023},
url={https://nexusflow.ai/blogs/ravenv2}
}
```
## Contact
Please join our [Discord Channel](https://discord.gg/HDSVmNAs3y) to reach out for any issues and comments! |
mdrobs14/loraNew | mdrobs14 | 2025-05-01T05:17:17Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openlm-research/open_llama_7b",
"base_model:adapter:openlm-research/open_llama_7b",
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T04:40:45Z | ---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_7b
tags:
- generated_from_trainer
model-index:
- name: loraNew
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# loraNew
This model is a fine-tuned version of [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4155 | 1.0 | 37 | 2.2170 |
| 2.1706 | 2.0 | 74 | 2.0614 |
| 2.0605 | 3.0 | 111 | 2.0185 |
| 2.0148 | 4.0 | 148 | 1.9970 |
| 2.0018 | 5.0 | 185 | 1.9894 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
DipanjanSanyal/cefr_a1_tiny_lm | DipanjanSanyal | 2025-05-01T05:15:46Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T02:07:57Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: cefr_a1_tiny_lm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cefr_a1_tiny_lm
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7626
- Bertscore Precision: 0.8417
- Bertscore Recall: 0.8420
- Bertscore F1: 0.8417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bertscore Precision | Bertscore Recall | Bertscore F1 |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|
| No log | 1.0 | 175 | 5.9907 | 0.7932 | 0.8231 | 0.8076 |
| No log | 2.0 | 350 | 5.7637 | 0.8333 | 0.8347 | 0.8338 |
| 4.9374 | 3.0 | 525 | 5.6588 | 0.8427 | 0.8399 | 0.8411 |
| 4.9374 | 4.0 | 700 | 5.6527 | 0.8373 | 0.8378 | 0.8373 |
| 4.9374 | 5.0 | 875 | 5.6739 | 0.8400 | 0.8384 | 0.8390 |
| 3.9529 | 6.0 | 1050 | 5.6938 | 0.8390 | 0.8398 | 0.8392 |
| 3.9529 | 7.0 | 1225 | 5.7255 | 0.8431 | 0.8418 | 0.8422 |
| 3.9529 | 8.0 | 1400 | 5.7639 | 0.8403 | 0.8416 | 0.8408 |
| 3.5438 | 9.0 | 1575 | 5.7559 | 0.8440 | 0.8419 | 0.8427 |
| 3.5438 | 10.0 | 1750 | 5.7626 | 0.8417 | 0.8420 | 0.8417 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
yahyaabd/sbert-bps-custom-tokenizer-en | yahyaabd | 2025-05-01T05:14:18Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-01T05:13:57Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
vermoney/1c66de24-06d4-45ad-96b5-de7bceeb15ce | vermoney | 2025-05-01T05:09:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b",
"base_model:adapter:unsloth/llama-2-7b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T04:55:42Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1c66de24-06d4-45ad-96b5-de7bceeb15ce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-2-7b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f76d7fca1023a98b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f76d7fca1023a98b_train_data.json
type:
field_input: domain
field_instruction: query
field_output: api_list
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vermoney/1c66de24-06d4-45ad-96b5-de7bceeb15ce
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/f76d7fca1023a98b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4c469ccc-99bf-49e2-904b-286196c7e713
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 4c469ccc-99bf-49e2-904b-286196c7e713
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1c66de24-06d4-45ad-96b5-de7bceeb15ce
This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8574 | 0.0192 | 200 | 0.6356 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kevinwang676/GPT-SoVITS-v4-new | kevinwang676 | 2025-05-01T05:08:28Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-04-29T22:24:12Z | <div align="center">
<h1>GPT-SoVITS-WebUI</h1>
A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.<br><br>
[](https://github.com/RVC-Boss/GPT-SoVITS)
<a href="https://trendshift.io/repositories/7033" target="_blank"><img src="https://trendshift.io/api/badge/repositories/7033" alt="RVC-Boss%2FGPT-SoVITS | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
<!-- img src="https://counter.seku.su/cmoe?name=gptsovits&theme=r34" /><br> -->
[](https://colab.research.google.com/github/RVC-Boss/GPT-SoVITS/blob/main/colab_webui.ipynb)
[](https://github.com/RVC-Boss/GPT-SoVITS/blob/main/LICENSE)
[](https://huggingface.co/spaces/lj1995/GPT-SoVITS-v2)
[](https://discord.gg/dnrgs5GHfG)
**English** | [**中文简体**](./docs/cn/README.md) | [**日本語**](./docs/ja/README.md) | [**한국어**](./docs/ko/README.md) | [**Türkçe**](./docs/tr/README.md)
</div>
---
## Features:
1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion.
2. **Few-shot TTS:** Fine-tune the model with just 1 minute of training data for improved voice similarity and realism.
3. **Cross-lingual Support:** Inference in languages different from the training dataset, currently supporting English, Japanese, Korean, Cantonese and Chinese.
4. **WebUI Tools:** Integrated tools include voice accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.
**Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!**
Unseen speakers few-shot fine-tuning demo:
https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb
**User guide: [简体中文](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e) | [English](https://rentry.co/GPT-SoVITS-guide#/)**
## Installation
Users in China can [click here](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official) to use AutoDL Cloud Docker and experience the full functionality online.
### Tested Environments
| Python Version | PyTorch Version | Device |
|----------------|------------------|-----------------|
| Python 3.9 | PyTorch 2.0.1 | CUDA 11.8 |
| Python 3.10.13 | PyTorch 2.1.2 | CUDA 12.3 |
| Python 3.10.17 | PyTorch 2.5.1 | CUDA 12.4 |
| Python 3.9 | PyTorch 2.5.1 | Apple silicon |
| Python 3.11 | PyTorch 2.6.0 | Apple silicon |
| Python 3.9 | PyTorch 2.2.2 | CPU |
| Python 3.9 | PyTorch 2.8.0dev | CUDA12.8(for Nvidia50x0) |
### Windows
If you are a Windows user (tested with win>=10), you can [download the integrated package](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-v3lora-20250228.7z?download=true) and double-click on _go-webui.bat_ to start GPT-SoVITS-WebUI.
**Users in China can [download the package here](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e/dkxgpiy9zb96hob4#KTvnO).**
### Linux
```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh --source <HF|HF-Mirror|ModelScope> [--download-uvr5]
```
### macOS
**Note: The models trained with GPUs on Macs result in significantly lower quality compared to those trained on other devices, so we are temporarily using CPUs instead.**
1. Install Xcode command-line tools by running `xcode-select --install`.
2. Install the program by running the following commands:
```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh --source <HF|HF-Mirror|ModelScope> [--download-uvr5]
```
### Install Manually
#### Install FFmpeg
##### Conda Users
```bash
conda install ffmpeg
```
##### Ubuntu/Debian Users
```bash
sudo apt install ffmpeg
sudo apt install libsox-dev
conda install -c conda-forge 'ffmpeg<7'
```
##### Windows Users
Download and place [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) in the GPT-SoVITS root.
Install [Visual Studio 2017](https://aka.ms/vs/17/release/vc_redist.x86.exe) (Korean TTS Only)
##### MacOS Users
```bash
brew install ffmpeg
```
#### Install Dependences
```bash
pip install -r extra-req.txt --no-deps
pip install -r requirements.txt
```
### Using Docker
#### docker-compose.yaml configuration
0. Regarding image tags: Because the codebase updates rapidly and packaging and testing images is slow, please check [Docker Hub](https://hub.docker.com/r/breakstring/gpt-sovits) (outdated) for the latest packaged images and choose one that suits your situation, or build locally from a Dockerfile to match your own needs.
1. Environment Variables:
- is_half: Controls half-precision/double-precision. This is typically the cause if the content under the directories 4-cnhubert/5-wav32k is not generated correctly during the "SSL extracting" step. Adjust to True or False based on your actual situation.
2. Volumes Configuration: The application's root directory inside the container is set to /workspace. The default docker-compose.yaml lists some practical examples for uploading/downloading content.
3. shm_size: The default available memory for Docker Desktop on Windows is too small, which can cause abnormal operations. Adjust according to your own situation.
4. Under the deploy section, GPU-related settings should be adjusted cautiously according to your system and actual circumstances.
#### Running with docker compose
```
docker compose -f "docker-compose.yaml" up -d
```
#### Running with docker command
As above, modify the corresponding parameters based on your actual situation, then run the following command:
```
docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx
```
## Pretrained Models
**If `install.sh` runs successfully, you may skip No.1,2,3**
**Users in China can [download all these models here](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e/dkxgpiy9zb96hob4#nVNhX).**
1. Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models`.
2. Download G2PW models from [G2PWModel.zip(HF)](https://huggingface.co/XXXXRT/GPT-SoVITS-Pretrained/resolve/main/G2PWModel.zip)| [G2PWModel.zip(ModelScope)](https://www.modelscope.cn/models/XXXXRT/GPT-SoVITS-Pretrained/resolve/master/G2PWModel.zip), unzip and rename to `G2PWModel`, and then place them in `GPT_SoVITS/text`.(Chinese TTS Only)
3. For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`.
   - If you want to use `bs_roformer` or `mel_band_roformer` models for UVR5, you can manually download the model and its corresponding configuration file and put them in `tools/uvr5/uvr5_weights`. **Rename the model file and configuration file so that they share the same name apart from the suffix**. In addition, the model and configuration file names **must include `roformer`** in order to be recognized as models of the roformer class.
- The suggestion is to **directly specify the model type** in the model name and configuration file name, such as `mel_mand_roformer`, `bs_roformer`. If not specified, the features will be compared from the configuration file to determine which type of model it is. For example, the model `bs_roformer_ep_368_sdr_12.9628.ckpt` and its corresponding configuration file `bs_roformer_ep_368_sdr_12.9628.yaml` are a pair, `kim_mel_band_roformer.ckpt` and `kim_mel_band_roformer.yaml` are also a pair.
4. For Chinese ASR (additionally), download models from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and place them in `tools/asr/models`.
5. For English or Japanese ASR (additionally), download models from [Faster Whisper Large V3](https://huggingface.co/Systran/faster-whisper-large-v3) and place them in `tools/asr/models`. Also, [other models](https://huggingface.co/Systran) may have a similar effect with a smaller disk footprint.
## Dataset Format
The TTS annotation .list file format:
```
vocal_path|speaker_name|language|text
```
Language dictionary:
- 'zh': Chinese
- 'ja': Japanese
- 'en': English
- 'ko': Korean
- 'yue': Cantonese
Example:
```
D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.
```
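For instance, a small helper like this (an illustrative sketch, not part of the GPT-SoVITS toolchain) can write and read such a `.list` file:
```python
# Illustrative helpers for the annotation format above:
# one "|"-separated record per line: vocal_path|speaker_name|language|text
def write_list_file(path, records):
    with open(path, "w", encoding="utf-8") as f:
        for vocal_path, speaker, lang, text in records:
            f.write(f"{vocal_path}|{speaker}|{lang}|{text}\n")

def read_list_file(path):
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n").split("|", 3) for line in f if line.strip()]

write_list_file("demo.list", [
    (r"D:\GPT-SoVITS\xxx/xxx.wav", "xxx", "en", "I like playing Genshin."),
])
```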
## Finetune and inference
### Open WebUI
#### Integrated Package Users
Double-click `go-webui.bat` or use `go-webui.ps1`.
If you want to switch to V1, then double-click `go-webui-v1.bat` or use `go-webui-v1.ps1`.
#### Others
```bash
python webui.py <language(optional)>
```
If you want to switch to V1, then
```bash
python webui.py v1 <language(optional)>
```
Or manually switch the version in the WebUI.
### Finetune
#### Path Auto-filling is now supported
1. Fill in the audio path
2. Slice the audio into small chunks
3. Denoise (optional)
4. ASR
5. Proofreading ASR transcriptions
6. Go to the next Tab, then finetune the model
### Open Inference WebUI
#### Integrated Package Users
Double-click `go-webui-v2.bat` or use `go-webui-v2.ps1`, then open the inference webui at `1-GPT-SoVITS-TTS/1C-inference`.
#### Others
```bash
python GPT_SoVITS/inference_webui.py <language(optional)>
```
OR
```bash
python webui.py
```
then open the inference webui at `1-GPT-SoVITS-TTS/1C-inference`
## V2 Release Notes
New Features:
1. Support Korean and Cantonese
2. An optimized text frontend
3. Pre-trained model extended from 2k hours to 5k hours
4. Improved synthesis quality for low-quality reference audio
[more details](<https://github.com/RVC-Boss/GPT-SoVITS/wiki/GPT%E2%80%90SoVITS%E2%80%90v2%E2%80%90features-(%E6%96%B0%E7%89%B9%E6%80%A7)>)
Use v2 from v1 environment:
1. `pip install -r requirements.txt` to update some packages
2. Clone the latest codes from github.
3. Download v2 pretrained models from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main/gsv-v2final-pretrained) and put them into `GPT_SoVITS\pretrained_models\gsv-v2final-pretrained`.
Chinese v2 additional: [G2PWModel.zip(HF)](https://huggingface.co/XXXXRT/GPT-SoVITS-Pretrained/resolve/main/G2PWModel.zip)| [G2PWModel.zip(ModelScope)](https://www.modelscope.cn/models/XXXXRT/GPT-SoVITS-Pretrained/resolve/master/G2PWModel.zip)(Download G2PW models, unzip and rename to `G2PWModel`, and then place them in `GPT_SoVITS/text`.)
## V3 Release Notes
New Features:
1. The timbre similarity is higher, requiring less training data to approximate the target speaker (the timbre similarity is significantly improved using the base model directly without fine-tuning).
2. GPT model is more stable, with fewer repetitions and omissions, and it is easier to generate speech with richer emotional expression.
[more details](<https://github.com/RVC-Boss/GPT-SoVITS/wiki/GPT%E2%80%90SoVITS%E2%80%90v3v4%E2%80%90features-(%E6%96%B0%E7%89%B9%E6%80%A7)>)
Use v3 from v2 environment:
1. `pip install -r requirements.txt` to update some packages
2. Clone the latest codes from github.
3. Download v3 pretrained models (s1v3.ckpt, s2Gv3.pth and models--nvidia--bigvgan_v2_24khz_100band_256x folder) from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main) and put them into `GPT_SoVITS\pretrained_models`.
additional: for Audio Super Resolution model, you can read [how to download](./tools/AP_BWE_main/24kto48k/readme.txt)
## V4 Release Notes
New Features:
1. Version 4 fixes the issue of metallic artifacts in Version 3 caused by non-integer multiple upsampling, and natively outputs 48k audio to prevent muffled sound (whereas Version 3 only natively outputs 24k audio). The author considers Version 4 a direct replacement for Version 3, though further testing is still needed.
[more details](<https://github.com/RVC-Boss/GPT-SoVITS/wiki/GPT%E2%80%90SoVITS%E2%80%90v3v4%E2%80%90features-(%E6%96%B0%E7%89%B9%E6%80%A7)>)
Use v4 from v1/v2/v3 environment:
1. `pip install -r requirements.txt` to update some packages
2. Clone the latest codes from github.
3. Download v4 pretrained models (gsv-v4-pretrained/s2v4.ckpt, and gsv-v4-pretrained/vocoder.pth) from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main) and put them into `GPT_SoVITS\pretrained_models`.
## Todo List
- [x] **High Priority:**
- [x] Localization in Japanese and English.
- [x] User guide.
- [x] Japanese and English dataset fine tune training.
- [ ] **Features:**
- [x] Zero-shot voice conversion (5s) / few-shot voice conversion (1min).
- [x] TTS speaking speed control.
- [ ] ~~Enhanced TTS emotion control.~~ Maybe use pretrained finetuned preset GPT models for better emotion.
- [ ] Experiment with changing SoVITS token inputs to probability distribution of GPT vocabs (transformer latent).
- [x] Improve English and Japanese text frontend.
- [ ] Develop tiny and larger-sized TTS models.
- [x] Colab scripts.
- [x] Try expand training dataset (2k hours -> 10k hours).
- [x] better sovits base model (enhanced audio quality)
- [ ] model mix
## (Additional) Method for running from the command line
Use the command line to open the WebUI for UVR5
```
python tools/uvr5/webui.py "<infer_device>" <is_half> <webui_port_uvr5>
```
<!-- If you can't open a browser, follow the format below for UVR processing,This is using mdxnet for audio processing
```
python mdxnet.py --model --input_root --output_vocal --output_ins --agg_level --format --device --is_half_precision
``` -->
This is how the audio segmentation of the dataset is done using the command line
```
python audio_slicer.py \
--input_path "<path_to_original_audio_file_or_directory>" \
--output_root "<directory_where_subdivided_audio_clips_will_be_saved>" \
--threshold <volume_threshold> \
--min_length <minimum_duration_of_each_subclip> \
    --min_interval <shortest_time_gap_between_adjacent_subclips> \
--hop_size <step_size_for_computing_volume_curve>
```
This is how dataset ASR processing is done using the command line (Chinese only)
```
python tools/asr/funasr_asr.py -i <input> -o <output>
```
ASR processing for languages other than Chinese is performed through Faster Whisper.
(No progress bars; GPU performance may cause time delays.)
```
python ./tools/asr/fasterwhisper_asr.py -i <input> -o <output> -l <language> -p <precision>
```
A custom list save path is enabled
## Credits
Special thanks to the following projects and contributors:
### Theoretical Research
- [ar-vits](https://github.com/innnky/ar-vits)
- [SoundStorm](https://github.com/yangdongchao/SoundStorm/tree/master/soundstorm/s1/AR)
- [vits](https://github.com/jaywalnut310/vits)
- [TransferTTS](https://github.com/hcy71o/TransferTTS/blob/master/models.py#L556)
- [contentvec](https://github.com/auspicious3000/contentvec/)
- [hifi-gan](https://github.com/jik876/hifi-gan)
- [fish-speech](https://github.com/fishaudio/fish-speech/blob/main/tools/llama/generate.py#L41)
- [f5-TTS](https://github.com/SWivid/F5-TTS/blob/main/src/f5_tts/model/backbones/dit.py)
- [shortcut flow matching](https://github.com/kvfrans/shortcut-models/blob/main/targets_shortcut.py)
### Pretrained Models
- [Chinese Speech Pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain)
- [Chinese-Roberta-WWM-Ext-Large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large)
- [BigVGAN](https://github.com/NVIDIA/BigVGAN)
### Text Frontend for Inference
- [paddlespeech zh_normalization](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/paddlespeech/t2s/frontend/zh_normalization)
- [split-lang](https://github.com/DoodleBears/split-lang)
- [g2pW](https://github.com/GitYCC/g2pW)
- [pypinyin-g2pW](https://github.com/mozillazg/pypinyin-g2pW)
- [paddlespeech g2pw](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/paddlespeech/t2s/frontend/g2pw)
### WebUI Tools
- [ultimatevocalremovergui](https://github.com/Anjok07/ultimatevocalremovergui)
- [audio-slicer](https://github.com/openvpi/audio-slicer)
- [SubFix](https://github.com/cronrpc/SubFix)
- [FFmpeg](https://github.com/FFmpeg/FFmpeg)
- [gradio](https://github.com/gradio-app/gradio)
- [faster-whisper](https://github.com/SYSTRAN/faster-whisper)
- [FunASR](https://github.com/alibaba-damo-academy/FunASR)
- [AP-BWE](https://github.com/yxlu-0102/AP-BWE)
Thanks to @Naozumi520 for providing the Cantonese training set and for guidance on Cantonese-related knowledge.
## Thanks to all contributors for their efforts
<a href="https://github.com/RVC-Boss/GPT-SoVITS/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=RVC-Boss/GPT-SoVITS" />
</a>
|
18-Shah-Sapna-Kumari-Viral-Video/Full.Clip.Sapna.Shah.Viral.Video.Original.Link.Trending | 18-Shah-Sapna-Kumari-Viral-Video | 2025-05-01T05:08:06Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-01T05:01:52Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/yrv67ytk?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Shah Sapna Kumari viral video trending across platforms like YouTube and social media. Here’s what you need to know in 2025. We break down the facts, the timeline, and clear up the misinformation. Who is Shah Sapna Kumari? What’s the video really about? And why is it going viral? Stay informed with verified updates, public reactions, and a responsible take
|
joboffer/adf286e3-4506-4f69-8f24-7b9a9692a008 | joboffer | 2025-05-01T05:07:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b",
"base_model:adapter:unsloth/llama-2-7b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T04:54:12Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: adf286e3-4506-4f69-8f24-7b9a9692a008
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-2-7b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f76d7fca1023a98b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f76d7fca1023a98b_train_data.json
type:
field_input: domain
field_instruction: query
field_output: api_list
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: joboffer/adf286e3-4506-4f69-8f24-7b9a9692a008
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/f76d7fca1023a98b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4c469ccc-99bf-49e2-904b-286196c7e713
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 4c469ccc-99bf-49e2-904b-286196c7e713
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# adf286e3-4506-4f69-8f24-7b9a9692a008
This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8574 | 0.0192 | 200 | 0.6357 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MandelBarnard/MandelBarnard | MandelBarnard | 2025-05-01T05:07:42Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-01T05:07:42Z | ---
license: creativeml-openrail-m
---
|
NikolaiRaitschew/q5_30_04 | NikolaiRaitschew | 2025-05-01T05:07:07Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T05:06:10Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** NikolaiRaitschew
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shibajustfor/ff9d6e30-39e1-45b9-af07-7eebbfa91a2a | shibajustfor | 2025-05-01T05:05:15Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:NousResearch/Yarn-Solar-10b-64k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-64k",
"region:us"
] | null | 2025-05-01T05:04:19Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: NousResearch/Yarn-Solar-10b-64k
model-index:
- name: shibajustfor/ff9d6e30-39e1-45b9-af07-7eebbfa91a2a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shibajustfor/ff9d6e30-39e1-45b9-af07-7eebbfa91a2a
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
gaianet/Qwen3-4B-GGUF | gaianet | 2025-05-01T05:02:29Z | 355 | 0 | null | [
"gguf",
"qwen3",
"chat",
"text-generation",
"en",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T13:44:03Z | ---
base_model: Qwen/Qwen3-4B
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
model_creator: Qwen
model_name: Qwen3-4B
quantized_by: Second State Inc.
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen3-4B-GGUF
## Original Model
[Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B)
## Run with Gaianet
**Prompt template**
prompt template:
- `chatml` (for thinking)
- `qwen3-no-think` (for no thinking)
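For reference, the `chatml` prompt string expected by Qwen3 looks like the sketch below; the `qwen3-no-think` variant additionally appends an empty `<think></think>` block after the final assistant tag.

```text
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```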
**Context size**
chat_ctx_size: `128000`
**Run with GaiaNet**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b5097* |
gaianet/Qwen3-0.6B-GGUF | gaianet | 2025-05-01T05:02:17Z | 265 | 0 | null | [
"gguf",
"qwen3",
"chat",
"text-generation",
"en",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T13:14:25Z | ---
base_model: Qwen/Qwen3-0.6B
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
model_creator: Qwen
model_name: Qwen3-0.6B
quantized_by: Second State Inc.
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen3-0.6B-GGUF
## Original Model
[Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)
## Run with Gaianet
**Prompt template**
prompt template:
- `chatml` (for thinking)
- `qwen3-no-think` (for no thinking)
**Context size**
chat_ctx_size: `128000`
**Run with GaiaNet**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b5097* |
gaianet/Qwen3-1.7B-GGUF | gaianet | 2025-05-01T05:02:05Z | 161 | 0 | transformers | [
"transformers",
"gguf",
"qwen3",
"text-generation",
"chat",
"en",
"base_model:Qwen/Qwen3-1.7B",
"base_model:quantized:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T23:13:52Z | ---
base_model: Qwen/Qwen3-1.7B
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE
model_creator: Qwen
model_name: Qwen3-1.7B
quantized_by: Second State Inc.
language:
- en
pipeline_tag: text-generation
tags:
- chat
library_name: transformers
---
# Qwen3-1.7B-GGUF
## Original Model
[Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B)
## Run with Gaianet
**Prompt template**
prompt template:
- `chatml` (for thinking)
- `qwen3-no-think` (for no thinking)
**Context size**
chat_ctx_size: `128000`
**Run with GaiaNet**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b5097* |
mradermacher/pavement7bv1-GGUF | mradermacher | 2025-05-01T05:00:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:newchangertech/pavement7bv1",
"base_model:quantized:newchangertech/pavement7bv1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T04:49:35Z | ---
base_model: newchangertech/pavement7bv1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/newchangertech/pavement7bv1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
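As a minimal sketch, assuming one of the single-file quants from the table below has been downloaded locally, it can be loaded with llama-cpp-python:

```python
# Assumed local path to a quant from the table below.
from llama_cpp import Llama

llm = Llama(model_path="pavement7bv1.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write one sentence about road pavement.", max_tokens=64)
print(out["choices"][0]["text"])
```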
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/pavement7bv1-GGUF/resolve/main/pavement7bv1.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/pavement7bv1-GGUF/resolve/main/pavement7bv1.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/pavement7bv1-GGUF/resolve/main/pavement7bv1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/pavement7bv1-GGUF/resolve/main/pavement7bv1.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/pavement7bv1-GGUF/resolve/main/pavement7bv1.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/pavement7bv1-GGUF/resolve/main/pavement7bv1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pavement7bv1-GGUF/resolve/main/pavement7bv1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pavement7bv1-GGUF/resolve/main/pavement7bv1.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/pavement7bv1-GGUF/resolve/main/pavement7bv1.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/pavement7bv1-GGUF/resolve/main/pavement7bv1.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/pavement7bv1-GGUF/resolve/main/pavement7bv1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/pavement7bv1-GGUF/resolve/main/pavement7bv1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jxjessieli/llama-3.1_wildchat20k_5e-7 | jxjessieli | 2025-05-01T04:59:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T04:50:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/qwen3-conscious-fullmodel-GGUF | mradermacher | 2025-05-01T04:57:01Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Guilherme34/qwen3-conscious-fullmodel",
"base_model:quantized:Guilherme34/qwen3-conscious-fullmodel",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T04:51:31Z | ---
base_model: Guilherme34/qwen3-conscious-fullmodel
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Guilherme34/qwen3-conscious-fullmodel
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/qwen3-conscious-fullmodel-GGUF/resolve/main/qwen3-conscious-fullmodel.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen3-conscious-fullmodel-GGUF/resolve/main/qwen3-conscious-fullmodel.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen3-conscious-fullmodel-GGUF/resolve/main/qwen3-conscious-fullmodel.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/qwen3-conscious-fullmodel-GGUF/resolve/main/qwen3-conscious-fullmodel.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen3-conscious-fullmodel-GGUF/resolve/main/qwen3-conscious-fullmodel.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen3-conscious-fullmodel-GGUF/resolve/main/qwen3-conscious-fullmodel.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen3-conscious-fullmodel-GGUF/resolve/main/qwen3-conscious-fullmodel.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen3-conscious-fullmodel-GGUF/resolve/main/qwen3-conscious-fullmodel.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen3-conscious-fullmodel-GGUF/resolve/main/qwen3-conscious-fullmodel.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen3-conscious-fullmodel-GGUF/resolve/main/qwen3-conscious-fullmodel.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/qwen3-conscious-fullmodel-GGUF/resolve/main/qwen3-conscious-fullmodel.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/qwen3-conscious-fullmodel-GGUF/resolve/main/qwen3-conscious-fullmodel.f16.gguf) | f16 | 1.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Alfa-v2-GGUF | mradermacher | 2025-05-01T04:51:36Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:aydndglr/Alfa-v2",
"base_model:quantized:aydndglr/Alfa-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T04:39:30Z | ---
base_model: aydndglr/Alfa-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/aydndglr/Alfa-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alfa-v2-GGUF/resolve/main/Alfa-v2.Q3_K_S.gguf) | Q3_K_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alfa-v2-GGUF/resolve/main/Alfa-v2.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alfa-v2-GGUF/resolve/main/Alfa-v2.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alfa-v2-GGUF/resolve/main/Alfa-v2.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alfa-v2-GGUF/resolve/main/Alfa-v2.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Alfa-v2-GGUF/resolve/main/Alfa-v2.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alfa-v2-GGUF/resolve/main/Alfa-v2.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alfa-v2-GGUF/resolve/main/Alfa-v2.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Alfa-v2-GGUF/resolve/main/Alfa-v2.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alfa-v2-GGUF/resolve/main/Alfa-v2.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Alfa-v2-GGUF/resolve/main/Alfa-v2.Q8_0.gguf) | Q8_0 | 1.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Alfa-v2-GGUF/resolve/main/Alfa-v2.f16.gguf) | f16 | 2.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ayoub-66/mbart-vending-error-model | ayoub-66 | 2025-05-01T04:50:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-01T04:48:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jxjessieli/mistral_simple20k_1e-6 | jxjessieli | 2025-05-01T04:50:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T11:06:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abhinandan005/autotrain-1vtsr-xes2p | abhinandan005 | 2025-05-01T04:49:38Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"bert",
"autotrain",
"text-regression",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"doi:10.57967/hf/5311",
"region:us"
] | null | 2025-05-01T04:41:19Z |
---
tags:
- autotrain
- text-regression
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Regression
## Validation Metrics
loss: 0.031481023877859116
mse: 0.031481023877859116
mae: 0.15383689105510712
r2: 0.004698693752288818
rmse: 0.1774289262715049
explained_variance: 0.02726966142654419
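As a quick consistency check, rmse is the square root of mse: √0.031481 ≈ 0.17743, matching the reported value.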
|
JunSotohigashi/easy-fire-148 | JunSotohigashi | 2025-05-01T04:49:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T04:48:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
second-state/Qwen3-14B-GGUF | second-state | 2025-05-01T04:47:41Z | 432 | 1 | transformers | [
"transformers",
"gguf",
"qwen3",
"text-generation",
"chat",
"en",
"base_model:Qwen/Qwen3-14B",
"base_model:quantized:Qwen/Qwen3-14B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T23:03:05Z | ---
base_model: Qwen/Qwen3-14B
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
model_creator: Qwen
model_name: Qwen3-14B
quantized_by: Second State Inc.
language:
- en
pipeline_tag: text-generation
tags:
- chat
library_name: transformers
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Qwen3-14B-GGUF
## Original Model
[Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B)
## Run with LlamaEdge
- LlamaEdge version:
- Thinking: [v0.17.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.17.0) and above
- No Thinking: [v0.18.2](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.18.2)
- Prompt template
- Prompt type: `chatml` (for thinking)
- Prompt string
```text
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- Prompt type: `qwen3-no-think` (for no thinking)
- Prompt string
```text
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message_1}<|im_end|>
<|im_start|>assistant
{assistant_message_1}<|im_end|>
<|im_start|>user
{user_message_2}<|im_end|>
<|im_start|>assistant
<think>
</think>
```
- Context size: `128000`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen3-14B-Q5_K_M.gguf \
llama-api-server.wasm \
--model-name Qwen3-14B \
--prompt-template chatml \
--ctx-size 128000
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen3-14B-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template chatml \
--ctx-size 128000
```
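For no-thinking mode, the same commands apply with `--prompt-template qwen3-no-think` in place of `chatml`.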
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Qwen3-14B-Q2_K.gguf](https://huggingface.co/second-state/Qwen3-14B-GGUF/blob/main/Qwen3-14B-Q2_K.gguf) | Q2_K | 2 | 5.75 GB| smallest, significant quality loss - not recommended for most purposes |
| [Qwen3-14B-Q3_K_L.gguf](https://huggingface.co/second-state/Qwen3-14B-GGUF/blob/main/Qwen3-14B-Q3_K_L.gguf) | Q3_K_L | 3 | 7.90 GB| small, substantial quality loss |
| [Qwen3-14B-Q3_K_M.gguf](https://huggingface.co/second-state/Qwen3-14B-GGUF/blob/main/Qwen3-14B-Q3_K_M.gguf) | Q3_K_M | 3 | 7.32 GB| very small, high quality loss |
| [Qwen3-14B-Q3_K_S.gguf](https://huggingface.co/second-state/Qwen3-14B-GGUF/blob/main/Qwen3-14B-Q3_K_S.gguf) | Q3_K_S | 3 | 6.66 GB| very small, high quality loss |
| [Qwen3-14B-Q4_0.gguf](https://huggingface.co/second-state/Qwen3-14B-GGUF/blob/main/Qwen3-14B-Q4_0.gguf) | Q4_0 | 4 | 8.52 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen3-14B-Q4_K_M.gguf](https://huggingface.co/second-state/Qwen3-14B-GGUF/blob/main/Qwen3-14B-Q4_K_M.gguf) | Q4_K_M | 4 | 9.00 GB| medium, balanced quality - recommended |
| [Qwen3-14B-Q4_K_S.gguf](https://huggingface.co/second-state/Qwen3-14B-GGUF/blob/main/Qwen3-14B-Q4_K_S.gguf) | Q4_K_S | 4 | 8.57 GB| small, greater quality loss |
| [Qwen3-14B-Q5_0.gguf](https://huggingface.co/second-state/Qwen3-14B-GGUF/blob/main/Qwen3-14B-Q5_0.gguf) | Q5_0 | 5 | 10.3 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen3-14B-Q5_K_M.gguf](https://huggingface.co/second-state/Qwen3-14B-GGUF/blob/main/Qwen3-14B-Q5_K_M.gguf) | Q5_K_M | 5 | 10.5 GB| large, very low quality loss - recommended |
| [Qwen3-14B-Q5_K_S.gguf](https://huggingface.co/second-state/Qwen3-14B-GGUF/blob/main/Qwen3-14B-Q5_K_S.gguf) | Q5_K_S | 5 | 10.3 GB| large, low quality loss - recommended |
| [Qwen3-14B-Q6_K.gguf](https://huggingface.co/second-state/Qwen3-14B-GGUF/blob/main/Qwen3-14B-Q6_K.gguf) | Q6_K | 6 | 12.1 GB| very large, extremely low quality loss |
| [Qwen3-14B-Q8_0.gguf](https://huggingface.co/second-state/Qwen3-14B-GGUF/blob/main/Qwen3-14B-Q8_0.gguf) | Q8_0 | 8 | 15.7 GB| very large, extremely low quality loss - not recommended |
| [Qwen3-14B-f16.gguf](https://huggingface.co/second-state/Qwen3-14B-GGUF/blob/main/Qwen3-14B-f16.gguf) | f16 | 16 | 29.5 GB| |
*Quantized with llama.cpp b5097* |
bnkc123/obscura-v1 | bnkc123 | 2025-05-01T04:46:42Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"tensorboard",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:909",
"loss:MatryoshkaLoss",
"loss:TripletLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1703.07737",
"base_model:BAAI/bge-large-en-v1.5",
"base_model:finetune:BAAI/bge-large-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-01T03:32:52Z | ---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:909
- loss:MatryoshkaLoss
- loss:TripletLoss
base_model: BAAI/bge-large-en-v1.5
widget:
- source_sentence: What does Safeco define as 'permanently attached living quarters'
for a motor home?
sentences:
- 'south dakota * rv program rules back to table of contents safeco insurance company
of america 7 program rules this section details the types of vehicles permitted
in the rv program and the rules for writing these vehicles. rv includes motor
homes, travel trailers, sport travel trailers (includes horse trailers with living/sleeping
quarters), fifth wheel trailers, folding camping trailers, truck mounted campers,
and utility and horse trailers. rvs insured with safeco are intended for personal/
recreational use for up to 250 days per year. business and commercial activities
are not permitted. motor homes we consider "motor homes" to be self-propelled
mobile homes (including pickups or vans with permanently attached living quarters
and used for recreational purposes) and provide temporary living quarters. "permanently
attached living quarters" means: cooking, refrigerator, bathroom with plumbing,
self contained heating and/or cooling, 110-125 electric supply and drinkable water
supply. motor homes are not eligible for, and do not make other vehicles eligible
for, account discount, distant student discount or good student discount. motor
homes that carry liability coverage are not eligible for, but do extend, the multi-car
discount to a private passenger auto. damage to your covered vehicle comprehensive
and collision coverages are rated on the basis of actual cash value/current market
value. travel trailers and camping trailers we consider "trailers" to be non-motorized
vehicles that are intended to be towed and used for recreational purposes and
are equipped with a living quarters, with the exception of horse and utility trailers.
only physical damage coverage (comprehensive, collision or both) is available
for trailers.'
- 'south dakota * rv rv types back to table of contents safeco insurance company
of america 6 rv types permitted rv types the recreational vehicle types shown
below are a general representation of the vehicles written in safeco''s rv program.
in addition to the vehicles shown below, utility trailers and horse trailers are
also permitted in the rv program. see the program rules for vehicle type requirements.
. class a - motor home class b - motor home class c - motor home truck mounted
camper conventional trailer sport travel trailer fifth-wheel trailer folding camping
trailer source: recreational vehicle industry association (rvia), 01/06'
- the surcharge will depend on the conviction or reason for filing. if a risk appears
acceptable, submit on an unbound basis. include a copy of the official notice
requesting a certificate. surcharges will be calculated by the home office. application
of surcharge risks for whom we agree to issue a filing of financial responsibility
will be surcharged under this rule as well as under the ddp, if applicable. the
erie must certify that the driver is covered under every circumstance. there are
some nonowned situations excluded by personal auto policy language. in these cases,
the exclusions must be removed by endorsement and proper premiums applied. apply
surcharges as shown below to the auto principally driven by the person for whom
this filing is issued. in the case of named non- owner coverage, apply the surcharge
to the premium obtained from rule 21, named non- owner coverage, in these rules
pages. surcharge table apply only during the period of time that the certificate
is required. certified risk surcharge table conviction surcharge driving while
under the influence of drugs or alcohol. 50% failing to stop and report when involved
in an accident. 50% assault or a felony involving a motor vehicle. 50% speeding
that resulted in bi, pd or csl. 25% reckless driving that resulted in bi, pd or
csl. 25% all other cases requiring financial responsibility filings. 5% private
passenger auto va rules erie insurance exchange erie insurance company 5 effective
12/1/19
- source_sentence: Which broad form endorsement in Alabama extends coverage to employees
who are not subject to the workers compensation law?
sentences:
- workers' compensation and employers' liability alabama effective september 1,
2012 table of contents section title page number broad form coverage.........................................................1
association, franchise or safety group schedule rating...........4 managed care
premium credit rating plan............................5 large deductible program...................................................11
schedule rating plan.........................................................16
large risk alternative rating option.....................................18 waiver
of our right to recover from others............................19 negotiated rating
option plan.............................................20 total account billing
system (tabs)....................................21 participating dividend plans...............................................32
guaranteed cost large risk alternative rating option............59
- 'i. personal effects coverage 1. coverage for personal effects is provided for
recreational vehicles at basic limits with no additional charge when comprehensive
and collision coverages are provided. refer to the policy provisions for the full
extent of coverage and any limitations. 2. coverage is not provided and is not
available when comprehensive and collision coverages are not provided. -------------------------------------------------------------------------------------
print date: 12/10/98 3 rule section 15 -------------------------------------------------------------------------------------'
- 'workers'' compensation and employers'' liability rule manual alabama effective
april 15, 1997 page 1 broad form coverage a. forms wc 99 03 00 and wc 99 03 01
are optional endorsements developed for use with the workers compensation and
employers liability policy. only one of these forms may be attached to a policy.
the workers compensation broad form endorsement wc 99 03 00 changes the policy
as follows: 1. we will also pay as part of any claim, proceeding or suit we defend
we will pay for reasonable expenses incurred at our request, including loss of
earnings. loss of earnings is not covered by the standard workers compensation
policy. 2. other states insurance the standard policy states that if a risk has
work on the effective date of the policy in any state not listed in item 3.a.
of the information page coverage will not be afforded unless we are notified within
30 days. the reporting requirement is extended to 60 days. 3. transfer of your
rights and duties if an insured dies and we receive notice within 30 days after
death, we will cover the legal representative as insured. the reporting requirement
is extended to 60 days. 4. cancellation the standard policy requires 10 days advance
notice of cancellation unless the law requires otherwise. the notice requirement
is extended to 15 days. 5. liberalization if a change is adopted in the form that
would broaden the coverage without extra charge, the broader coverage will apply
to this policy.'
- source_sentence: What coverage reimburses the outstanding loan balance beyond the
actual cash value for a newly financed car?
sentences:
- 'personal lines auto rule manual safety pays auto program page 13.3 of 3 07/01/16
ids property casualty insurance company california i. new car replacement coverage
policies providing physical damage coverage (comprehensive and collision) may
be endorsed to include coverage for the difference between the actual cash value
and cost of a new auto of the same make and model. the rates for such coverage
can be found in the rate manual. additional provisions: 1. coverage is only applicable
to new automobiles not previously titled by a state. 2. new car replacement coverage
must be requested by the insured within a 30 day period following the purchase
of a new automobile. 3. only vehicles that are classified and rated as private
passenger vehicles and 4 wheel vehicles having a load capacity of 1,500 pounds
or less are subject to the provision of this rule. 4. only vehicles with 1000
miles or less at the time of purchase. h. new car expanded protection coverages
(new car replacement / gap) policies providing physical damage coverage (comprehensive
and collision) may be endorsed to include coverage for the difference between
the actual cash value and the outstanding indebtedness on a loan taken out by
the insured to finance the purchase of a new automobile. the rates for such coverage
can be found in the rate manual. additional provisions: 1. coverage is only applicable
to new automobiles not previously titled by a state. 2. new car replacement /
gap coverage must be requested by the insured within a 30 day period following
the purchase of a new automobile. 3. only vehicles that are classified and rated
as private passenger vehicles and 4 wheel vehicles having a load capacity of 1,500
pounds or less are subject to the provision of this rule. 4. only vehicles with
1000 miles or less at the time of purchase.'
- a revised symbol will be assigned if the value of customization with the msrp
is greater than the msrp range associated with the originally assigned symbol.
refer to the price/symbol chart located at the end of this manual. for purposes
of this rule, customization refers to interior or exterior alteration designed
to personalize or better facilitate use of the vehicle for non-business purposes
and specifically includes elaborate interior furnishings and exterior paint, glass
and body modifications. customization, however, does not include equipment commonly
installed on these vehicles such as heater, air conditioning, tires, customary
music options, power steering and power brakes, nor modifications designed to
increase the usefulness of the vehicle for business purposes.
- michigan manufactured homes manual rules 12-01-04 allstate indemnity company page
imh2-1 rule 2 - eligibility this manual is applicable to manufactured homes which
are used exclusively for residential purposes. their appurtenant private structures
are also covered if they are not used for commercial or farm purposes. coverage
for the personal effects of the occupants is also provided. trailers used extensively
for travel purposes are not eligible.
- source_sentence: Which named perils specifically apply to personal property (Coverage
C) in a standard renters or homeowners policy issued by Pacific Specialty?
sentences:
- 'pacific specialty insurance company ct - ho3/4/6 superior - man (ed. 10-23) page
9 10. losses insured below is a brief description of the losses insured (please
refer to the policy for a complete description of the coverage): a. section i
- property coverages damage to insured''s property is covered under the property
coverages section of the policy. for the following coverages, coverage is provided
for direct physical loss with certain exclusions: * coverage a - dwelling (not
applicable to renters) * coverage b - other structures (not applicable to renters
or unit-owners) * coverage d - loss of use coverage c (personal property) provides
for direct loss caused by the following named perils, unless excluded and/or excepted
in the policy: 1. fire or lightning 2. windstorm or hail 3. explosion 4. riot
or civil commotion 5. aircraft, including self-propelled missiles and spacecraft
6. vehicles 7. sudden and accidental damage from smoke 8. vandalism or malicious
mischief 9. theft 10. falling objects 11. weight of ice, snow or sleet 12. accidental
discharge or overflow of water or steam 13. sudden and accidental tearing apart,
cracking, burning or bulging of water heater, etc. 14. freezing of plumbing, heating,
air conditioning or automatic fire protective sprinkler system, etc. 15. sudden
and accidental damage from artificially generated electrical current 16. volcanic
eruption b. section ii - liability coverages section ii liability includes coverage
for bodily injury or property damage caused by an occurrence and defense costs
associated with a suit brought against an insured for a covered claim.'
- 'acadia insurance company continental western insurance company firemen''s insurance
company of washington, d.c. union insurance company division one - commercial
automobile company exception pages - alabama effective12/1/2024 al - ca - 7 revised
5/13/2024 includes copyrighted material of iso, inc. with its permission classify
trucks, tractors and trailers for liability and physical damage coverages as follows:
a. primary classifications 1. vehicle weight gross vehicle weight rating (gvwr)
and gross combination weight (gcw) mean: a. gross vehicle weight rating the maximum
loaded weight for which a single auto is designed, as specified by the manufacturer.
b. gross combination weight the maximum loaded weight for a combination truck-tractor
and semitrailer or trailer for which the truck-tractor is designed, as specified
by the manufacturer. 2. size class if a bus is rated at truck, tractor or trailer
rates, determine the size class from the seating capacity as follows: seating
capacity size class 1 - 8 light 9 - 20 medium 21 - 60 heavy over 60 extra-heavy
table 223.a.2. size class for buses rated as trucks otherwise: a. light trucks
trucks that have a gross vehicle weight rating (gvwr) of 10,000 pounds or less.
b. medium trucks trucks that have a gross vehicle weight rating (gvwr) of 10,001
- 20,000 pounds. c. heavy trucks trucks that have a gross vehicle weight rating
(gvwr) of 20,001 - 45,000 pounds. d. extra-heavy trucks trucks that have a gross
vehicle weight rating (gvwr) over 45,000 pounds. e. truck-tractors a truck-tractor
is a motorized auto with or without body for carrying commodities or materials,
equipped with a fifth-wheel coupling device for semitrailers.'
- 'b. renters policy 1. the tenant of any dwelling, apartment, condominium or cooperative
unit. 2. the titled owner, who is also an occupant, of a dwelling or building
containing an apartment that is not eligible for another homeowners form. 3. the
titled owner of a cooperative unit, provided: a. the portion of the premises occupied
as living quarters is used principally for private residential purposes. b. this
portion is occupied by only one family and cannot have more than two roomers or
boarders. c. this portion is designated by an apartment number or other positive
identification. c. unit-owners policy owner occupied, including seasonal, and
tenant occupied units, which are part of a community association organized under
condominium, cooperative, town house or planned development form of ownership
and where provision has been made for a master policy cov ering the residential
building(s) real property exposure. the unit must be used principally for private
residential purposes. note: the term "owner" includes persons purchasing a dwelling,
such as under a mortgage agreement or contract of sale, and must be listed on
the deed of property to be named insured.'
- source_sentence: Which additional coverage table shows a $1,000 limit for identity
theft on a standard policy but $10,000 for Platinum and GrandProtect?
sentences:
- 'farmers lloyd''s insurance company of texas texas residential property manual
updated: may, 2020 page 3 section i - additional coverages additional coverages
ho-2 homeowners, homeowners, market value, mobile homeowners, renters, condominium
and landlord''s rental platinum and grandprotect products (includes homeowners,
renters and condominium) loss of use additional living expense or fair rental
value and loss of rental income increased limits available prohibited use refer
to rule 2 yes up to 14 days refer to rule 2 yes for platinum up to 45 days debris
removal 10% 10% reasonable repairs yes yes fire department charges $750 $1000
emergency removal of property 30 days 30 days emergency living expense $500 $500
refrigerated contents $1000 $1500 identity theft and credit protection (cov. 9)
increased limits available $1000 yes $10,000 no data and records $1500 for personal
none for business $2500 lock replacement yes yes reward coverage $5000 $5000 trees,
shrubs and plants (coverage 12) increased limits available $500 per item/ 5% aggregate
yes $500 per item/ 5% aggregate yes loss assessment (coverage 6) increased limits
available $1000 yes $10,000 yes land $10,000 $10,000 volcanic action yes yes collapse
yes yes inflation protection yes yes landlord''s furnishings $2500 $2500 fungus
and mold remediation $5000 $5000 backup of sewer, drain and sump pump (coverage
13) optional $10,000 increased limits available newly acquired watercraft n/a
with grandprotect identity fraud n/a with grandprotect ordinance or law (coverage
15) optional grandprotect - blank property limit platinum - 50% of cov.'
- a increased limits available section ii - additional coverages additional coverages
ho-2 homeowners, homeowners, market value, mobile homeowners, renters, condominium
and landlord's rental platinum and grandprotect products (includes homeowners,
renters and condominium) damage of property of others $500 $1500 claim expenses
yes, including $200 for lost wages yes, including $250 for lost wages first aid
expenses yes yes borrowed or rented watercraft n/a with grandprotect personal
injury (coverage 25) optional included
- 'f. physical damage coverage (comprehensive and collision coverages) the policy
may provide comprehensive on a full coverage basis or on a $50, $100, $200, $250,
$300, $500, $1,000, $2,000, $2,500 or $5,000 deductible basis and collision on
a $50, $100, $150, $200, $250, $300, $500, $1,000, $2,000, $2,500 or $5,000 deductible
basis. towing must be purchased when comprehensive is purchased. collision coverage
may not be purchased without comprehensive coverage. also included in the physical
damage coverage are: i. towing and labor costs up to $50 per disablement (refer
to the towing coverage rule for additional limits); ii. transportation cost to
intended destination up to $50 per occurrence, iii. loss of clothes and luggage
up to $300 per occurrence, iv. rental reimbursement not exceeding $25 a day or
$750 if loss by theft, and v. general average and salvage charges due to transporting
the automobile. the deductible savings benefit (dsb) accumulates $50 to the policy
at each anniversary if no claim has been made in the past year. this benefit is
subject to a maximum of $250. the dsb amount reduces the deductible at the time
of a collision or comprehensive claim. the deductible on comprehensive may be
eliminated for safety glass breakage. refer to the rate pages for the applicable
factors. refer to the rate pages to determine the applicable premium charge.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: BGE large Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 1024
type: dim_1024
metrics:
- type: cosine_accuracy@1
value: 0.14705882352941177
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.3333333333333333
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4117647058823529
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5294117647058824
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.14705882352941177
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.11111111111111112
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.08235294117647057
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.05294117647058822
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.14705882352941177
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.3333333333333333
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.4117647058823529
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5294117647058824
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.32548811551247103
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.262037037037037
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.26987000260943805
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.14705882352941177
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.30392156862745096
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4117647058823529
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5098039215686274
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.14705882352941177
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.10130718954248366
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.08235294117647057
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.05098039215686273
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.14705882352941177
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.30392156862745096
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.4117647058823529
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5098039215686274
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3130720788269893
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.2518635231870526
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2606999758024067
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.12745098039215685
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.3235294117647059
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4117647058823529
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.49019607843137253
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.12745098039215685
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.10784313725490197
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.08235294117647057
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.04901960784313724
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.12745098039215685
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.3235294117647059
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.4117647058823529
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.49019607843137253
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3020923874535027
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.24276766262060376
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.25222170335142596
name: Cosine Map@100
---
# BGE large Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) <!-- at revision d4aa6901d3a41ba39fb536a557fa166f842b0e09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("bnkc123/obscura-v1")
# Run inference
sentences = [
'Which additional coverage table shows a $1,000 limit for identity theft on a standard policy but $10,000 for Platinum and GrandProtect?',
"farmers lloyd's insurance company of texas texas residential property manual updated: may, 2020 page 3 section i - additional coverages additional coverages ho-2 homeowners, homeowners, market value, mobile homeowners, renters, condominium and landlord's rental platinum and grandprotect products (includes homeowners, renters and condominium) loss of use additional living expense or fair rental value and loss of rental income increased limits available prohibited use refer to rule 2 yes up to 14 days refer to rule 2 yes for platinum up to 45 days debris removal 10% 10% reasonable repairs yes yes fire department charges $750 $1000 emergency removal of property 30 days 30 days emergency living expense $500 $500 refrigerated contents $1000 $1500 identity theft and credit protection (cov. 9) increased limits available $1000 yes $10,000 no data and records $1500 for personal none for business $2500 lock replacement yes yes reward coverage $5000 $5000 trees, shrubs and plants (coverage 12) increased limits available $500 per item/ 5% aggregate yes $500 per item/ 5% aggregate yes loss assessment (coverage 6) increased limits available $1000 yes $10,000 yes land $10,000 $10,000 volcanic action yes yes collapse yes yes inflation protection yes yes landlord's furnishings $2500 $2500 fungus and mold remediation $5000 $5000 backup of sewer, drain and sump pump (coverage 13) optional $10,000 increased limits available newly acquired watercraft n/a with grandprotect identity fraud n/a with grandprotect ordinance or law (coverage 15) optional grandprotect - blank property limit platinum - 50% of cov.",
"a increased limits available section ii - additional coverages additional coverages ho-2 homeowners, homeowners, market value, mobile homeowners, renters, condominium and landlord's rental platinum and grandprotect products (includes homeowners, renters and condominium) damage of property of others $500 $1500 claim expenses yes, including $200 for lost wages yes, including $250 for lost wages first aid expenses yes yes borrowed or rented watercraft n/a with grandprotect personal injury (coverage 25) optional included",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
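Because the model was trained with Matryoshka dimensions of 1024, 512 and 256, embeddings can be truncated to a smaller size with only a modest drop in retrieval quality (see the evaluation tables below). A minimal sketch, assuming a sentence-transformers version that supports the `truncate_dim` argument (≥ 2.7):

```python
from sentence_transformers import SentenceTransformer

# Load the model so it outputs 256-dimensional embeddings instead of 1024
model = SentenceTransformer("bnkc123/obscura-v1", truncate_dim=256)

embeddings = model.encode([
    "Which additional coverage table shows a $1,000 limit for identity theft?",
    "fire department charges $750 $1000",
])
print(embeddings.shape)
# (2, 256)
```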
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_1024`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 1024
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1471 |
| cosine_accuracy@3 | 0.3333 |
| cosine_accuracy@5 | 0.4118 |
| cosine_accuracy@10 | 0.5294 |
| cosine_precision@1 | 0.1471 |
| cosine_precision@3 | 0.1111 |
| cosine_precision@5 | 0.0824 |
| cosine_precision@10 | 0.0529 |
| cosine_recall@1 | 0.1471 |
| cosine_recall@3 | 0.3333 |
| cosine_recall@5 | 0.4118 |
| cosine_recall@10 | 0.5294 |
| **cosine_ndcg@10** | **0.3255** |
| cosine_mrr@10 | 0.262 |
| cosine_map@100 | 0.2699 |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1471 |
| cosine_accuracy@3 | 0.3039 |
| cosine_accuracy@5 | 0.4118 |
| cosine_accuracy@10 | 0.5098 |
| cosine_precision@1 | 0.1471 |
| cosine_precision@3 | 0.1013 |
| cosine_precision@5 | 0.0824 |
| cosine_precision@10 | 0.051 |
| cosine_recall@1 | 0.1471 |
| cosine_recall@3 | 0.3039 |
| cosine_recall@5 | 0.4118 |
| cosine_recall@10 | 0.5098 |
| **cosine_ndcg@10** | **0.3131** |
| cosine_mrr@10 | 0.2519 |
| cosine_map@100 | 0.2607 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1275 |
| cosine_accuracy@3 | 0.3235 |
| cosine_accuracy@5 | 0.4118 |
| cosine_accuracy@10 | 0.4902 |
| cosine_precision@1 | 0.1275 |
| cosine_precision@3 | 0.1078 |
| cosine_precision@5 | 0.0824 |
| cosine_precision@10 | 0.049 |
| cosine_recall@1 | 0.1275 |
| cosine_recall@3 | 0.3235 |
| cosine_recall@5 | 0.4118 |
| cosine_recall@10 | 0.4902 |
| **cosine_ndcg@10** | **0.3021** |
| cosine_mrr@10 | 0.2428 |
| cosine_map@100 | 0.2522 |
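The tables above come from `InformationRetrievalEvaluator` runs at each truncation dimension. A minimal sketch of how such an evaluation can be reproduced — the toy query/corpus data below is illustrative, not the actual evaluation set:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("bnkc123/obscura-v1")

# Toy data: query id -> text, doc id -> text, query id -> set of relevant doc ids
queries = {"q1": "Which table lists the fire department charge limits?"}
corpus = {
    "d1": "fire department charges $750 $1000",
    "d2": "lock replacement yes yes",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries, corpus, relevant_docs, name="dim_256", truncate_dim=256
)
print(evaluator(model))  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
```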
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 909 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 909 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 25.39 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 307.94 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 263.84 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>For a wood shake or shingle roof in good condition, what is the maximum age allowed for Replacement Cost coverage on a DP-3 policy?</code> | <code>roofing systems in fair condition do not qualify for replacement cost coverage. roofing systems in poor condition will have coverage for the roofing system limited to fire and lightning only regardless of age. roofing system age / condition of the roof system excellent condition good condition asphalt / composition 1-22 1-15 slate 1-35 1-28 metal 1-56 1-34 flat/built-up/roll n/a n/a tile 1-35 1-28 wood shake / shingle 1-13 1-8</code> | <code>aegis california secondary residence insurance program 29 dp-man-ca (ed. 8) 16. roofs a. roof age the signed application will specifically disclose the age of the roof. the age of the roof is determined by subtracting the year the roof was installed from the year that the policy takes effect. the roof age will be updated manually at each policy renewal. if the roof age is updated or changed due to roof replacement, a copy of evidence (e.g. - copy of roof manufacturer's warranty indicating replacement date, copy of roof age disclosure statement from real estate transaction, receipt from roofing contractor) showing the date the roof was replaced must be submitted to the company. b. roof system type acceptable roof systems are as follows: 1. asphalt / composition - includes: (a) asphalt - shingle (fiberglass) (b) asphalt - shingle (architectural) (c) asphalt - shingle (architectural - hq) (d) composite - impact resistance shingle (e) composite - shake (f) composite - tile 2. slate - inclu...</code> |
| <code>Which coverage form is used to insure the personal property of a tenant occupying a single-family dwelling or 1–4 family dwelling?</code> | <code>american commerce insurance company ohio property program rules manual american commerce insurance company page 3 of 39 (04/20) form types ho3: special form- provides "open perils" coverage on the dwelling and other structures and "named perils" coverage on personal property. this policy may be written on an owner- occupied single-family, duplex, triplex, and fourplex dwelling used exclusively for private residential purposes with no more than 1 family per unit. at least one unit of the multi-family dwelling must be occupied by the insured. ho4: contents broad form - provides "named perils" coverage on the personal property of a tenant(s) occupying an apartment, townhouse, condominium, single-family dwelling or one unit in a 1-4 family dwelling used exclusively for private residential purposes with no more than 2 roomers or boarders. ho6: unit owners form - provides "named perils" coverage on building property and personal property for an insured who resides in an owner-occupied single...</code> | <code>american commerce insurance company ohio property program rules manual american commerce insurance company page 4 of 39 (04/20) package policy requirements the following minimum limits apply to each form type. minimum package policy requirements ho3 ho4 ho6 cva base coverage -100% replacement cost n/a 10% of cvc cvb 10% of cva n/a n/a cvc 70% of cva base coverage base coverage cvd 20% of cva 40% of cvc 40% of cvc cvl $100,000 $100,000 $100,000 cvm $1,000 $1,000 $1,000</code> |
| <code>How does the manual define a seasonal dwelling?</code> | <code>safeport insurance company homeowners program manual - south carolina (2020) general rules includes copyrighted material of insurance services office, inc. with its permission page 9 of 36 f. permitted business occupancies certain business occupancies are permitted, pro- vided: 1. the premises is occupied principally for private residential purposes, and 2. there is no other business occupancy on the premises. when the business is conducted on the residence premises, refer to rules 509. and 510. for section i coverage and rules 607. and 608. for section ii cov- erage. when it is conducted from an other resi- dence, only section ii coverage is available. refer to rules 607. and 608. g. farm property a homeowners policy shall not be issued to cover any property to which farm forms or rates apply under the rules of the company, except as noted in following paragraphs 1. and 2.: 1. section i - property - livestock collision coverage may be provided for loss due to colli- sion which results...</code> | <code>safeport insurance company homeowners program manual - south carolina (2020) general rules includes copyrighted material of insurance services office, inc. with its permission page 10 of 36 3. fire resistive exterior walls and floors and roof constructed of masonry or other fire resistive materials. e. mixed (masonry/frame) a combination of both fr ame and masonry construc- tion shall be classed as frame when the exterior walls of frame construction (including gables) exceed 33 1/3% of the total exterior wall area; otherwise class as masonry. rule 108. seasonal dwelling definition a seasonal dwelling is a dwelling with continuous un-oc- cupancy of three or more consecutive months during any one-year period. rule 109. single and separate buildings definition a. single building all buildings or sections of buildings which are acces- sible through unprotected openings shall be consid- ered as a single building. b. separate building 1. buildings which are separated by space shall be consid...</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "TripletLoss",
"matryoshka_dims": [
1024,
512,
256
],
"matryoshka_weights": [
1,
1,
1
],
"n_dims_per_step": -1
}
```
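For reference, this configuration corresponds roughly to the following loss construction in sentence-transformers — a sketch assuming the standard `MatryoshkaLoss`/`TripletLoss` API, not the exact training script:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, TripletLoss

model = SentenceTransformer("BAAI/bge-large-en-v1.5")

# TripletLoss is applied at each truncated embedding size, with equal weights
inner_loss = TripletLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[1024, 512, 256],
    matryoshka_weights=[1, 1, 1],
)
```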
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `num_train_epochs`: 8
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `push_to_hub`: True
- `hub_model_id`: bnkc123/obscura-v1
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 8
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: bnkc123/obscura-v1
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_1024_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 |
|:-------:|:------:|:-------------:|:-----------------------:|:----------------------:|:----------------------:|
| **1.0** | **10** | **24.8465** | **0.5265** | **0.5079** | **0.5108** |
| 2.0 | 20 | 16.454 | 0.4701 | 0.4565 | 0.4235 |
| 3.0 | 30 | 9.4107 | 0.3821 | 0.3536 | 0.3599 |
| 4.0 | 40 | 4.786 | 0.3482 | 0.3464 | 0.3413 |
| 5.0 | 50 | 2.675 | 0.3266 | 0.3142 | 0.3150 |
| 6.0 | 60 | 1.542 | 0.3303 | 0.3161 | 0.3052 |
| 7.0 | 70 | 1.1167 | 0.3257 | 0.3131 | 0.3009 |
| 7.2105 | 72 | - | 0.3255 | 0.3131 | 0.3021 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.6
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Mohan-diffuser/w2v-bert-odia-to-eng | Mohan-diffuser | 2025-05-01T04:46:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-30T22:04:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
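In the absence of an official snippet, a minimal sketch using the `transformers` pipeline is shown below. It assumes the checkpoint ships a matching processor/tokenizer and that the input is a 16 kHz mono recording; both are assumptions, not details confirmed by this card.

```python
from transformers import pipeline

# Hypothetical usage; replace "your_audio.wav" with a real audio file
asr = pipeline("automatic-speech-recognition", model="Mohan-diffuser/w2v-bert-odia-to-eng")
print(asr("your_audio.wav")["text"])
```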
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yahyaabd/sbert-bps-custom-tokenizer | yahyaabd | 2025-05-01T04:45:08Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-01T04:26:15Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets:
- Tokenizers: 0.21.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
conquerornigel/conquerornigel | conquerornigel | 2025-05-01T04:43:31Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2025-05-01T04:43:31Z | ---
license: bsd-3-clause
---
|
paroaarti1/Original.Video.btswiki.com.paro.aarti.viral.video.mms.news | paroaarti1 | 2025-05-01T04:41:37Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-01T04:37:24Z | Original.Video.18+.btswiki.com.paro.aarti.viral.video.mms. news
Watch 🟢 ➤ ➤ ➤ <a href="https://socialbrands.cfd/fdghjkbz"> 🌐 (Original.Video.18+.btswiki.com.paro.aarti.viral.video.mms.news)
🔴 ➤►DOWNLOAD👉👉🟢 ➤<a href="https://socialbrands.cfd/fdghjkbz"> 🌐 (Original.Video.18+.btswiki.com.paro.aarti.viral.video.mms.news)
|
sonhask/Llama-3.2-3B-Instruct-bnb-4bit-detect-cookie | sonhask | 2025-05-01T04:40:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T04:40:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
win10/Mistral-rp-24b-karcher-Q6_K-GGUF | win10 | 2025-05-01T04:40:01Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:mergekit-community/Mistral-rp-24b-karcher",
"base_model:quantized:mergekit-community/Mistral-rp-24b-karcher",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T04:38:35Z | ---
base_model: mergekit-community/Mistral-rp-24b-karcher
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# win10/Mistral-rp-24b-karcher-Q6_K-GGUF
This model was converted to GGUF format from [`mergekit-community/Mistral-rp-24b-karcher`](https://huggingface.co/mergekit-community/Mistral-rp-24b-karcher) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mergekit-community/Mistral-rp-24b-karcher) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo win10/Mistral-rp-24b-karcher-Q6_K-GGUF --hf-file mistral-rp-24b-karcher-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo win10/Mistral-rp-24b-karcher-Q6_K-GGUF --hf-file mistral-rp-24b-karcher-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo win10/Mistral-rp-24b-karcher-Q6_K-GGUF --hf-file mistral-rp-24b-karcher-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo win10/Mistral-rp-24b-karcher-Q6_K-GGUF --hf-file mistral-rp-24b-karcher-q6_k.gguf -c 2048
```
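The checkpoint can also be used from Python via llama-cpp-python — a minimal sketch, assuming a recent version that provides `Llama.from_pretrained` (not part of the original conversion notes):

```python
from llama_cpp import Llama

# Downloads the GGUF file from the Hub and loads it with a 2048-token context
llm = Llama.from_pretrained(
    repo_id="win10/Mistral-rp-24b-karcher-Q6_K-GGUF",
    filename="mistral-rp-24b-karcher-q6_k.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```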
|
ibrazebra/lj-speech-finetuned-styletts2 | ibrazebra | 2025-05-01T04:38:57Z | 0 | 0 | null | [
"text-to-speech",
"tts",
"styletts2",
"ljspeech",
"finetuned",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2025-04-27T04:38:49Z | ---
license: apache-2.0
tags:
- text-to-speech
- tts
- styletts2
- ljspeech
- finetuned
---
# LJSpeech Finetuned StyleTTS 2
This repository hosts checkpoints of a StyleTTS2 model specifically adapted for high-quality single-speaker speech synthesis using the LJSpeech dataset. StyleTTS2 is a state-of-the-art text-to-speech model known for its expressive and natural-sounding voice synthesis achieved through a style diffusion mechanism.
Our finetuning process began with a robust multispeaker StyleTTS2 model, pretrained by the original authors on the extensive LibriTTS dataset for 20 epochs. This base model provides a strong foundation in learning general speech characteristics. We then specialized this model by finetuning it on the LJSpeech dataset, which comprises approximately 1 hour of speech data (around 1,000 audio samples) from a single speaker. This targeted finetuning for 50 epochs allows the model to capture the unique voice characteristics and nuances of the LJSpeech speaker. The methodology employed here demonstrates a transferable approach: StyleTTS2 can be effectively adapted to generate speech in virtually any voice, provided sufficient audio samples are available for finetuning.
## Checkpoint Details
This repository includes checkpoints from two separate finetuning runs, located in the following subdirectories:
* **`no-slm-discriminator`**: These checkpoints resulted from a finetuning run where the Speech Language Model (WavLM) was intentionally excluded as a discriminator in the style diffusion process. This decision was made due to Out-of-Memory (OOM) errors encountered on a single NVIDIA RTX 3090. Despite this modification, the finetuning proceeded successfully, taking approximately 9 hours, 23 minutes, and 54 seconds on the aforementioned hardware. Checkpoints are available at 5-epoch intervals, ranging from `epoch_2nd_00004.pth` to `epoch_2nd_00049.pth`.
* **`with-slm-discriminator`**: This set of checkpoints comes from a finetuning run that utilized the Speech Language Model (WavLM) as a discriminator, aligning with the default StyleTTS2 configuration. This integration leverages the powerful representations of WavLM to guide the style diffusion process, potentially leading to enhanced speech naturalness. This more computationally intensive run took approximately 2 days and 18 hours to complete on a single NVIDIA RTX 3090. Similar to the other run, checkpoints are provided every 5 epochs, from `epoch_2nd_00004.pth` to `epoch_2nd_00049.pth`.
## Training Details
* **Base Model:** StyleTTS2 (pretrained on LibriTTS for 20 epochs)
* **Finetuning Dataset:** LJSpeech (1 hour subset, ~1k samples)
* **Number of Epochs:** 50
* **Hardware (Run 1 - No SLM):** 1 x NVIDIA RTX 3090
* **Hardware (Run 2 - With SLM):** 1 x NVIDIA RTX 3090
* **Training Time (Run 1):** ~9 hours 24 minutes
* **Training Time (Run 2):** ~2 days 18 hours
## Usage
To leverage these finetuned StyleTTS 2 checkpoints, ensure you have the original StyleTTS2 codebase properly set up. The provided checkpoints can then be loaded using the framework's designated loading mechanisms, often involving configuration files that specify the model architecture and training parameters. Below is a general Python example illustrating how you might load a checkpoint. Remember to adjust the file paths according to your local setup and the specific loading functions provided by the StyleTTS 2 implementation.
```python
import torch
from huggingface_hub import hf_hub_download

# Example for loading a checkpoint (adjust the subfolder/epoch as needed)
repo_id = "ibrazebra/lj-speech-finetuned-styletts2"
checkpoint_path_with_slm = hf_hub_download(repo_id, "with-slm-discriminator/epoch_2nd_00049.pth")
config_path_with_slm = hf_hub_download(repo_id, "with-slm-discriminator/config_ft.yml")

# Load the state dict on CPU, then load it into your StyleTTS 2 model configured with the SLM discriminator
checkpoint_with_slm = torch.load(checkpoint_path_with_slm, map_location="cpu")
``` |
tanvirback97/Brand | tanvirback97 | 2025-05-01T04:36:12Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T04:36:12Z | ---
license: apache-2.0
---
|
rayonlabs/hf-autotrain-2025-04-30-bc7c41bc | rayonlabs | 2025-05-01T04:34:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"dataset:rayonlabs/autotrain-data-hf-autotrain-2025-04-30-bc7c41bc",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-1.5B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T14:14:41Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: Qwen/Qwen2-1.5B-Instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- rayonlabs/autotrain-data-hf-autotrain-2025-04-30-bc7c41bc
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
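# Note (assumption, not stated in this card): the "peft" tag suggests this repo may hold
# LoRA adapter weights on top of Qwen/Qwen2-1.5B-Instruct. If the plain
# AutoModelForCausalLM load above does not pick them up, a sketch using peft would be:
#
#   from peft import AutoPeftModelForCausalLM
#   model = AutoPeftModelForCausalLM.from_pretrained(model_path, device_map="auto").eval()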
``` |
sapphirehowl/jadeitegolf | sapphirehowl | 2025-05-01T04:33:23Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T04:33:22Z | ---
license: apache-2.0
---
|
jxjessieli/llama-3.1_multi-graph20k_5e-7 | jxjessieli | 2025-05-01T04:31:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T10:45:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Harsh7760/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-prickly_running_toad | Harsh7760 | 2025-05-01T04:28:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am prickly running toad",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T22:07:51Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-prickly_running_toad
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am prickly running toad
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-prickly_running_toad
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Harsh7760/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-prickly_running_toad", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
okoto56981/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_pesty_komodo | okoto56981 | 2025-05-01T04:28:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pale pesty komodo",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T01:47:26Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_pesty_komodo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pale pesty komodo
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_pesty_komodo
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="okoto56981/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_pesty_komodo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
vmpsergio/3a74ec34-2324-4cf1-8030-ef4f86298f32 | vmpsergio | 2025-05-01T04:25:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T04:14:00Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3a74ec34-2324-4cf1-8030-ef4f86298f32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 1fe9e6ba7241f0fb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1fe9e6ba7241f0fb_train_data.json
type:
field_instruction: topic
field_output: argument
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/3a74ec34-2324-4cf1-8030-ef4f86298f32
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/1fe9e6ba7241f0fb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f5778bba-aadc-469f-ac54-4a9b43a3cd91
wandb_project: s56-2
wandb_run: your_name
wandb_runid: f5778bba-aadc-469f-ac54-4a9b43a3cd91
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 3a74ec34-2324-4cf1-8030-ef4f86298f32
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the dataset specified in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 2.5224
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4926 | 0.0428 | 200 | 2.5224 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
stokemctoke/Tony-Blair_v01_F1D | stokemctoke | 2025-05-01T04:22:22Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-01T04:18:11Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: 70NY8L41R a man playing chess at the park, bomb going off in the background
output:
url: samples/1746073053339__000004600_0.jpg
- text: 70NY8L41R a man holding a coffee cup, in a beanie, sitting at a cafe
output:
url: samples/1746073069320__000004600_1.jpg
- text: 70NY8L41R a man holding a sign that says, 'Stoke LoRA'
output:
url: samples/1746073085306__000004600_2.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: 70NY8L41R
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Tony-Blair_v01_F1D
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `70NY8L41R` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/stokemctoke/Tony-Blair_v01_F1D/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('stokemctoke/Tony-Blair_v01_F1D', weight_name='Tony-Blair_v01_F1D.safetensors')
image = pipeline('70NY8L41R a man playing chess at the park, bomb going off in the background').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
CohenQu/Qwen2.5-14B-Instruct_HintGenerator.08.04 | CohenQu | 2025-05-01T04:21:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:CohenQu/HintGenerator.08.04",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T01:00:55Z | ---
base_model: Qwen/Qwen2.5-14B-Instruct
datasets: CohenQu/HintGenerator.08.04
library_name: transformers
model_name: Qwen2.5-14B-Instruct_HintGenerator.08.04
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-14B-Instruct_HintGenerator.08.04
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on the [CohenQu/HintGenerator.08.04](https://huggingface.co/datasets/CohenQu/HintGenerator.08.04) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CohenQu/Qwen2.5-14B-Instruct_HintGenerator.08.04", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuxiao98/hint-generator/runs/mvkjqd9g)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.50.2
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
wlgh7407/CAS4133_Assignment1 | wlgh7407 | 2025-05-01T04:18:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T04:13:37Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Irfan12jp/Irfandady | Irfan12jp | 2025-05-01T04:16:42Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T04:16:42Z | ---
license: apache-2.0
---
|
mradermacher/urdu_tts-GGUF | mradermacher | 2025-05-01T04:12:56Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"en",
"base_model:AhmadIshaqai/urdu_tts",
"base_model:quantized:AhmadIshaqai/urdu_tts",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T04:05:28Z | ---
base_model: AhmadIshaqai/urdu_tts
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AhmadIshaqai/urdu_tts
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/urdu_tts-GGUF/resolve/main/urdu_tts.Q3_K_S.gguf) | Q3_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/urdu_tts-GGUF/resolve/main/urdu_tts.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/urdu_tts-GGUF/resolve/main/urdu_tts.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/urdu_tts-GGUF/resolve/main/urdu_tts.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/urdu_tts-GGUF/resolve/main/urdu_tts.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/urdu_tts-GGUF/resolve/main/urdu_tts.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/urdu_tts-GGUF/resolve/main/urdu_tts.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/urdu_tts-GGUF/resolve/main/urdu_tts.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/urdu_tts-GGUF/resolve/main/urdu_tts.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/urdu_tts-GGUF/resolve/main/urdu_tts.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/urdu_tts-GGUF/resolve/main/urdu_tts.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/urdu_tts-GGUF/resolve/main/urdu_tts.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Jebataleye/Rap-LupeOnly1 | Jebataleye | 2025-05-01T04:12:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T04:11:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gachigachi/ccsakura | gachigachi | 2025-05-01T04:09:36Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Liberata/illustrious-xl-v1.0",
"base_model:adapter:Liberata/illustrious-xl-v1.0",
"region:us"
] | text-to-image | 2025-05-01T04:09:14Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: outdoor, trees, grass, cityscape, floating leaves, flowers,
output:
url: images/20250501092735_[waiNSFWIllustrious_v130]_(856x1248).png
base_model: Liberata/illustrious-xl-v1.0
instance_prompt: null
---
# kinomotosakura
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/gachigachi/ccsakura/tree/main) them in the Files & versions tab.
|
raak-16/hinglish_model-ai | raak-16 | 2025-05-01T04:07:58Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T16:07:49Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** raak-16
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MHassan008/my_awesome_wnut_model | MHassan008 | 2025-05-01T04:01:06Z | 0 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-30T15:08:09Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: MHassan008/my_awesome_wnut_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MHassan008/my_awesome_wnut_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1023
- Validation Loss: 0.2493
- Train Precision: 0.6623
- Train Recall: 0.4833
- Train F1: 0.5588
- Train Accuracy: 0.9497
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1491 | 0.2596 | 0.6851 | 0.4139 | 0.5160 | 0.9459 | 0 |
| 0.1111 | 0.2493 | 0.6623 | 0.4833 | 0.5588 | 0.9497 | 1 |
| 0.1023 | 0.2493 | 0.6623 | 0.4833 | 0.5588 | 0.9497 | 2 |
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.18.0
- Datasets 3.5.1
- Tokenizers 0.21.1
|
eyinlojuoluwa/distilbert-base-uncased-not-usable | eyinlojuoluwa | 2025-05-01T04:01:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-30T20:54:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leodonkikonki/cbnjc | leodonkikonki | 2025-05-01T03:56:17Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-01T03:56:12Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: cbnjc
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# cbnjc
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `cbnjc` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
seoyeong903/react_deepseek_1.5B | seoyeong903 | 2025-05-01T03:51:42Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T04:51:28Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jess-kevin-23/CtrlMindAI | jess-kevin-23 | 2025-05-01T03:50:08Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-01T03:50:08Z | ---
license: creativeml-openrail-m
---
|
sbintuitions/modernbert-ja-30m | sbintuitions | 2025-05-01T03:42:55Z | 1,262 | 5 | transformers | [
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"ja",
"en",
"arxiv:2412.13663",
"arxiv:2104.09864",
"arxiv:2404.10830",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-02-19T10:27:20Z | ---
language:
- ja
- en
license: mit
pipeline_tag: fill-mask
library_name: transformers
---
# ModernBERT-Ja-30M
This repository provides Japanese ModernBERT trained by [SB Intuitions](https://www.sbintuitions.co.jp/).
[ModernBERT](https://arxiv.org/abs/2412.13663) is a new variant of the BERT model that combines local and global attention, allowing it to handle long sequences while maintaining high computational efficiency.
It also incorporates modern architectural improvements, such as [RoPE](https://arxiv.org/abs/2104.09864).
Our ModernBERT-Ja-30M is trained on a high-quality corpus of Japanese and English text comprising **4.39T tokens**, featuring a vocabulary size of 102,400 and a sequence length of **8,192** tokens.
## How to Use
You can use our models directly with the transformers library v4.48.0 or higher:
```bash
pip install -U "transformers>=4.48.0"
```
Additionally, if your GPUs support Flash Attention 2, we recommend using our models with Flash Attention 2.
```bash
pip install flash-attn --no-build-isolation
```
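
Once flash-attn is installed, you can opt in to Flash Attention 2 when loading the model by passing `attn_implementation="flash_attention_2"` to `from_pretrained`. This is a minimal sketch assuming a CUDA GPU that supports Flash Attention 2; the example below loads the model without this flag.

```python
import torch
from transformers import AutoModelForMaskedLM

# Minimal sketch: load the model with Flash Attention 2 enabled
# (assumes flash-attn is installed and the GPU supports it).
model = AutoModelForMaskedLM.from_pretrained(
    "sbintuitions/modernbert-ja-30m",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
).to("cuda")
```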
### Example Usage
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline
model = AutoModelForMaskedLM.from_pretrained("sbintuitions/modernbert-ja-30m", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/modernbert-ja-30m")
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
results = fill_mask("おはようございます、今日の天気は<mask>です。")
for result in results:
print(result)
# {'score': 0.259765625, 'token': 16416, 'token_str': '晴れ', 'sequence': 'おはようございます、今日の天気は晴れです。'}
# {'score': 0.1669921875, 'token': 28933, 'token_str': '曇り', 'sequence': 'おはようございます、今日の天気は曇りです。'}
# {'score': 0.12255859375, 'token': 52525, 'token_str': '快晴', 'sequence': 'おはようございます、今日の天気は快晴です。'}
# {'score': 0.044921875, 'token': 92339, 'token_str': 'くもり', 'sequence': 'おはようございます、今日の天気はくもりです。'}
# {'score': 0.025634765625, 'token': 2988, 'token_str': '雨', 'sequence': 'おはようございます、今日の天気は雨です。'}
```
## Model Series
We provide ModernBERT-Ja in several model sizes. Below is a summary of each model.
|ID| #Param. | #Param.<br>w/o Emb.|Dim.|Inter. Dim.|#Layers|
|-|-|-|-|-|-|
|[**sbintuitions/modernbert-ja-30m**](https://huggingface.co/sbintuitions/modernbert-ja-30m)|37M|10M|256|1024|10|
|[sbintuitions/modernbert-ja-70m](https://huggingface.co/sbintuitions/modernbert-ja-70m)|70M|31M|384|1536|13|
|[sbintuitions/modernbert-ja-130m](https://huggingface.co/sbintuitions/modernbert-ja-130m)|132M|80M|512|2048|19|
|[sbintuitions/modernbert-ja-310m](https://huggingface.co/sbintuitions/modernbert-ja-310m)|315M|236M|768|3072|25|
For all models,
the vocabulary size is 102,400,
the head dimension is 64,
and the activation function is GELU.
The configuration for global attention and sliding window attention consists of 1 layer + 2 layers (global–local–local).
The sliding window attention window context size is 128, with global_rope_theta set to 160,000 and local_rope_theta set to 10,000.
## Model Description
We constructed the ModernBERT-Ja-30M model through a three-stage training process, which follows the original [ModernBERT](https://huggingface.co/answerdotai/ModernBERT-base).
First, we performed pre-training using a large corpus.
Next, we conducted two phases of context length extension.
1. **Pre-training**
- Training with **3.51T tokens**, including Japanese and English data extracted from web corpora.
- The sequence length is 1,024 with naive sequence packing.
- Masking rate is **30%** (with 80-10-10 rule).
2. **Context Extension (CE): Phase 1**
- Training with **430B tokens**, comprising high-quality Japanese and English data.
- The sequence length is **8,192** with [best-fit packing](https://arxiv.org/abs/2404.10830).
- Masking rate is **30%** (with 80-10-10 rule).
3. **Context Extension (CE): Phase 2**
- Training with **450B tokens**, including 150B tokens of high-quality Japanese data, over 3 epochs.
- The sequence length is **8,192** without sequence packing.
- Masking rate is **15%** (with 80-10-10 rule).
The key differences from the original ModernBERT are:
1. It is pre-trained on Japanese and English corpora, leading to a total of approximately 4.39T training tokens.
2. We observed that decreasing the mask rate in Context Extension Phase 2 from 30% to 15% improved the model's performance.
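
As a side note for readers unfamiliar with the 80-10-10 rule referenced in the masking rates above: of the token positions selected for prediction, 80% are replaced with the mask token, 10% with a random token, and 10% are left unchanged. The sketch below only illustrates that rule (following the common `DataCollatorForLanguageModeling`-style logic); it is not the actual ModernBERT-Ja training code.

```python
import torch

def apply_80_10_10(input_ids: torch.Tensor, mask_token_id: int, vocab_size: int,
                   mlm_probability: float = 0.30):
    """Illustrative MLM corruption with the 80-10-10 rule (not the actual training code)."""
    labels = input_ids.clone()
    # Select positions to predict with probability `mlm_probability`.
    selected = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~selected] = -100  # unselected positions are ignored by the loss

    # 80% of the selected positions are replaced with the mask token.
    masked = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & selected
    input_ids[masked] = mask_token_id

    # Half of the remaining selected positions (10% overall) get a random token ...
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & selected & ~masked
    input_ids[randomized] = torch.randint(vocab_size, labels.shape, dtype=torch.long)[randomized]

    # ... and the rest (10% overall) keep their original token.
    return input_ids, labels
```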
### Tokenization and Vocabulary
We use the tokenizer and vocabulary from [sbintuitions/sarashina2-13b](https://huggingface.co/collections/sbintuitions/sarashina-6680c6d6ab37b94428ca83fb).
Specifically, we employ a [SentencePiece](https://github.com/google/sentencepiece) tokenizer with a unigram language model and byte fallback.
We do not apply pre-tokenization using a Japanese tokenizer.
Therefore, users can directly input raw sentences into the tokenizer without any additional preprocessing.
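
For example (a minimal sketch using the same tokenizer as in the usage example above), raw Japanese text can be fed to the tokenizer as-is:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sbintuitions/modernbert-ja-30m")

# Raw text goes straight in; no morphological pre-tokenization (e.g. MeCab) is needed.
text = "おはようございます、今日の天気は晴れです。"
encoded = tokenizer(text)
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```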
### Intended Uses and Limitations
You can use this model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is not designed for text generation.
When you want to generate a text, please use a text generation model such as [Sarashina](https://huggingface.co/collections/sbintuitions/sarashina-6680c6d6ab37b94428ca83fb).
Since the unigram language model is used as a tokenizer, the token boundaries often do not align with the morpheme boundaries, resulting in poor performance in token classification tasks such as named entity recognition and span extraction.
## Evaluation
We evaluated our model on 12 datasets, including JGLUE, across various tasks:
- Knowledge-based tasks: [JCommonsenseQA (JComQA)](https://github.com/yahoojapan/JGLUE), [RCQA](https://www.cl.ecei.tohoku.ac.jp/rcqa/)
- Japanese linguistic acceptability classification: [JCoLA](https://github.com/osekilab/JCoLA)
- Natural Language Inference (NLI) tasks: [JNLI](https://github.com/yahoojapan/JGLUE), [JSICK](https://github.com/verypluming/JSICK), [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88), [Kyoto University RTE (KU RTE)](https://nlp.ist.i.kyoto-u.ac.jp/index.php?Textual+Entailment+%E8%A9%95%E4%BE%A1%E3%83%87%E3%83%BC%E3%82%BF)
- Semantic Textual Similarity (STS) task: [JSTS](https://github.com/yahoojapan/JGLUE)
- Various classification tasks: [Livedoor news corpus (Livedoor)](https://www.rondhuit.com/download.html), [LLM-jp Toxicity (Toxicity)](https://llm-jp.nii.ac.jp/llm/2024/08/07/llm-jp-toxicity-dataset.html), [MARC-ja](https://github.com/yahoojapan/JGLUE), [WRIME v2 (WRIME)](https://github.com/ids-cv/wrime)
These tasks are short-sequence evaluation tasks, and we aligned our settings with those of existing models.
While the maximum sequence length varies across tasks, it does not exceed 512.
We set the sequence length and other experimental configurations per task, ensuring that the settings remain consistent across models.
For hyperparameters, we explored the following ranges:
- Learning rate: `{5e-6, 1e-5, 2e-5, 3e-5, 5e-5, 1e-4}`
- Number of epochs:
- Tasks with a large number of instances: `{1, 2}`
- Tasks with fewer instances: `{3, 5, 10}`
In the experiments, we loaded several Japanese models that are publicly available on HuggingFace using `AutoModel` and constructed classification models by appending a classification head consisting of a linear layer, a GELU activation function, and another linear layer.
This was done because HuggingFace's `AutoModelForSequenceClassification` comes with different implementations for each model, and using them directly would result in classification heads that differ from one model to another.
For the embeddings fed into the classification layer, we used the embedding of the special token at the beginning of the sentence.
That is, `[CLS]` in BERT and `<s>` in RoBERTa.
Note that our model does not perform the next sentence prediction (NSP) task during pretraining, so `<s>` is added at the beginning of the sentence, not `<cls>`.
Therefore, we used the `<s>` token for classification.
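
A minimal sketch of that classification setup is shown below. It assumes, for illustration only, that the intermediate width of the head equals the encoder's hidden size; treat it as a sketch of the described architecture rather than the exact evaluation code.

```python
import torch.nn as nn
from transformers import AutoModel

class ClassificationModel(nn.Module):
    """Encoder + (Linear -> GELU -> Linear) head on the first special token's embedding."""

    def __init__(self, model_name: str, num_labels: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden_size = self.encoder.config.hidden_size
        # Assumption for illustration: the intermediate width equals the hidden size.
        self.head = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.GELU(),
            nn.Linear(hidden_size, num_labels),
        )

    def forward(self, input_ids, attention_mask):
        hidden_states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Embedding of the first token: <s> for this model, [CLS] for BERT-style models.
        return self.head(hidden_states[:, 0])
```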
We conducted evaluations using 5-fold cross-validation.
That is, we trained the model on the `train` set and evaluated it on the `validation` set.
After determining the optimal hyperparameters (learning rate, epochs) based on the average performance on the `validation` sets, we report the average performance on the `test` sets with the hyperparameters.
For datasets without predefined splits, we first set aside 10% of the data as the test set and then performed 5-fold cross-validation on the remaining data.
For datasets such as some tasks in **JGLUE**, where only `train` and `validation` sets are publicly available,
we treated the `validation` set as the `test` set and performed 5-fold cross-validation on the remaining data.
For datasets with predefined `train`, `validation`, and `test` sets, we simply trained and evaluated the model five times with different random seeds and used the model with the best average evaluation score on the `validation` set to measure the final score on the `test` set.
### Evaluation Results
| Model | #Param. | #Param.<br>w/o Emb. | **Avg.** | [JComQA](https://github.com/yahoojapan/JGLUE)<br>(Acc.) | [RCQA](https://www.cl.ecei.tohoku.ac.jp/rcqa/)<br>(Acc.) | [JCoLA](https://github.com/osekilab/JCoLA)<br>(Acc.) | [JNLI](https://github.com/yahoojapan/JGLUE)<br>(Acc.) | [JSICK](https://github.com/verypluming/JSICK)<br>(Acc.) | [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)<br>(Acc.) | [KU RTE](https://nlp.ist.i.kyoto-u.ac.jp/index.php?Textual+Entailment+%E8%A9%95%E4%BE%A1%E3%83%87%E3%83%BC%E3%82%BF)<br>(Acc.) | [JSTS](https://github.com/yahoojapan/JGLUE)<br>(Spearman's ρ) | [Livedoor](https://www.rondhuit.com/download.html)<br>(Acc.) | [Toxicity](https://llm-jp.nii.ac.jp/llm/2024/08/07/llm-jp-toxicity-dataset.html)<br>(Acc.) | [MARC-ja](https://github.com/yahoojapan/JGLUE)<br>(Acc.) | [WRIME](https://github.com/ids-cv/wrime)<br>(Acc.) |
| ------ | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| [**ModernBERT-Ja-30M**](https://huggingface.co/sbintuitions/modernbert-ja-30m)<br>(this model) | 37M | 10M | <u>85.67</u> | 80.95 | 82.35 | 78.85 | 88.69 | 84.39 | 91.79 | 61.13 | 85.94 | 97.20 | 89.33 | 95.87 | 91.61 |
| [ModernBERT-Ja-70M](https://huggingface.co/sbintuitions/modernbert-ja-70m) | 70M | 31M | 86.77 | 85.65 | 83.51 | 80.26 | 90.33 | 85.01 | 92.73 | 60.08 | 87.59 | 96.34 | 91.01 | 96.13 | 92.59 |
| [ModernBERT-Ja-130M](https://huggingface.co/sbintuitions/modernbert-ja-130m) | 132M | 80M | 88.95 | 91.01 | 85.28 | 84.18 | 92.03 | 86.61 | 94.01 | 65.56 | 89.20 | 97.42 | 91.57 | 96.48 | 93.99 |
| [ModernBERT-Ja-310M](https://huggingface.co/sbintuitions/modernbert-ja-310m) | 315M | 236M | 89.83 | 93.53 | 86.18 | 84.81 | 92.93 | 86.87 | 94.48 | 68.79 | 90.53 | 96.99 | 91.24 | 96.39 | 95.23 |
| | | | | | | | | | | | | | | | |
| [LINE DistillBERT](https://huggingface.co/line-corporation/line-distilbert-base-japanese)| 68M | 43M | 85.32 | 76.39 | 82.17 | 81.04 | 87.49 | 83.66 | 91.42 | 60.24 | 84.57 | 97.26 | 91.46 | 95.91 | 92.16 |
| [Tohoku BERT-base v3](https://huggingface.co/tohoku-nlp/bert-base-japanese-v3)| 111M | 86M | 86.74 | 82.82 | 83.65 | 81.50 | 89.68 | 84.96 | 92.32 | 60.56 | 87.31 | 96.91 | 93.15 | 96.13 | 91.91 |
| [LUKE-japanese-base-lite](https://huggingface.co/studio-ousia/luke-japanese-base-lite)| 133M | 107M | 87.15 | 82.95 | 83.53 | 82.39 | 90.36 | 85.26 | 92.78 | 60.89 | 86.68 | 97.12 | 93.48 | 96.30 | 94.05 |
| [Kyoto DeBERTa-v3](https://huggingface.co/ku-nlp/deberta-v3-base-japanese)| 160M | 86M | 88.31 | 87.44 | 84.90 | 84.35 | 91.91 | 86.22 | 93.41 | 63.31 | 88.51 | 97.10 | 92.58 | 96.32 | 93.64 |
| | | | | | | | | | | | | | | | |
| [KoichiYasuoka/modernbert-base-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/modernbert-base-japanese-wikipedia)| 160M | 110M | 82.41 | 62.59 | 81.19 | 76.80 | 84.11 | 82.01 | 90.51 | 60.48 | 81.74 | 97.10 | 90.34 | 94.85 | 87.25 |
| [llm-jp/llm-jp-modernbert-base](https://huggingface.co/llm-jp/llm-jp-modernbert-base)| 187M | 110M | 86.75 | 84.29 | 83.99 | 78.00 | 90.28 | 83.76 | 93.40 | 60.32 | 87.71 | 96.64 | 92.13 | 96.33 | 94.09 |
| | | | | | | | | | | | | | | | |
| [Tohoku BERT-large char v2](https://huggingface.co/cl-tohoku/bert-large-japanese-char-v2)| 311M | 303M | 87.23 | 85.08 | 84.20 | 81.79 | 90.55 | 85.25 | 92.63 | 61.29 | 87.64 | 96.55 | 93.26 | 96.25 | 92.29 |
| [Tohoku BERT-large v2](https://huggingface.co/tohoku-nlp/bert-large-japanese-v2)| 337M | 303M | 88.36 | 86.93 | 84.81 | 82.89 | 92.05 | 85.33 | 93.32 | 64.60 | 89.11 | 97.64 | 94.38 | 96.46 | 92.77 |
| [Waseda RoBERTa-large (Seq. 512)](https://huggingface.co/nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp)| 337M | 303M | 88.37 | 88.81 | 84.50 | 82.34 | 91.37 | 85.49 | 93.97 | 61.53 | 88.95 | 96.99 | 95.06 | 96.38 | 95.09 |
| [Waseda RoBERTa-large (Seq. 128)](https://huggingface.co/nlp-waseda/roberta-large-japanese-with-auto-jumanpp)| 337M | 303M | 88.36 | 89.35 | 83.63 | 84.26 | 91.53 | 85.30 | 94.05 | 62.82 | 88.67 | 95.82 | 93.60 | 96.05 | 95.23 |
| [LUKE-japanese-large-lite](https://huggingface.co/studio-ousia/luke-japanese-large-lite)| 414M | 379M | 88.94 | 88.01 | 84.84 | 84.34 | 92.37 | 86.14 | 94.32 | 64.68 | 89.30 | 97.53 | 93.71 | 96.49 | 95.59 |
| [RetrievaBERT](https://huggingface.co/retrieva-jp/bert-1.3b)| 1.30B | 1.15B | 86.79 | 80.55 | 84.35 | 80.67 | 89.86 | 85.24 | 93.46 | 60.48 | 87.30 | 97.04 | 92.70 | 96.18 | 93.61 |
| | | | | | | | | | | | | | | | |
| [hotchpotch/mMiniLMv2-L6-H384](https://huggingface.co/hotchpotch/mMiniLMv2-L6-H384)| 107M | 11M | 81.53 | 60.34 | 82.83 | 78.61 | 86.24 | 77.94 | 87.32 | 60.48 | 80.48 | 95.55 | 86.40 | 94.97 | 87.20 |
| [hotchpotch/mMiniLMv2-L12-H384](https://huggingface.co/hotchpotch/mMiniLMv2-L12-H384)| 118M | 21M | 82.59 | 62.70 | 83.77 | 78.61 | 87.69 | 79.58 | 87.65 | 60.48 | 81.55 | 95.88 | 90.00 | 94.89 | 88.28 |
| [mBERT](https://huggingface.co/google-bert/bert-base-multilingual-cased)| 178M | 86M | 83.48 | 66.08 | 82.76 | 77.32 | 88.15 | 84.20 | 91.25 | 60.56 | 84.18 | 97.01 | 89.21 | 95.05 | 85.99 |
| [XLM-RoBERTa-base](https://huggingface.co/FacebookAI/xlm-roberta-base)| 278M | 86M | 84.36 | 69.44 | 82.86 | 78.71 | 88.14 | 83.17 | 91.27 | 60.48 | 83.34 | 95.93 | 91.91 | 95.82 | 91.20 |
| [XLM-RoBERTa-large](https://huggingface.co/FacebookAI/xlm-roberta-large)| 560M | 303M | 86.95 | 80.07 | 84.47 | 80.42 | 92.16 | 84.74 | 93.87 | 60.48 | 88.03 | 97.01 | 93.37 | 96.03 | 92.72 |
The evaluation results are shown in the table.
`#Param.` represents the number of parameters in both the input embedding layer and the Transformer layers, while `#Param. w/o Emb.` indicates the number of parameters in the Transformer layers only.
Despite being a long-context model capable of processing sequences of up to 8,192 tokens, our ModernBERT-Ja-30M also exhibited strong performance in short-sequence evaluations.
## Ethical Considerations
ModernBERT-Ja-30M may produce representations that reflect biases.
When you use this model for masked language modeling, it may generate biased or harmful expressions.
## License
[MIT License](https://huggingface.co/sbintuitions/modernbert-ja-30m/blob/main/LICENSE)
## Citation
```bibtex
@misc{
modernbert-ja,
author = {Tsukagoshi, Hayato and Li, Shengzhe and Fukuchi, Akihiko and Shibata, Tomohide},
title = {{ModernBERT-Ja}},
howpublished = {\url{https://huggingface.co/collections/sbintuitions/modernbert-ja-67b68fe891132877cf67aa0a}},
url = {https://huggingface.co/collections/sbintuitions/modernbert-ja-67b68fe891132877cf67aa0a},
year = {2025},
}
```
|
sbintuitions/modernbert-ja-70m | sbintuitions | 2025-05-01T03:42:41Z | 431 | 5 | transformers | [
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"ja",
"en",
"arxiv:2412.13663",
"arxiv:2104.09864",
"arxiv:2404.10830",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-02-19T10:26:31Z | ---
language:
- ja
- en
license: mit
pipeline_tag: fill-mask
library_name: transformers
---
# ModernBERT-Ja-70M
This repository provides Japanese ModernBERT trained by [SB Intuitions](https://www.sbintuitions.co.jp/).
[ModernBERT](https://arxiv.org/abs/2412.13663) is a new variant of the BERT model that combines local and global attention, allowing it to handle long sequences while maintaining high computational efficiency.
It also incorporates modern architectural improvements, such as [RoPE](https://arxiv.org/abs/2104.09864).
Our ModernBERT-Ja-70M is trained on a high-quality corpus of Japanese and English text comprising **4.39T tokens**, featuring a vocabulary size of 102,400 and a sequence length of **8,192** tokens.
## How to Use
You can use our models directly with the transformers library v4.48.0 or higher:
```bash
pip install -U "transformers>=4.48.0"
```
Additionally, if your GPUs support Flash Attention 2, we recommend using our models with Flash Attention 2.
```bash
pip install flash-attn --no-build-isolation
```
### Example Usage
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline
model = AutoModelForMaskedLM.from_pretrained("sbintuitions/modernbert-ja-70m", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/modernbert-ja-70m")
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
results = fill_mask("おはようございます、今日の天気は<mask>です。")
for result in results:
print(result)
# {'score': 0.40625, 'token': 16416, 'token_str': '晴れ', 'sequence': 'おはようございます、今日の天気は晴れです。'}
# {'score': 0.2041015625, 'token': 28933, 'token_str': '曇り', 'sequence': 'おはようございます、今日の天気は曇りです。'}
# {'score': 0.080078125, 'token': 2988, 'token_str': '雨', 'sequence': 'おはようございます、今日の天気は雨です。'}
# {'score': 0.07080078125, 'token': 52525, 'token_str': '快晴', 'sequence': 'おはようございます、今日の天気は快晴です。'}
# {'score': 0.037841796875, 'token': 92339, 'token_str': 'くもり', 'sequence': 'おはようございます、今日の天気はくもりです。'}
```
## Model Series
We provide ModernBERT-Ja in several model sizes. Below is a summary of each model.
|ID| #Param. | #Param.<br>w/o Emb.|Dim.|Inter. Dim.|#Layers|
|-|-|-|-|-|-|
|[sbintuitions/modernbert-ja-30m](https://huggingface.co/sbintuitions/modernbert-ja-30m)|37M|10M|256|1024|10|
|[**sbintuitions/modernbert-ja-70m**](https://huggingface.co/sbintuitions/modernbert-ja-70m)|70M|31M|384|1536|13|
|[sbintuitions/modernbert-ja-130m](https://huggingface.co/sbintuitions/modernbert-ja-130m)|132M|80M|512|2048|19|
|[sbintuitions/modernbert-ja-310m](https://huggingface.co/sbintuitions/modernbert-ja-310m)|315M|236M|768|3072|25|
For all models,
the vocabulary size is 102,400,
the head dimension is 64,
and the activation function is GELU.
The configuration alternates one global attention layer with two sliding window (local) attention layers (global–local–local).
The sliding window attention window context size is 128, with global_rope_theta set to 160,000 and local_rope_theta set to 10,000.
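These architectural settings can be read back from the released configuration. The snippet below is a small sanity check; the attribute names follow the Hugging Face ModernBERT implementation and should be treated as assumptions here.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("sbintuitions/modernbert-ja-70m")

# Attribute names assumed from the Hugging Face ModernBERT implementation.
print(config.hidden_size)         # dimension (e.g., 384 for the 70M model)
print(config.intermediate_size)   # intermediate dimension (e.g., 1536)
print(config.num_hidden_layers)   # number of layers (e.g., 13)
print(config.vocab_size)          # 102400
print(config.global_rope_theta)   # 160,000
print(config.local_rope_theta)    # 10,000
print(config.local_attention)     # sliding window attention context size
```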
## Model Description
We constructed the ModernBERT-Ja-70M model through a three-stage training process, which follows the original [ModernBERT](https://huggingface.co/answerdotai/ModernBERT-base).
First, we performed pre-training using a large corpus.
Next, we conducted two phases of context length extension.
1. **Pre-training**
- Training with **3.51T tokens**, including Japanese and English data extracted from web corpora.
- The sequence length is 1,024 with naive sequence packing.
- Masking rate is **30%** (with 80-10-10 rule).
2. **Context Extension (CE): Phase 1**
- Training with **430B tokens**, comprising high-quality Japanese and English data.
- The sequence length is **8,192** with [best-fit packing](https://arxiv.org/abs/2404.10830).
- Masking rate is **30%** (with 80-10-10 rule).
3. **Context Extension (CE): Phase 2**
- Training with **450B tokens**, including 150B tokens of high-quality Japanese data, over 3 epochs.
- The sequence length is **8,192** without sequence packing.
- Masking rate is **15%** (with 80-10-10 rule).
The key differences from the original ModernBERT are:
1. It is pre-trained on Japanese and English corpora, leading to a total of approximately 4.39T training tokens.
2. We observed that decreasing the mask rate in Context Extension Phase 2 from 30% to 15% improved the model's performance.
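For readers who want to reproduce the masking behaviour described above, the default Hugging Face MLM collator already implements the 80-10-10 rule; a minimal sketch (not our pre-training pipeline) is shown below, with the masking rate as the only knob.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("sbintuitions/modernbert-ja-70m")

# 30% masking (pre-training / CE Phase 1) vs. 15% masking (CE Phase 2).
# DataCollatorForLanguageModeling applies the 80-10-10 rule by default:
# 80% of selected positions become <mask>, 10% a random token, 10% stay unchanged.
collator_30 = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.30)
collator_15 = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

features = [tokenizer("おはようございます、今日の天気は晴れです。")]
batch = collator_30(features)
print(batch["input_ids"][0])
print(batch["labels"][0])  # -100 everywhere except the selected positions
```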
### Tokenization and Vocabulary
We use the tokenizer and vocabulary from [sbintuitions/sarashina2-13b](https://huggingface.co/collections/sbintuitions/sarashina-6680c6d6ab37b94428ca83fb).
Specifically, we employ a [SentencePiece](https://github.com/google/sentencepiece) tokenizer with a unigram language model and byte fallback.
We do not apply pre-tokenization using a Japanese tokenizer.
Therefore, users can directly input raw sentences into the tokenizer without any additional preprocessing.
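For example, raw Japanese text can be fed to the tokenizer as-is; the snippet below is only illustrative, and the resulting token split depends on the learned vocabulary.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sbintuitions/modernbert-ja-70m")

# No Japanese pre-tokenization (e.g., MeCab or Juman++) is required.
encoded = tokenizer("吾輩は猫である。名前はまだ無い。")
print(encoded["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```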
### Intended Uses and Limitations
You can use this model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is not designed for text generation.
When you want to generate a text, please use a text generation model such as [Sarashina](https://huggingface.co/collections/sbintuitions/sarashina-6680c6d6ab37b94428ca83fb).
Since the unigram language model is used as a tokenizer, the token boundaries often do not align with the morpheme boundaries, resulting in poor performance in token classification tasks such as named entity recognition and span extraction.
## Evaluation
We evaluated our model on 12 datasets, including JGLUE, across various tasks:
- Knowledge-based tasks: [JCommonsenseQA (JComQA)](https://github.com/yahoojapan/JGLUE), [RCQA](https://www.cl.ecei.tohoku.ac.jp/rcqa/)
- Japanese linguistic acceptability classification: [JCoLA](https://github.com/osekilab/JCoLA)
- Natural Language Inference (NLI) tasks: [JNLI](https://github.com/yahoojapan/JGLUE), [JSICK](https://github.com/verypluming/JSICK), [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88), [Kyoto University RTE (KU RTE)](https://nlp.ist.i.kyoto-u.ac.jp/index.php?Textual+Entailment+%E8%A9%95%E4%BE%A1%E3%83%87%E3%83%BC%E3%82%BF)
- Semantic Textual Similarity (STS) task: [JSTS](https://github.com/yahoojapan/JGLUE)
- Various classification tasks: [Livedoor news corpus (Livedoor)](https://www.rondhuit.com/download.html), [LLM-jp Toxicity (Toxicity)](https://llm-jp.nii.ac.jp/llm/2024/08/07/llm-jp-toxicity-dataset.html), [MARC-ja](https://github.com/yahoojapan/JGLUE), [WRIME v2 (WRIME)](https://github.com/ids-cv/wrime)
These tasks are short-sequence evaluation tasks, and we aligned our settings with those of existing models.
While the maximum sequence length varies across tasks, it does not exceed 512.
We set the sequence length and other experimental configurations per task, ensuring that the settings remain consistent across models.
For hyperparameters, we explored the following ranges:
- Learning rate: `{5e-6, 1e-5, 2e-5, 3e-5, 5e-5, 1e-4}`
- Number of epochs:
- Tasks with a large number of instances: `{1, 2}`
- Tasks with fewer instances: `{3, 5, 10}`
In the experiments, we loaded several Japanese models that are publicly available on HuggingFace using `AutoModel` and constructed classification models by appending a classification head consisting of a linear layer, a GELU activation function, and another linear layer.
This was done because HuggingFace's `AutoModelForSequenceClassification` comes with different implementations for each model, and using them directly would result in classification heads that differ from one model to another.
For the embeddings fed into the classification layer, we used the embedding of the special token at the beginning of the sentence.
That is, `[CLS]` in BERT and `<s>` in RoBERTa.
Note that our model does not perform the next sentence prediction (NSP) task during pretraining, so `<s>` is added at the beginning of the sentence, not `<cls>`.
Therefore, we used the `<s>` token for classification.
We conducted evaluations using 5-fold cross-validation.
That is, we trained the model on the `train` set and evaluated it on the `validation` set.
After determining the optimal hyperparameters (learning rate, epochs) based on the average performance on the `validation` sets, we report the average performance on the `test` sets with the hyperparameters.
For datasets without predefined splits, we first set aside 10% of the data as the test set and then performed 5-fold cross-validation on the remaining data.
For datasets such as some tasks in **JGLUE**, where only `train` and `validation` sets are publicly available,
we treated the `validation` set as the `test` set and performed 5-fold cross-validation on the remaining data.
For datasets with predefined `train`, `validation`, and `test` sets, we simply trained and evaluated the model five times with different random seeds and used the model with the best average evaluation score on the `validation` set to measure the final score on the `test` set.
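A condensed sketch of this selection protocol is given below; the training routine is stubbed out, and the grid values are the ones listed above.

```python
import itertools
import statistics

learning_rates = [5e-6, 1e-5, 2e-5, 3e-5, 5e-5, 1e-4]
epoch_candidates = [1, 2]  # use [3, 5, 10] for tasks with fewer instances

def train_and_score(fold: int, lr: float, epochs: int) -> float:
    """Placeholder: fine-tune on the fold's train split and return the validation score."""
    return 0.0  # replace with actual fine-tuning + evaluation

best_config, best_score = None, float("-inf")
for lr, epochs in itertools.product(learning_rates, epoch_candidates):
    fold_scores = [train_and_score(fold, lr, epochs) for fold in range(5)]
    mean_score = statistics.mean(fold_scores)
    if mean_score > best_score:
        best_config, best_score = (lr, epochs), mean_score

# With best_config fixed, the test-set scores of the five folds are averaged and reported.
print(best_config)
```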
### Evaluation Results
| Model | #Param. | #Param.<br>w/o Emb. | **Avg.** | [JComQA](https://github.com/yahoojapan/JGLUE)<br>(Acc.) | [RCQA](https://www.cl.ecei.tohoku.ac.jp/rcqa/)<br>(Acc.) | [JCoLA](https://github.com/osekilab/JCoLA)<br>(Acc.) | [JNLI](https://github.com/yahoojapan/JGLUE)<br>(Acc.) | [JSICK](https://github.com/verypluming/JSICK)<br>(Acc.) | [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)<br>(Acc.) | [KU RTE](https://nlp.ist.i.kyoto-u.ac.jp/index.php?Textual+Entailment+%E8%A9%95%E4%BE%A1%E3%83%87%E3%83%BC%E3%82%BF)<br>(Acc.) | [JSTS](https://github.com/yahoojapan/JGLUE)<br>(Spearman's ρ) | [Livedoor](https://www.rondhuit.com/download.html)<br>(Acc.) | [Toxicity](https://llm-jp.nii.ac.jp/llm/2024/08/07/llm-jp-toxicity-dataset.html)<br>(Acc.) | [MARC-ja](https://github.com/yahoojapan/JGLUE)<br>(Acc.) | [WRIME](https://github.com/ids-cv/wrime)<br>(Acc.) |
| ------ | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| [ModernBERT-Ja-30M](https://huggingface.co/sbintuitions/modernbert-ja-30m) | 37M | 10M | 85.67 | 80.95 | 82.35 | 78.85 | 88.69 | 84.39 | 91.79 | 61.13 | 85.94 | 97.20 | 89.33 | 95.87 | 91.61 |
| [**ModernBERT-Ja-70M**](https://huggingface.co/sbintuitions/modernbert-ja-70m)<br>(this model) | 70M | 31M | <u>86.77</u> | 85.65 | 83.51 | 80.26 | 90.33 | 85.01 | 92.73 | 60.08 | 87.59 | 96.34 | 91.01 | 96.13 | 92.59 |
| [ModernBERT-Ja-130M](https://huggingface.co/sbintuitions/modernbert-ja-130m) | 132M | 80M | 88.95 | 91.01 | 85.28 | 84.18 | 92.03 | 86.61 | 94.01 | 65.56 | 89.20 | 97.42 | 91.57 | 96.48 | 93.99 |
| [ModernBERT-Ja-310M](https://huggingface.co/sbintuitions/modernbert-ja-310m) | 315M | 236M | 89.83 | 93.53 | 86.18 | 84.81 | 92.93 | 86.87 | 94.48 | 68.79 | 90.53 | 96.99 | 91.24 | 96.39 | 95.23 |
| | | | | | | | | | | | | | | | |
| [LINE DistillBERT](https://huggingface.co/line-corporation/line-distilbert-base-japanese)| 68M | 43M | 85.32 | 76.39 | 82.17 | 81.04 | 87.49 | 83.66 | 91.42 | 60.24 | 84.57 | 97.26 | 91.46 | 95.91 | 92.16 |
| [Tohoku BERT-base v3](https://huggingface.co/tohoku-nlp/bert-base-japanese-v3)| 111M | 86M | 86.74 | 82.82 | 83.65 | 81.50 | 89.68 | 84.96 | 92.32 | 60.56 | 87.31 | 96.91 | 93.15 | 96.13 | 91.91 |
| [LUKE-japanese-base-lite](https://huggingface.co/studio-ousia/luke-japanese-base-lite)| 133M | 107M | 87.15 | 82.95 | 83.53 | 82.39 | 90.36 | 85.26 | 92.78 | 60.89 | 86.68 | 97.12 | 93.48 | 96.30 | 94.05 |
| [Kyoto DeBERTa-v3](https://huggingface.co/ku-nlp/deberta-v3-base-japanese)| 160M | 86M | 88.31 | 87.44 | 84.90 | 84.35 | 91.91 | 86.22 | 93.41 | 63.31 | 88.51 | 97.10 | 92.58 | 96.32 | 93.64 |
| | | | | | | | | | | | | | | | |
| [KoichiYasuoka/modernbert-base-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/modernbert-base-japanese-wikipedia)| 160M | 110M | 82.41 | 62.59 | 81.19 | 76.80 | 84.11 | 82.01 | 90.51 | 60.48 | 81.74 | 97.10 | 90.34 | 94.85 | 87.25 |
| [llm-jp/llm-jp-modernbert-base](https://huggingface.co/llm-jp/llm-jp-modernbert-base)| 187M | 110M | 86.75 | 84.29 | 83.99 | 78.00 | 90.28 | 83.76 | 93.40 | 60.32 | 87.71 | 96.64 | 92.13 | 96.33 | 94.09 |
| | | | | | | | | | | | | | | | |
| [Tohoku BERT-large char v2](https://huggingface.co/cl-tohoku/bert-large-japanese-char-v2)| 311M | 303M | 87.23 | 85.08 | 84.20 | 81.79 | 90.55 | 85.25 | 92.63 | 61.29 | 87.64 | 96.55 | 93.26 | 96.25 | 92.29 |
| [Tohoku BERT-large v2](https://huggingface.co/tohoku-nlp/bert-large-japanese-v2)| 337M | 303M | 88.36 | 86.93 | 84.81 | 82.89 | 92.05 | 85.33 | 93.32 | 64.60 | 89.11 | 97.64 | 94.38 | 96.46 | 92.77 |
| [Waseda RoBERTa-large (Seq. 512)](https://huggingface.co/nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp)| 337M | 303M | 88.37 | 88.81 | 84.50 | 82.34 | 91.37 | 85.49 | 93.97 | 61.53 | 88.95 | 96.99 | 95.06 | 96.38 | 95.09 |
| [Waseda RoBERTa-large (Seq. 128)](https://huggingface.co/nlp-waseda/roberta-large-japanese-with-auto-jumanpp)| 337M | 303M | 88.36 | 89.35 | 83.63 | 84.26 | 91.53 | 85.30 | 94.05 | 62.82 | 88.67 | 95.82 | 93.60 | 96.05 | 95.23 |
| [LUKE-japanese-large-lite](https://huggingface.co/studio-ousia/luke-japanese-large-lite)| 414M | 379M | 88.94 | 88.01 | 84.84 | 84.34 | 92.37 | 86.14 | 94.32 | 64.68 | 89.30 | 97.53 | 93.71 | 96.49 | 95.59 |
| [RetrievaBERT](https://huggingface.co/retrieva-jp/bert-1.3b)| 1.30B | 1.15B | 86.79 | 80.55 | 84.35 | 80.67 | 89.86 | 85.24 | 93.46 | 60.48 | 87.30 | 97.04 | 92.70 | 96.18 | 93.61 |
| | | | | | | | | | | | | | | | |
| [hotchpotch/mMiniLMv2-L6-H384](https://huggingface.co/hotchpotch/mMiniLMv2-L6-H384)| 107M | 11M | 81.53 | 60.34 | 82.83 | 78.61 | 86.24 | 77.94 | 87.32 | 60.48 | 80.48 | 95.55 | 86.40 | 94.97 | 87.20 |
| [hotchpotch/mMiniLMv2-L12-H384](https://huggingface.co/hotchpotch/mMiniLMv2-L12-H384)| 118M | 21M | 82.59 | 62.70 | 83.77 | 78.61 | 87.69 | 79.58 | 87.65 | 60.48 | 81.55 | 95.88 | 90.00 | 94.89 | 88.28 |
| [mBERT](https://huggingface.co/google-bert/bert-base-multilingual-cased)| 178M | 86M | 83.48 | 66.08 | 82.76 | 77.32 | 88.15 | 84.20 | 91.25 | 60.56 | 84.18 | 97.01 | 89.21 | 95.05 | 85.99 |
| [XLM-RoBERTa-base](https://huggingface.co/FacebookAI/xlm-roberta-base)| 278M | 86M | 84.36 | 69.44 | 82.86 | 78.71 | 88.14 | 83.17 | 91.27 | 60.48 | 83.34 | 95.93 | 91.91 | 95.82 | 91.20 |
| [XLM-RoBERTa-large](https://huggingface.co/FacebookAI/xlm-roberta-large)| 560M | 303M | 86.95 | 80.07 | 84.47 | 80.42 | 92.16 | 84.74 | 93.87 | 60.48 | 88.03 | 97.01 | 93.37 | 96.03 | 92.72 |
The evaluation results are shown in the table.
`#Param.` represents the number of parameters in both the input embedding layer and the Transformer layers, while `#Param. w/o Emb.` indicates the number of parameters in the Transformer layers only.
Despite being a long-context model capable of processing sequences of up to 8,192 tokens, our ModernBERT-Ja-70M also exhibited strong performance in short-sequence evaluations.
## Ethical Considerations
ModernBERT-Ja-70M may produce representations that reflect biases.
When you use this model for masked language modeling, it may generate biased or harmful expressions.
## License
[MIT License](https://huggingface.co/sbintuitions/modernbert-ja-70m/blob/main/LICENSE)
## Citation
```bibtex
@misc{
modernbert-ja,
author = {Tsukagoshi, Hayato and Li, Shengzhe and Fukuchi, Akihiko and Shibata, Tomohide},
title = {{ModernBERT-Ja}},
howpublished = {\url{https://huggingface.co/collections/sbintuitions/modernbert-ja-67b68fe891132877cf67aa0a}},
url = {https://huggingface.co/collections/sbintuitions/modernbert-ja-67b68fe891132877cf67aa0a},
year = {2025},
}
``` |
soob3123/amoral-qwen3-8B-Q4_K_M-GGUF | soob3123 | 2025-05-01T03:42:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:soob3123/amoral-qwen3-8B",
"base_model:quantized:soob3123/amoral-qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T03:41:45Z | ---
base_model: soob3123/amoral-qwen3-8B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- llama-cpp
- gguf-my-repo
---
# soob3123/amoral-qwen3-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`soob3123/amoral-qwen3-8B`](https://huggingface.co/soob3123/amoral-qwen3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/soob3123/amoral-qwen3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo soob3123/amoral-qwen3-8B-Q4_K_M-GGUF --hf-file amoral-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo soob3123/amoral-qwen3-8B-Q4_K_M-GGUF --hf-file amoral-qwen3-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo soob3123/amoral-qwen3-8B-Q4_K_M-GGUF --hf-file amoral-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo soob3123/amoral-qwen3-8B-Q4_K_M-GGUF --hf-file amoral-qwen3-8b-q4_k_m.gguf -c 2048
```
|
nathan-assis/Llama-3.2-3B-Instruct-Q4_K_M-GGUF | nathan-assis | 2025-05-01T03:38:54Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-04-29T23:30:59Z | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
# nathan-assis/Llama-3.2-3B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo nathan-assis/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo nathan-assis/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo nathan-assis/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo nathan-assis/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m-imat.gguf -c 2048
```
|
Kanda-Gangu-Chettri-7-2-Nepali-Video-link/Video.link.Gangu.Chettri.Kanda.7.2.minute.Videos.oficial | Kanda-Gangu-Chettri-7-2-Nepali-Video-link | 2025-05-01T03:33:30Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T03:33:23Z | ---
license: apache-2.0
---
|
bartowski/microsoft_Phi-4-reasoning-plus-GGUF | bartowski | 2025-05-01T03:31:51Z | 0 | 2 | null | [
"gguf",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"reasoning",
"text-generation",
"en",
"base_model:microsoft/Phi-4-reasoning-plus",
"base_model:quantized:microsoft/Phi-4-reasoning-plus",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T01:05:36Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
widget:
- messages:
- role: user
content: What is the derivative of x^2?
license: mit
base_model_relation: quantized
license_link: https://huggingface.co/microsoft/Phi-4-reasoning-plus/resolve/main/LICENSE
language:
- en
base_model: microsoft/Phi-4-reasoning-plus
inference:
parameters:
temperature: 0
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- reasoning
---
## Llamacpp imatrix Quantizations of Phi-4-reasoning-plus by microsoft
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5228">b5228</a> for quantization.
Original model: https://huggingface.co/microsoft/Phi-4-reasoning-plus
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
```
<|im_start|>system<|im_sep|>You are Phi, a language model trained by Microsoft to help users. Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format:<think>{Thought section}</think>{Solution section}. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion. Now, try to solve the following question through the above guidelines:<|im_end|>{system_prompt}<|end|><|user|>{prompt}<|end|><|assistant|>
```
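If you prefer not to assemble this string by hand, the chat template shipped with the upstream (non-GGUF) repository can be applied with `transformers`; the sketch below assumes that template is available, and the resulting string can then be passed to `llama-cli` via `-p`.

```python
from transformers import AutoTokenizer

# Assumes the upstream repository's tokenizer and chat template are accessible.
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-reasoning-plus")

messages = [
    {"role": "user", "content": "What is the derivative of x^2?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # pass this string to llama-cli with -p "..."
```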
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Phi-4-reasoning-plus-bf16.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-bf16.gguf) | bf16 | 29.32GB | false | Full BF16 weights. |
| [Phi-4-reasoning-plus-Q8_0.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q8_0.gguf) | Q8_0 | 15.58GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Phi-4-reasoning-plus-Q6_K_L.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q6_K_L.gguf) | Q6_K_L | 12.28GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Phi-4-reasoning-plus-Q6_K.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q6_K.gguf) | Q6_K | 12.03GB | false | Very high quality, near perfect, *recommended*. |
| [Phi-4-reasoning-plus-Q5_K_L.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q5_K_L.gguf) | Q5_K_L | 10.92GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Phi-4-reasoning-plus-Q5_K_M.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q5_K_M.gguf) | Q5_K_M | 10.60GB | false | High quality, *recommended*. |
| [Phi-4-reasoning-plus-Q5_K_S.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q5_K_S.gguf) | Q5_K_S | 10.15GB | false | High quality, *recommended*. |
| [Phi-4-reasoning-plus-Q4_K_L.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q4_K_L.gguf) | Q4_K_L | 9.43GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Phi-4-reasoning-plus-Q4_1.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q4_1.gguf) | Q4_1 | 9.27GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Phi-4-reasoning-plus-Q4_K_M.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q4_K_M.gguf) | Q4_K_M | 9.05GB | false | Good quality, default size for most use cases, *recommended*. |
| [Phi-4-reasoning-plus-Q4_K_S.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q4_K_S.gguf) | Q4_K_S | 8.44GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Phi-4-reasoning-plus-Q4_0.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q4_0.gguf) | Q4_0 | 8.41GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Phi-4-reasoning-plus-IQ4_NL.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-IQ4_NL.gguf) | IQ4_NL | 8.38GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Phi-4-reasoning-plus-Q3_K_XL.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q3_K_XL.gguf) | Q3_K_XL | 8.38GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Phi-4-reasoning-plus-IQ4_XS.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-IQ4_XS.gguf) | IQ4_XS | 7.94GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Phi-4-reasoning-plus-Q3_K_L.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q3_K_L.gguf) | Q3_K_L | 7.93GB | false | Lower quality but usable, good for low RAM availability. |
| [Phi-4-reasoning-plus-Q3_K_M.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q3_K_M.gguf) | Q3_K_M | 7.36GB | false | Low quality. |
| [Phi-4-reasoning-plus-IQ3_M.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-IQ3_M.gguf) | IQ3_M | 6.91GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Phi-4-reasoning-plus-Q3_K_S.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q3_K_S.gguf) | Q3_K_S | 6.50GB | false | Low quality, not recommended. |
| [Phi-4-reasoning-plus-IQ3_XS.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-IQ3_XS.gguf) | IQ3_XS | 6.25GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Phi-4-reasoning-plus-Q2_K_L.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q2_K_L.gguf) | Q2_K_L | 6.05GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Phi-4-reasoning-plus-IQ3_XXS.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-IQ3_XXS.gguf) | IQ3_XXS | 5.85GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Phi-4-reasoning-plus-Q2_K.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-Q2_K.gguf) | Q2_K | 5.55GB | false | Very low quality but surprisingly usable. |
| [Phi-4-reasoning-plus-IQ2_M.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-IQ2_M.gguf) | IQ2_M | 5.11GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Phi-4-reasoning-plus-IQ2_S.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-plus-GGUF/blob/main/microsoft_Phi-4-reasoning-plus-IQ2_S.gguf) | IQ2_S | 4.73GB | false | Low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/microsoft_Phi-4-reasoning-plus-GGUF --include "microsoft_Phi-4-reasoning-plus-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/microsoft_Phi-4-reasoning-plus-GGUF --include "microsoft_Phi-4-reasoning-plus-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (microsoft_Phi-4-reasoning-plus-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
kostiantynk1205/7f149e5b-5915-41ad-8096-d4ace5be4f69 | kostiantynk1205 | 2025-05-01T03:30:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:28220cd188a438e8_train_data.json",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"region:us"
] | null | 2025-05-01T03:29:53Z | ---
library_name: peft
tags:
- generated_from_trainer
datasets:
- 28220cd188a438e8_train_data.json
base_model: microsoft/phi-1_5
model-index:
- name: kostiantynk1205/7f149e5b-5915-41ad-8096-d4ace5be4f69
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kostiantynk1205/7f149e5b-5915-41ad-8096-d4ace5be4f69
This model was trained from scratch on the /workspace/input_data/28220cd188a438e8_train_data.json dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1547
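Since this repository contains a PEFT adapter rather than full weights, a minimal loading sketch might look like the following (this assumes the adapter applies cleanly on top of `microsoft/phi-1_5`; the prompt and generation settings are illustrative only):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter from this repository.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
model = PeftModel.from_pretrained(base, "kostiantynk1205/7f149e5b-5915-41ad-8096-d4ace5be4f69")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```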
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
BKM1804/SmolLM-135M-dpo-tuned | BKM1804 | 2025-05-01T03:28:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:unsloth/SmolLM-135M",
"base_model:finetune:unsloth/SmolLM-135M",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T03:28:17Z | ---
base_model: unsloth/SmolLM-135M
library_name: transformers
model_name: SmolLM-135M-dpo-tuned
tags:
- generated_from_trainer
- unsloth
- trl
- dpo
licence: license
---
# Model Card for SmolLM-135M-dpo-tuned
This model is a fine-tuned version of [unsloth/SmolLM-135M](https://huggingface.co/unsloth/SmolLM-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="BKM1804/SmolLM-135M-dpo-tuned", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/buikhacminh1804/dpo-train/runs/gkinmbyl)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
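For reference, a minimal DPO fine-tuning sketch with TRL looks roughly like the one below; the preference dataset and hyperparameters are illustrative placeholders, not the exact settings used for this model:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM-135M")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM-135M")

# Any preference dataset with "prompt", "chosen" and "rejected" columns works here.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

config = DPOConfig(output_dir="SmolLM-135M-dpo-tuned", beta=0.1, per_device_train_batch_size=2)
trainer = DPOTrainer(model=model, args=config, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```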
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
DanielNRU/pollen-ner-cycle-50 | DanielNRU | 2025-05-01T03:27:44Z | 3 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:adapter:DeepPavlov/rubert-base-cased",
"region:us"
] | null | 2025-04-23T09:47:43Z | ---
library_name: peft
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner-cycle-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner-cycle-50
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9515
- Precision: 0.0223
- Recall: 0.0774
- F1: 0.0347
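A minimal inference sketch for the adapter is shown below; it assumes the adapter was trained for token classification on top of `DeepPavlov/rubert-base-cased`, and the number of labels and the example sentence are placeholders that must match the actual training setup:
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import PeftModel

# num_labels is an assumption here; it must match the label set used during training.
base = AutoModelForTokenClassification.from_pretrained("DeepPavlov/rubert-base-cased", num_labels=3)
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")
model = PeftModel.from_pretrained(base, "DanielNRU/pollen-ner-cycle-50")

inputs = tokenizer("В пробе обнаружена пыльца берёзы.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # per-token label ids
```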
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 7 | 1.9937 | 0.0239 | 0.0928 | 0.0380 |
| No log | 2.0 | 14 | 1.9515 | 0.0223 | 0.0774 | 0.0347 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1 |
izzcw/dpo_crafting_lora_from_base | izzcw | 2025-05-01T03:26:19Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama-factory",
"lora",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-04-30T20:34:22Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lora
- trl
- dpo
- generated_from_trainer
model-index:
- name: dpo_crafting_lora_from_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_crafting_lora_from_base
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the crafting_dpo_data dataset.
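Because this repository ships a LoRA adapter rather than merged weights, one way to use it (a sketch, assuming the adapter is compatible with the instruct base model and you have access to the gated Llama 3 weights) is to merge it into the base model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

model = PeftModel.from_pretrained(base, "izzcw/dpo_crafting_lora_from_base")
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("llama3-8b-crafting-dpo-merged")
tokenizer.save_pretrained("llama3-8b-crafting-dpo-merged")
```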
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- PEFT 0.12.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
Link-Viral-Arovi-Nusrat-Ridhi-Oh/Watch-Arovi.Nusrat.ridhi.viral.video.link | Link-Viral-Arovi-Nusrat-Ridhi-Oh | 2025-05-01T03:25:25Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T03:25:15Z | ---
license: apache-2.0
---
<a data-target="animated-image.originalLink" rel="nofollow" href="https://t.co/RqB7gZez8s"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a> |
cvoffer/0727fb95-44b3-40eb-83e0-3245f20e9b00 | cvoffer | 2025-05-01T03:23:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Solar-10b-64k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-64k",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T02:11:30Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Solar-10b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0727fb95-44b3-40eb-83e0-3245f20e9b00
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: NousResearch/Yarn-Solar-10b-64k
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- aec5b1777769e358_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aec5b1777769e358_train_data.json
type:
field_instruction: Prompt
field_output: Upsampled
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: cvoffer/0727fb95-44b3-40eb-83e0-3245f20e9b00
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/aec5b1777769e358_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6fecb4a6-3fc6-4ba4-95da-6c16add53bb7
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 6fecb4a6-3fc6-4ba4-95da-6c16add53bb7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0727fb95-44b3-40eb-83e0-3245f20e9b00
This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-64k](https://huggingface.co/NousResearch/Yarn-Solar-10b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3034
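A loading sketch that mirrors the 8-bit setting used during training is given below; the quantization config, prompt, and generation settings are illustrative assumptions rather than a documented recipe:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Yarn-Solar-10b-64k",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Solar-10b-64k", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "cvoffer/0727fb95-44b3-40eb-83e0-3245f20e9b00")

inputs = tokenizer("Describe a rainy city street at night.", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```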
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2755 | 0.0169 | 150 | 1.3034 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mlx-community/Phi-4-reasoning-plus-8bit | mlx-community | 2025-05-01T03:18:20Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"phi3",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"reasoning",
"text-generation",
"en",
"base_model:microsoft/Phi-4-reasoning-plus",
"base_model:quantized:microsoft/Phi-4-reasoning-plus",
"license:mit",
"8-bit",
"region:us"
] | text-generation | 2025-05-01T03:10:32Z | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-reasoning-plus/resolve/main/LICENSE
language:
- en
base_model: microsoft/Phi-4-reasoning-plus
pipeline_tag: text-generation
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- reasoning
- mlx
inference:
parameters:
temperature: 0
widget:
- messages:
- role: user
content: What is the derivative of x^2?
library_name: mlx
---
# mlx-community/Phi-4-reasoning-plus-8bit
This model [mlx-community/Phi-4-reasoning-plus-8bit](https://huggingface.co/mlx-community/Phi-4-reasoning-plus-8bit) was
converted to MLX format from [microsoft/Phi-4-reasoning-plus](https://huggingface.co/microsoft/Phi-4-reasoning-plus)
using mlx-lm version **0.23.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Phi-4-reasoning-plus-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
zaydzuhri/vanilla-340M-4096-model | zaydzuhri | 2025-05-01T03:17:41Z | 23 | 0 | null | [
"safetensors",
"transformer",
"arxiv:2504.20966",
"region:us"
] | null | 2025-04-21T07:15:55Z | # This model is from the paper arxiv.org/abs/2504.20966
# Softpick: No Attention Sink, No Massive Activations with Rectified Softmax
See code: https://github.com/zaydzuhri/softpick-attention
This model is only usable through these repositories:
https://github.com/zaydzuhri/flash-linear-attention/tree/softpick-attention
https://github.com/zaydzuhri/flame/tree/softpick-attention |
zaydzuhri/softpick-340M-4096-model | zaydzuhri | 2025-05-01T03:17:18Z | 42 | 1 | null | [
"safetensors",
"transformer",
"arxiv:2504.20966",
"region:us"
] | null | 2025-04-19T09:06:00Z | # This model is from the paper arxiv.org/abs/2504.20966
# Softpick: No Attention Sink, No Massive Activations with Rectified Softmax
See code: https://github.com/zaydzuhri/softpick-attention
This model is only usable through these repositories:
https://github.com/zaydzuhri/flash-linear-attention/tree/softpick-attention
https://github.com/zaydzuhri/flame/tree/softpick-attention
<div align="center">
# 🔥 Flame: Flash Linear Attention Made Easy
</div>
Welcome to 🔥 `flame`, a minimal and efficient framework built on `torchtitan` for training Flash Linear Attention (FLA) models (and more broadly, arbitrary autoregressive language models) with blazing efficiency.
**Feature Highlights:**
- 🚀 Minimal, easy-to-use, extensible training framework
- 🤗 Seamless integration with `fla` and `transformers`
- 🔄 Zero-cost data preprocessing: online tokenization, dataset shuffling, and multiple datasets support
- 🔮 4D parallelism (coming soon)
## Setup
To get started, clone the `flame` repository and install the required dependencies:
```bash
git clone https://github.com/fla-org/flame.git
cd flame
pip install .
```
`flame` manages minimal dependencies, only including `fla` and `torchtitan` as submodules.
After installation, initialize and update the submodules:
```sh
git submodule update --init --recursive
```
## Dataset Preparation
To download the dataset to your local disk, create a new Python file with the following content and execute it:
```py
from datasets import load_dataset
# load fineweb-edu with parallel processing
dataset = load_dataset("HuggingFaceFW/fineweb-edu", name="default", num_proc=64, cache_dir="/your/cache/path")
# or load a subset with roughly 100B tokens, suitable for small- or medium-sized experiments
dataset = load_dataset("HuggingFaceFW/fineweb-edu", name="sample-100BT", num_proc=64, cache_dir="/your/cache/path")
```
## Training Recipes
Here's an example of training a 340M FLA Transformer model with a LLaMA-like architecture from scratch on a 100BT subset of the Fineweb-edu corpus in streaming mode.
> [!WARNING]
> If the dataset is not downloaded beforehand, the streaming mode will attempt to fetch it from a remote server and download it on-the-fly, which can be highly unstable during training due to network issues.
> For stable training, ensure the dataset is downloaded locally (see [**Dataset Preparation**](#dataset-preparation)). Otherwise, we assume you are only testing the new corpus.
```sh
bash train.sh \
--job.config_file flame/models/fla.toml \
--job.dump_folder exp/transformer-340M-4K-10B/batch1.seqlen65536.context4096.warmup1024.update1.steps20480.lr3e-4.cosine \
--model.config configs/transformer_340M.json \
--model.tokenizer_path fla-hub/transformer-1.3B-100B \
--optimizer.name AdamW \
--optimizer.eps 1e-15 \
--optimizer.lr 3e-4 \
--lr_scheduler.warmup_steps 1024 \
--lr_scheduler.lr_min 0.1 \
--lr_scheduler.decay_type cosine \
--training.batch_size 1 \
--training.seq_len 65536 \
--training.context_len 4096 \
--training.varlen \
--training.gradient_accumulation_steps 1 \
--training.steps 20480 \
--training.max_norm 1.0 \
--training.skip_nan_inf \
--training.dataset HuggingFaceFW/fineweb-edu \
--training.dataset_name sample-100BT \
--training.dataset_split train \
--training.streaming \
--training.num_workers 32 \
--training.prefetch_factor 2 \
--training.seed 42 \
--training.compile \
--checkpoint.interval 2048 \
--checkpoint.load_step -1 \
--checkpoint.keep_latest_k 2 \
--metrics.log_freq 1
```
You can specify the number of GPUs by setting the environment variable `NGPU`, which defaults to 8.
**For single-GPU debugging, set `NGPU=1`.**
We provide several [config files](https://github.com/fla-org/flame/tree/main/configs) for different models.
By default, the learning rate is set to 3e-4 with a cosine scheduler. Other schedulers, such as WSD (wsd), are also supported.
**Key parameters:**
- `--lr_scheduler.decay_ratio`: The proportion of the steps allocated to the decay phase. The learning rate will remain stable after the warmup period and only start decaying during the last `decay_ratio` portion of the total training steps, which is known as the Warmup-Stable-Decay (WSD) schedule.
- `--lr_scheduler.warmup_steps`: The number of steps for the learning rate warmup phase.
- `--training.steps`: Total number of training steps.
- `--training.batch_size`: Batch size per device, must be 1 if `--training.varlen` is set.
- `--training.seq_len`: The length of each sequence in the batch, which is concatenated from multiple samples.
- `--training.context_len`: The max allowed length of a sample. For non-varlen mode, this is equivalent to `seq_len`.
- `--training.varlen`: Whether to conduct variable-length sequence training.
- `--training.gradient_accumulation_steps`: Number of gradient accumulation steps.
> [!WARNING]
> The total number of tokens processed per batch, referred to as `global_batch_size`, is calculated as batch_size × gradient_accumulation_steps × num_gpus.
> Each step processes `global_batch_size * seq_len` tokens.
> Monitor the value of `global_batch_size`, `warmup_steps`, and `steps` carefully when modifying any of the hyperparameters!
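To make the arithmetic in the warning above concrete, here is a quick sanity check in Python using the numbers from the recipe above (not part of the original script):
```python
# Recipe above: 8 GPUs, batch_size=1, gradient_accumulation_steps=1, seq_len=65536, 20480 steps.
num_gpus = 8
batch_size = 1
gradient_accumulation_steps = 1
seq_len = 65536
steps = 20480

global_batch_size = batch_size * gradient_accumulation_steps * num_gpus  # 8 sequences per step
tokens_per_step = global_batch_size * seq_len                            # 524,288 tokens per step
total_tokens = tokens_per_step * steps                                   # ~10.7B tokens in total
print(global_batch_size, tokens_per_step, total_tokens)
```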
For a detailed explanation of all parameters, run:
```sh
bash train.sh -h
```
<details>
<summary>Usage</summary>
```py
options:
-h, --help show this help message and exit
--job.config_file JOB.CONFIG_FILE
Job config file
--job.dump_folder JOB.DUMP_FOLDER
Folder to dump job outputs
--job.description JOB.DESCRIPTION
Description of the job
--job.use_for_integration_test
Add this config to the integration test suite
--job.print_args Print the args to terminal
--model.config MODEL.CONFIG
Path to the model config
--model.norm_type MODEL.NORM_TYPE
Type of layer normalization to use [layernorm,
np_layernorm, rmsnorm, fused_rmsnorm]
--model.tokenizer_path MODEL.TOKENIZER_PATH
Tokenizer path
--profiling.enable_profiling
Whether to enable pytorch profiler
--profiling.save_traces_folder PROFILING.SAVE_TRACES_FOLDER
Trace files location
--profiling.profile_freq PROFILING.PROFILE_FREQ
How often to collect profiler traces, in iterations
--profiling.enable_memory_snapshot
Whether to dump memory snapshot
--profiling.save_memory_snapshot_folder PROFILING.SAVE_MEMORY_SNAPSHOT_FOLDER
                        Memory snapshot files location
--optimizer.name OPTIMIZER.NAME
Optimizer to use
--optimizer.eps OPTIMIZER.EPS
Epsilon value for the optimizer.
--optimizer.fused Whether the fused implementation(CUDA only) is used.
--optimizer.scheduler {wsd,cosine,linear}
Scheduler to use. Currently supported: wsd, cosine,
and linear.
--optimizer.lr OPTIMIZER.LR
Learning rate to use
--optimizer.min_lr_ratio OPTIMIZER.MIN_LR_RATIO
Min lr ratio for lr scheduler
--optimizer.early_step_in_backward
Whether to apply optimizer in the backward. Caution,
optimizer_in_backward is not compatible with gradients
clipping, users should not call
register_post_accumulate_grad_hook after the optimizer
is built.
--training.batch_size TRAINING.BATCH_SIZE
Batch size
--training.seq_len TRAINING.SEQ_LEN
Sequence length
--training.context_len TRAINING.CONTEXT_LEN
Max length allowed for each sequence
--training.varlen Whether to take sequences of variable length as input
--training.warmup_steps TRAINING.WARMUP_STEPS
Steps for lr scheduler warmup, normally 1/5 of
--training.steps
--training.gradient_accumulation_steps TRAINING.GRADIENT_ACCUMULATION_STEPS
Number of steps to accumulate gradients before
updating parameters
--training.steps TRAINING.STEPS
How many train steps to run
--training.max_norm TRAINING.MAX_NORM
Max norm for gradient clipping
--training.skip_nan_inf
Skip batch updates when NaN or INF gradients are
encountered during training
--training.dataset TRAINING.DATASET
Dataset to use, with comma separated values
--training.dataset_name TRAINING.DATASET_NAME
The name of the dataset config, with comma separated
values if provided
--training.dataset_split TRAINING.DATASET_SPLIT
Dataset split to use, with comma separated values if
provided
--training.data_dir TRAINING.DATA_DIR
Data dirs to use, with comma separated values if
provided
--training.data_files TRAINING.DATA_FILES
Data files to use, with comma separated values if
provided
--training.data_probs TRAINING.DATA_PROBS
Data sampling probabilities, with comma separated
values if provided
--training.streaming Whether to load dataset in streaming mode, used for
huge dataset
--training.num_workers TRAINING.NUM_WORKERS
Number of subprocesses to use for data loading. 0
means that the data will be loaded in the main
process.
--training.prefetch_factor TRAINING.PREFETCH_FACTOR
Number of batches loaded in advance by each worker.2
means there will be a total of 2 * num_workers batches
prefetched across all workers.
--training.data_parallel_replicate_degree TRAINING.DATA_PARALLEL_REPLICATE_DEGREE
The `data_parallel_replicate_degree` argument
specifies the degree of data parallelism for weight
replication. When this value is greater than 1,
weights will be replicated across
`data_parallel_replicate_degree` ranks. If
`data_parallel_shard_degree` is also greater than 1,
the parallelism method used is HSDP (Hybrid Sharded
Data Parallelism). Otherwise, the parallelism method
used is DDP (Distributed Data Parallelism). 1 means
disabled.
--training.data_parallel_shard_degree TRAINING.DATA_PARALLEL_SHARD_DEGREE
The `data_parallel_shard_degree` argument specifies
the degree of data parallelism for weight sharding.
When this value is greater than 1, weights will be
sharded across `data_parallel_shard_degree` ranks. If
`data_parallel_replicate_degree` is also greater than
1, the parallelism method used is HSDP (Hybrid Sharded
Data Parallelism). Otherwise, the parallelism method
used is FSDP (Fully Sharded Data Parallelism). -1
means leftover ranks will be used (After
DP_REPLICATE/SP/PP). Note that only
`data_parallel_shard_degree` can be negative. 1 means
disabled.
--training.enable_cpu_offload
Whether to apply CPU offloading of parameters,
gradients, and optimizer states in FSDP
--training.tensor_parallel_degree TRAINING.TENSOR_PARALLEL_DEGREE
Tensor Parallelism degree. 1 means disabled.
--training.disable_loss_parallel
Whether to apply loss parallel when sequence parallel
is enabled
--training.mixed_precision_param {bfloat16,float32}
torch dtype to use for parameters when applying mixed
precision via FSDP. This feature only takes effect
when data_parallel_shard_degree > 1
--training.mixed_precision_reduce {float32}
torch dtype to use for reductions when applying mixed
precision via FSDP. This feature only takes effect
when data_parallel_shard_degree > 1
--training.compile Whether to compile the model
--training.gc_freq TRAINING.GC_FREQ
Python garbage control scheduling interval, in steps
--training.seed TRAINING.SEED
Choose the base RNG seed used for training
--training.deterministic
Use deterministic algorithms wherever possible, may be
slower
--metrics.log_freq METRICS.LOG_FREQ
How often to log metrics to TensorBoard, in iterations
--metrics.enable_tensorboard
Whether to log metrics to TensorBoard
--metrics.disable_color_printing
Whether to disable color printing in logs
--metrics.save_tb_folder METRICS.SAVE_TB_FOLDER
Folder to dump TensorBoard states
--metrics.rank_0_only
Whether to save TensorBoard metrics only for rank 0 or
for all ranks. When pipeline_parallel_degree is > 1,
this option uses the 0th rank of the last stage
pipeline group, which is the only stage that computes
loss metrics.
--metrics.enable_wandb
Whether to log metrics to Weights & Biases
--experimental.enable_async_tensor_parallel
Whether to apply async tensor parallel (currently only
effective when compile is enabled)
--experimental.pipeline_parallel_degree EXPERIMENTAL.PIPELINE_PARALLEL_DEGREE
Pipeline Parallelism degree, or number of ranks. 1
means disabled. If using looped schedules, this still
specifies the number of physical ranks, not the number
of stages. Stages per rank are inferred from split
points degree, and schedule.
--experimental.pipeline_parallel_split_points EXPERIMENTAL.PIPELINE_PARALLEL_SPLIT_POINTS [EXPERIMENTAL.PIPELINE_PARALLEL_SPLIT_POINTS ...]
Specify comma-separated names of modules to use as the
beginning of a split point. e.g. "layers.0,layers.2"
will cause the model to be split into 3 stages, the
first containing all the layers up to layers.0, the
second containing layers.0 and up to layers.2, the
third containing layers.2 and all the remaining
layers. Note: fully-automated splitting may be enabled
in the future, but currently the split points must be
specified manually.
--experimental.pipeline_parallel_schedule EXPERIMENTAL.PIPELINE_PARALLEL_SCHEDULE
Specify the Pipeline Parallel schedule to use. The
supported schedules are: https://github.com/pytorch/py
torch/blob/de4c2a3b4e89d96334dc678d1c3f2ae51a6630a0/to
rch/distributed/pipelining/schedules.py#L2161. The
schedule must be compatible with the split points and
stages_per_rank. Looped schedules (e.g.
Interleaved1F1B) require specifying
pipeline_parallel_degree = number of ranks, and
split_points = number of stages - 1
--experimental.pipeline_parallel_schedule_csv EXPERIMENTAL.PIPELINE_PARALLEL_SCHEDULE_CSV
Specify the path to the pipeline parallel schedule csv
file to use. The pipeline_parallel_schedule argument
must be either PipelineScheduleSingle,
PipelineScheduleMulti, or _PipelineScheduleRuntime.
--experimental.pipeline_parallel_microbatches EXPERIMENTAL.PIPELINE_PARALLEL_MICROBATCHES
How many microbatches to split the global training
batch into when using pipeline parallelism. The global
training batch size must be evenly divisible by the
number of microbatches. The default value will be the
number of pipeline stages, if unspecified.
--experimental.enable_compiled_autograd
Enable CompiledAutograd to compile the backward.
--experimental.context_parallel_degree EXPERIMENTAL.CONTEXT_PARALLEL_DEGREE
Context parallelism degree. 1 means disabled.
--experimental.context_parallel_rotate_method EXPERIMENTAL.CONTEXT_PARALLEL_ROTATE_METHOD
The collective to use in context parallel SDPA for kv
shards exchange. 'allgather' means to all-gather all
kv shards on ranks after the first sub-SDPA
computation, 'alltoall' means to all-to-all shuffle
the kv shards. The default value is 'allgather'.
--checkpoint.enable_checkpoint
Whether to enable checkpoint
--checkpoint.folder CHECKPOINT.FOLDER
The folder to store the checkpoints. When
enable_checkpoint is set to true, checkpoints will be
in {--job.dump_folder}/{--checkpoint.folder}.
--checkpoint.interval_type CHECKPOINT.INTERVAL_TYPE
Checkpointing interval unit of measurement ['step',
'seconds']
--checkpoint.interval CHECKPOINT.INTERVAL
Checkpointing interval, in steps or seconds depending
on --checkpoint.interval_type
--checkpoint.model_weights_only
When model_weights_only=True, only model weights will
be saved at the end of training. With this,
checkpoints can be loaded using `torch.load(...,
weights_only=True)` after conversion. When
model_weights_only=False, the full checkpoint will be
saved. A full checkpoint includes model, optimizer and
train_state, which can be used to resume training. The
default value is false.
--checkpoint.export_dtype {float16,bfloat16,float32}
Converts to the specified precision when training
completes and model_weights_only=true. Currently
supports float32, float16, and bfloat16. The default
value is float32.
--checkpoint.create_seed_checkpoint
Initializes the full model without applying
parallelisms, and then saves it as a seed checkpoint.
Note: requires user to call train.py without
specifying any parallelisms, e.g. NGPU=1. Could be
implemented as a separate script, but this way shares
more code.
--checkpoint.async_mode CHECKPOINT.ASYNC_MODE
Which async checkpoint mode to use. Currently there
are 3 different modes. 1. "disabled": synchronized
checkpointing will be used. 2. "async":
torch.distributed.checkpoint.async_save will be used.
1. "async_with_pinned_mem": this option utilizes a
dedicated pinned memory space and creates a separate
process for faster GPU->CPU transfer performance and
eliminating GIL contention. The cost is increased CPU
memory usage. If insufficient CPU memory is available,
performance may degrade due to memory paging. For most
users, "async" should suffice as the performance
overhead is typically small (on the order of tens of
seconds) compared to checkpointing frequency. This
mode can be employed to pursue near-zero checkpointing
times (e.g., < 1 second) given appropriate hardware
support such as ample CPU memory and fast PCIe.
"disabled" is the default mode.
--checkpoint.keep_latest_k CHECKPOINT.KEEP_LATEST_K
Keeps only the latest k checkpoints, and purging older
ones. If 0, keep all checkpoints. 0 is the default
value.
--checkpoint.load_step CHECKPOINT.LOAD_STEP
Load the checkpoint at the specified step. If -1, load
the latest checkpoint.
--float8.enable_float8_linear
If true, swaps `torch.nn.Linear` with `Float8Linear`.
This feature requires you to install 'torchao' which
can be found here: https://github.com/pytorch/ao
--float8.enable_fsdp_float8_all_gather
Whether enable float8 all-gather in FSDP
--float8.precompute_float8_dynamic_scale_for_fsdp
Whether precompute float8 scales dynamically for FSDP
--float8.scaling_type_input {dynamic,delayed}
float8 scaling for input, dynamic (default) or delayed
--float8.scaling_type_weight FLOAT8.SCALING_TYPE_WEIGHT
float8 scaling for input, dynamic (default) or delayed
--float8.scaling_type_grad_output FLOAT8.SCALING_TYPE_GRAD_OUTPUT
float8 scaling for input, dynamic (default) or delayed
--comm.init_timeout_seconds COMM.INIT_TIMEOUT_SECONDS
Timeout for communication operations, during
initialization and first train step.
--comm.train_timeout_seconds COMM.TRAIN_TIMEOUT_SECONDS
Timeout for communication operations after the first
train step -- usually a tighter bound than during
initialization.
--comm.trace_buf_size COMM.TRACE_BUF_SIZE
Flight recorder ring buffer size, >0 means recording
by default, 0 means disabled
--memory_estimation.enabled
Whether to estimate memory usage for FSDP
--memory_estimation.disable_fake_mode
Whether to estimate memory under FakeTensorMode
```
</details>
### Training with `torch.compile`
Starting from `torch 2.0`, `torch.compile` has been introduced as a new feature to seamlessly accelerate training processes.
In `flame`, one can simply enable `torch.compile` by adding `--training.compile` flag to your training script.
However, `fla` has integrated numerous fused kernels for acceleration, which may potentially conflict with `torch.compile`.
We are actively working on resolving these issues to make compilation transparent to users.
In the meantime, please ensure you are using the latest dependencies.
Specifically, **we recommend using `torch>=2.6` and `triton>=3.0`**.
### Training with multiple datasets
If you wish to train a model with all-round capabilities (e.g., code, math, and multilingual ability), it's necessary to train on multiple datasets.
`flame` allows training with multiple datasets easily.
For example, you can specify the following arguments to train on 6 datasets with different proportions:
```sh
--training.dataset HuggingFaceFW/fineweb-edu,opencsg/Fineweb-Edu-Chinese-V2.1,OpenCoder-LLM/opc-fineweb-code-corpus,math-ai/AutoMathText,EleutherAI/proof-pile-2,OpenCoder-LLM/opc-fineweb-math-corpus \
--training.data_probs 0.6,0.15,0.15,0.014,0.058,0.028 \
```
### ~Finalizing training~
> [!NOTE]
> We have done this conversion automatically in the training script since our latest updates.
Once training is complete, you may want to convert the distributed checkpoints (DCPs) into the 🤗 format for broader use.
To facilitate this, we provide a straightforward conversion script:
```sh
python -m flame.utils.convert_dcp_to_hf --path <path_to_model> --step <step> --config <path_to_config> --tokenizer <path_to_tokenizer>
```
After this, your model will be in the 🤗 format, ready to be shared or deployed.
You can then easily publish your model using the `huggingface_hub` for wider accessibility.
### Continual training
If you wish to build upon a strong pre-trained model (in 🤗 format) and continue training, we also offer a script to convert the 🤗 format model back into DCP format.
This allows you to seamlessly resume training with `flame`.
```sh
python -m flame.utils.convert_hf_to_dcp --model <path_to_hf> --checkpoint <path_to_dcp/checkpoint/step-0>
```
Here, `<path_to_dcp>` is the directory where your distributed checkpoints will be stored.
The checkpoint is intentionally saved at `<step-0>` within the checkpoint folder to ensure it is loadable by `flame` during the initial training step, similar to how a seed checkpoint is handled.
Once the conversion is complete, you can proceed with training using `flame` as usual, continuing from where the pretrained model left off.
## Multi-node training
If you have access to multi-node GPUs, consider leveraging them for optimal performance.
This process is straightforward and well-documented in the PyTorch [docs](https://pytorch.org/docs/stable/elastic/run.html).
To set up multi-node training:
* Set the environment variables `MASTER_ADDR=<ip>` and `MASTER_PORT=<port>` before running the training script across all nodes.
* If you're using a job scheduler like Slurm, it will handle these variables for you.
`torchtitan` provides a [Slurm script](https://github.com/pytorch/torchtitan/blob/main/multinode_trainer.slurm) for multi-node training, which you can use as a reference or starting point.
|
AbsolemSnoq/1 | AbsolemSnoq | 2025-05-01T03:16:48Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T03:16:48Z | ---
license: apache-2.0
---
|
zaydzuhri/softpick-340M-4096-batch16-steps100000 | zaydzuhri | 2025-05-01T03:16:40Z | 1 | 0 | null | [
"safetensors",
"transformer",
"arxiv:2504.20966",
"region:us"
] | null | 2025-04-19T08:28:25Z | # This model is from the paper arxiv.org/abs/2504.20966
# Softpick: No Attention Sink, No Massive Activations with Rectified Softmax
See code: https://github.com/zaydzuhri/softpick-attention
This model is only usable through these repositories:
https://github.com/zaydzuhri/flash-linear-attention/tree/softpick-attention
https://github.com/zaydzuhri/flame/tree/softpick-attention
<div align="center">
# 🔥 Flame: Flash Linear Attention Made Easy
</div>
Welcome to 🔥 `flame`, a minimal and efficient framework built on `torchtitan` for training Flash Linear Attention (FLA) models (and more broadly, arbitrary autoregressive language models) with blazing efficiency.
**Feature Highlights:**
- 🚀 Minimal, easy-to-use, extensible training framework
- 🤗 Seamless integration with `fla` and `transformers`
- 🔄 Zero-cost data preprocessing: online tokenization, dataset shuffling, and multiple datasets support
- 🔮 4D parallelism (coming soon)
## Setup
To get started, clone the `flame` repository and install the required dependencies:
```bash
git clone https://github.com/fla-org/flame.git
cd flame
pip install .
```
`flame` manages minimal dependencies, only including `fla` and `torchtitan` as submodules.
After installation, initialize and update the submodules:
```sh
git submodule update --init --recursive
```
## Dataset Preparation
To download the dataset to your local disk, create a new Python file with the following content and execute it:
```py
from datasets import load_dataset
# load fineweb-edu with parallel processing
dataset = load_dataset("HuggingFaceFW/fineweb-edu", name="default", num_proc=64, cache_dir="/your/cache/path")
# or load a subset with roughly 100B tokens, suitable for small- or medium-sized experiments
dataset = load_dataset("HuggingFaceFW/fineweb-edu", name="sample-100BT", num_proc=64, cache_dir="/your/cache/path")
```
## Training Recipes
Here's an example of training a 340M FLA Transformer model with a LLaMA-like architecture from scratch on a 100BT subset of the Fineweb-edu corpus in streaming mode.
> [!WARNING]
> If the dataset is not downloaded beforehand, the streaming mode will attempt to fetch it from a remote server and download it on-the-fly, which can be highly unstable during training due to network issues.
> For stable training, ensure the dataset is downloaded locally (see [**Dataset Preparation**](#dataset-preparation)). Otherwise, we assume you are only testing the new corpus.
```sh
bash train.sh \
--job.config_file flame/models/fla.toml \
--job.dump_folder exp/transformer-340M-4K-10B/batch1.seqlen65536.context4096.warmup1024.update1.steps20480.lr3e-4.cosine \
--model.config configs/transformer_340M.json \
--model.tokenizer_path fla-hub/transformer-1.3B-100B \
--optimizer.name AdamW \
--optimizer.eps 1e-15 \
--optimizer.lr 3e-4 \
--lr_scheduler.warmup_steps 1024 \
--lr_scheduler.lr_min 0.1 \
--lr_scheduler.decay_type cosine \
--training.batch_size 1 \
--training.seq_len 65536 \
--training.context_len 4096 \
--training.varlen \
--training.gradient_accumulation_steps 1 \
--training.steps 20480 \
--training.max_norm 1.0 \
--training.skip_nan_inf \
--training.dataset HuggingFaceFW/fineweb-edu \
--training.dataset_name sample-100BT \
--training.dataset_split train \
--training.streaming \
--training.num_workers 32 \
--training.prefetch_factor 2 \
--training.seed 42 \
--training.compile \
--checkpoint.interval 2048 \
--checkpoint.load_step -1 \
--checkpoint.keep_latest_k 2 \
--metrics.log_freq 1
```
You can specify the number of GPUs by setting the environment variable `NGPU`, which defaults to 8.
**For single-GPU debugging, set `NGPU=1`.**
We provide several [config files](https://github.com/fla-org/flame/tree/main/configs) for different models.
By default, the learning rate is set to 3e-4 with a cosine scheduler. Other schedulers, such as WSD (wsd), are also supported.
**Key parameters:**
- `--lr_scheduler.decay_ratio`: The proportion of the steps allocated to the decay phase. The learning rate will remain stable after the warmup period and only start decaying during the last `decay_ratio` portion of the total training steps, which is known as the Warmup-Stable-Decay (WSD) schedule.
- `--lr_scheduler.warmup_steps`: The number of steps for the learning rate warmup phase.
- `--training.steps`: Total number of training steps.
- `--training.batch_size`: Batch size per device, must be 1 if `--training.varlen` is set.
- `--training.seq_len`: The length of each sequence in the batch, which is concatenated from multiple samples.
- `--training.context_len`: The max allowed length of a sample. For non-varlen mode, this is equivalent to `seq_len`.
- `--training.varlen`: Whether to conduct variable-length sequence training.
- `--training.gradient_accumulation_steps`: Number of gradient accumulation steps.
> [!WARNING]
> The total number of tokens processed per batch, referred to as `global_batch_size`, is calculated as batch_size × gradient_accumulation_steps × num_gpus.
> Each step processes `global_batch_size * seq_len` tokens.
> Monitor the value of `global_batch_size`, `warmup_steps`, and `steps` carefully when modifying any of the hyperparameters!
For a detailed explanation of all parameters, run:
```sh
bash train.sh -h
```
<details>
<summary>Usage</summary>
```py
options:
-h, --help show this help message and exit
--job.config_file JOB.CONFIG_FILE
Job config file
--job.dump_folder JOB.DUMP_FOLDER
Folder to dump job outputs
--job.description JOB.DESCRIPTION
Description of the job
--job.use_for_integration_test
Add this config to the integration test suite
--job.print_args Print the args to terminal
--model.config MODEL.CONFIG
Path to the model config
--model.norm_type MODEL.NORM_TYPE
Type of layer normalization to use [layernorm,
np_layernorm, rmsnorm, fused_rmsnorm]
--model.tokenizer_path MODEL.TOKENIZER_PATH
Tokenizer path
--profiling.enable_profiling
Whether to enable pytorch profiler
--profiling.save_traces_folder PROFILING.SAVE_TRACES_FOLDER
Trace files location
--profiling.profile_freq PROFILING.PROFILE_FREQ
How often to collect profiler traces, in iterations
--profiling.enable_memory_snapshot
Whether to dump memory snapshot
--profiling.save_memory_snapshot_folder PROFILING.SAVE_MEMORY_SNAPSHOT_FOLDER
                        Memory snapshot files location
--optimizer.name OPTIMIZER.NAME
Optimizer to use
--optimizer.eps OPTIMIZER.EPS
Epsilon value for the optimizer.
--optimizer.fused Whether the fused implementation(CUDA only) is used.
--optimizer.scheduler {wsd,cosine,linear}
Scheduler to use. Currently supported: wsd, cosine,
and linear.
--optimizer.lr OPTIMIZER.LR
Learning rate to use
--optimizer.min_lr_ratio OPTIMIZER.MIN_LR_RATIO
Min lr ratio for lr scheduler
--optimizer.early_step_in_backward
Whether to apply optimizer in the backward. Caution,
optimizer_in_backward is not compatible with gradients
clipping, users should not call
register_post_accumulate_grad_hook after the optimizer
is built.
--training.batch_size TRAINING.BATCH_SIZE
Batch size
--training.seq_len TRAINING.SEQ_LEN
Sequence length
--training.context_len TRAINING.CONTEXT_LEN
Max length allowed for each sequence
--training.varlen Whether to take sequences of variable length as input
--training.warmup_steps TRAINING.WARMUP_STEPS
Steps for lr scheduler warmup, normally 1/5 of
--training.steps
--training.gradient_accumulation_steps TRAINING.GRADIENT_ACCUMULATION_STEPS
Number of steps to accumulate gradients before
updating parameters
--training.steps TRAINING.STEPS
How many train steps to run
--training.max_norm TRAINING.MAX_NORM
Max norm for gradient clipping
--training.skip_nan_inf
Skip batch updates when NaN or INF gradients are
encountered during training
--training.dataset TRAINING.DATASET
Dataset to use, with comma separated values
--training.dataset_name TRAINING.DATASET_NAME
The name of the dataset config, with comma separated
values if provided
--training.dataset_split TRAINING.DATASET_SPLIT
Dataset split to use, with comma separated values if
provided
--training.data_dir TRAINING.DATA_DIR
Data dirs to use, with comma separated values if
provided
--training.data_files TRAINING.DATA_FILES
Data files to use, with comma separated values if
provided
--training.data_probs TRAINING.DATA_PROBS
Data sampling probabilities, with comma separated
values if provided
--training.streaming Whether to load dataset in streaming mode, used for
huge dataset
--training.num_workers TRAINING.NUM_WORKERS
Number of subprocesses to use for data loading. 0
means that the data will be loaded in the main
process.
--training.prefetch_factor TRAINING.PREFETCH_FACTOR
Number of batches loaded in advance by each worker.2
means there will be a total of 2 * num_workers batches
prefetched across all workers.
--training.data_parallel_replicate_degree TRAINING.DATA_PARALLEL_REPLICATE_DEGREE
The `data_parallel_replicate_degree` argument
specifies the degree of data parallelism for weight
replication. When this value is greater than 1,
weights will be replicated across
`data_parallel_replicate_degree` ranks. If
`data_parallel_shard_degree` is also greater than 1,
the parallelism method used is HSDP (Hybrid Sharded
Data Parallelism). Otherwise, the parallelism method
used is DDP (Distributed Data Parallelism). 1 means
disabled.
--training.data_parallel_shard_degree TRAINING.DATA_PARALLEL_SHARD_DEGREE
The `data_parallel_shard_degree` argument specifies
the degree of data parallelism for weight sharding.
When this value is greater than 1, weights will be
sharded across `data_parallel_shard_degree` ranks. If
`data_parallel_replicate_degree` is also greater than
1, the parallelism method used is HSDP (Hybrid Sharded
Data Parallelism). Otherwise, the parallelism method
used is FSDP (Fully Sharded Data Parallelism). -1
means leftover ranks will be used (After
DP_REPLICATE/SP/PP). Note that only
`data_parallel_shard_degree` can be negative. 1 means
disabled.
--training.enable_cpu_offload
Whether to apply CPU offloading of parameters,
gradients, and optimizer states in FSDP
--training.tensor_parallel_degree TRAINING.TENSOR_PARALLEL_DEGREE
Tensor Parallelism degree. 1 means disabled.
--training.disable_loss_parallel
Whether to apply loss parallel when sequence parallel
is enabled
--training.mixed_precision_param {bfloat16,float32}
torch dtype to use for parameters when applying mixed
precision via FSDP. This feature only takes effect
when data_parallel_shard_degree > 1
--training.mixed_precision_reduce {float32}
torch dtype to use for reductions when applying mixed
precision via FSDP. This feature only takes effect
when data_parallel_shard_degree > 1
--training.compile Whether to compile the model
--training.gc_freq TRAINING.GC_FREQ
Python garbage control scheduling interval, in steps
--training.seed TRAINING.SEED
Choose the base RNG seed used for training
--training.deterministic
Use deterministic algorithms wherever possible, may be
slower
--metrics.log_freq METRICS.LOG_FREQ
How often to log metrics to TensorBoard, in iterations
--metrics.enable_tensorboard
Whether to log metrics to TensorBoard
--metrics.disable_color_printing
Whether to disable color printing in logs
--metrics.save_tb_folder METRICS.SAVE_TB_FOLDER
Folder to dump TensorBoard states
--metrics.rank_0_only
Whether to save TensorBoard metrics only for rank 0 or
for all ranks. When pipeline_parallel_degree is > 1,
this option uses the 0th rank of the last stage
pipeline group, which is the only stage that computes
loss metrics.
--metrics.enable_wandb
Whether to log metrics to Weights & Biases
--experimental.enable_async_tensor_parallel
Whether to apply async tensor parallel (currently only
effective when compile is enabled)
--experimental.pipeline_parallel_degree EXPERIMENTAL.PIPELINE_PARALLEL_DEGREE
Pipeline Parallelism degree, or number of ranks. 1
means disabled. If using looped schedules, this still
specifies the number of physical ranks, not the number
of stages. Stages per rank are inferred from split
points degree, and schedule.
--experimental.pipeline_parallel_split_points EXPERIMENTAL.PIPELINE_PARALLEL_SPLIT_POINTS [EXPERIMENTAL.PIPELINE_PARALLEL_SPLIT_POINTS ...]
Specify comma-separated names of modules to use as the
beginning of a split point. e.g. "layers.0,layers.2"
will cause the model to be split into 3 stages, the
first containing all the layers up to layers.0, the
second containing layers.0 and up to layers.2, the
third containing layers.2 and all the remaining
layers. Note: fully-automated splitting may be enabled
in the future, but currently the split points must be
specified manually.
--experimental.pipeline_parallel_schedule EXPERIMENTAL.PIPELINE_PARALLEL_SCHEDULE
Specify the Pipeline Parallel schedule to use. The
supported schedules are: https://github.com/pytorch/py
torch/blob/de4c2a3b4e89d96334dc678d1c3f2ae51a6630a0/to
rch/distributed/pipelining/schedules.py#L2161. The
schedule must be compatible with the split points and
stages_per_rank. Looped schedules (e.g.
Interleaved1F1B) require specifying
pipeline_parallel_degree = number of ranks, and
split_points = number of stages - 1
--experimental.pipeline_parallel_schedule_csv EXPERIMENTAL.PIPELINE_PARALLEL_SCHEDULE_CSV
Specify the path to the pipeline parallel schedule csv
file to use. The pipeline_parallel_schedule argument
must be either PipelineScheduleSingle,
PipelineScheduleMulti, or _PipelineScheduleRuntime.
--experimental.pipeline_parallel_microbatches EXPERIMENTAL.PIPELINE_PARALLEL_MICROBATCHES
How many microbatches to split the global training
batch into when using pipeline parallelism. The global
training batch size must be evenly divisible by the
number of microbatches. The default value will be the
number of pipeline stages, if unspecified.
--experimental.enable_compiled_autograd
Enable CompiledAutograd to compile the backward.
--experimental.context_parallel_degree EXPERIMENTAL.CONTEXT_PARALLEL_DEGREE
Context parallelism degree. 1 means disabled.
--experimental.context_parallel_rotate_method EXPERIMENTAL.CONTEXT_PARALLEL_ROTATE_METHOD
The collective to use in context parallel SDPA for kv
shards exchange. 'allgather' means to all-gather all
kv shards on ranks after the first sub-SDPA
computation, 'alltoall' means to all-to-all shuffle
the kv shards. The default value is 'allgather'.
--checkpoint.enable_checkpoint
Whether to enable checkpoint
--checkpoint.folder CHECKPOINT.FOLDER
The folder to store the checkpoints. When
enable_checkpoint is set to true, checkpoints will be
in {--job.dump_folder}/{--checkpoint.folder}.
--checkpoint.interval_type CHECKPOINT.INTERVAL_TYPE
Checkpointing interval unit of measurement ['step',
'seconds']
--checkpoint.interval CHECKPOINT.INTERVAL
Checkpointing interval, in steps or seconds depending
on --checkpoint.interval_type
--checkpoint.model_weights_only
When model_weights_only=True, only model weights will
be saved at the end of training. With this,
checkpoints can be loaded using `torch.load(...,
weights_only=True)` after conversion. When
model_weights_only=False, the full checkpoint will be
saved. A full checkpoint includes model, optimizer and
train_state, which can be used to resume training. The
default value is false.
--checkpoint.export_dtype {float16,bfloat16,float32}
Converts to the specified precision when training
completes and model_weights_only=true. Currently
supports float32, float16, and bfloat16. The default
value is float32.
--checkpoint.create_seed_checkpoint
Initializes the full model without applying
parallelisms, and then saves it as a seed checkpoint.
Note: requires user to call train.py without
specifying any parallelisms, e.g. NGPU=1. Could be
implemented as a separate script, but this way shares
more code.
--checkpoint.async_mode CHECKPOINT.ASYNC_MODE
Which async checkpoint mode to use. Currently there
are 3 different modes. 1. "disabled": synchronized
checkpointing will be used. 2. "async":
torch.distributed.checkpoint.async_save will be used.
1. "async_with_pinned_mem": this option utilizes a
dedicated pinned memory space and creates a separate
process for faster GPU->CPU transfer performance and
eliminating GIL contention. The cost is increased CPU
memory usage. If insufficient CPU memory is available,
performance may degrade due to memory paging. For most
users, "async" should suffice as the performance
overhead is typically small (on the order of tens of
seconds) compared to checkpointing frequency. This
mode can be employed to pursue near-zero checkpointing
times (e.g., < 1 second) given appropriate hardware
support such as ample CPU memory and fast PCIe.
"disabled" is the default mode.
--checkpoint.keep_latest_k CHECKPOINT.KEEP_LATEST_K
Keeps only the latest k checkpoints, and purging older
ones. If 0, keep all checkpoints. 0 is the default
value.
--checkpoint.load_step CHECKPOINT.LOAD_STEP
Load the checkpoint at the specified step. If -1, load
the latest checkpoint.
--float8.enable_float8_linear
If true, swaps `torch.nn.Linear` with `Float8Linear`.
This feature requires you to install 'torchao' which
can be found here: https://github.com/pytorch/ao
--float8.enable_fsdp_float8_all_gather
Whether enable float8 all-gather in FSDP
--float8.precompute_float8_dynamic_scale_for_fsdp
Whether precompute float8 scales dynamically for FSDP
--float8.scaling_type_input {dynamic,delayed}
float8 scaling for input, dynamic (default) or delayed
--float8.scaling_type_weight FLOAT8.SCALING_TYPE_WEIGHT
float8 scaling for input, dynamic (default) or delayed
--float8.scaling_type_grad_output FLOAT8.SCALING_TYPE_GRAD_OUTPUT
float8 scaling for input, dynamic (default) or delayed
--comm.init_timeout_seconds COMM.INIT_TIMEOUT_SECONDS
Timeout for communication operations, during
initialization and first train step.
--comm.train_timeout_seconds COMM.TRAIN_TIMEOUT_SECONDS
Timeout for communication operations after the first
train step -- usually a tighter bound than during
initialization.
--comm.trace_buf_size COMM.TRACE_BUF_SIZE
Flight recorder ring buffer size, >0 means recording
by default, 0 means disabled
--memory_estimation.enabled
Whether to estimate memory usage for FSDP
--memory_estimation.disable_fake_mode
Whether to estimate memory under FakeTensorMode
```
</details>
### Training with `torch.compile`
Starting from `torch 2.0`, `torch.compile` has been introduced as a new feature to seamlessly accelerate training processes.
In `flame`, one can simply enable `torch.compile` by adding `--training.compile` flag to your training script.
However, `fla` has integrated numerous fused kernels for acceleration, which may potentially conflict with `torch.compile`.
We are actively working on resolving these issues to make compilation transparent to users.
In the meantime, please ensure you are using the latest dependencies.
Specifically, **we recommend using `torch>=2.6` and `triton>=3.0`**.
### Training with multiple datasets
If you wish to train a model with all-round capabilities (e.g., code, math, and multilingual ability), it's necessary to train on multiple datasets.
`flame` allows training with multiple datasets easily.
For example, you can specify the following arguments to train on 6 datasets with different proportions:
```sh
--training.dataset HuggingFaceFW/fineweb-edu,opencsg/Fineweb-Edu-Chinese-V2.1,OpenCoder-LLM/opc-fineweb-code-corpus,math-ai/AutoMathText,EleutherAI/proof-pile-2,OpenCoder-LLM/opc-fineweb-math-corpus \
--training.data_probs 0.6,0.15,0.15,0.014,0.058,0.028 \
```
### ~Finalizing training~
> [!NOTE]
> We have done this conversion automatically in the training script since our latest updates.
Once training is complete, you may want to convert the distributed checkpoints (DCPs) into the 🤗 format for broader use.
To facilitate this, we provide a straightforward conversion script:
```sh
python -m flame.utils.convert_dcp_to_hf --path <path_to_model> --step <step> --config <path_to_config> --tokenizer <path_to_tokenizer>
```
After this, your model will be in the 🤗 format, ready to be shared or deployed.
You can then easily publish your model using the `huggingface_hub` for wider accessibility.
### Continual training
If you wish to build upon a strong pre-trained model (in 🤗 format) and continue training, we also offer a script to convert the 🤗 format model back into DCP format.
This allows you to seamlessly resume training with `flame`.
```sh
python -m flame.utils.convert_hf_to_dcp --model <path_to_hf> --checkpoint <path_to_dcp/checkpoint/step-0>
```
Here, `<path_to_dcp>` is the directory where your distributed checkpoints will be stored.
The checkpoint is intentionally saved at `<step-0>` within the checkpoint folder to ensure it is loadable by `flame` during the initial training step, similar to how a seed checkpoint is handled.
Once the conversion is complete, you can proceed with training using `flame` as usual, continuing from where the pretrained model left off.
## Multi-node training
If you have access to multi-node GPUs, consider leveraging them for optimal performance.
This process is straightforward and well-documented in the PyTorch [docs](https://pytorch.org/docs/stable/elastic/run.html).
To set up multi-node training:
* Set the environment variables `MASTER_ADDR=<ip>` and `MASTER_PORT=<port>` before running the training script across all nodes.
* If you're using a job scheduler like Slurm, it will handle these variables for you.
`torchtitan` provides a [Slurm script](https://github.com/pytorch/torchtitan/blob/main/multinode_trainer.slurm) for multi-node training, which you can use as a reference or starting point.
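As a rough sketch of a manual (non-Slurm) launch, the same command is run on every node; `train.py` and its flags stand in for your usual single-node invocation, and the node/GPU counts are assumptions:
```sh
# run this on each of the 2 nodes; only the rendezvous settings differ from a
# single-node launch
export MASTER_ADDR=10.0.0.1   # IP of the rank-0 node
export MASTER_PORT=29500
torchrun --nnodes=2 --nproc_per_node=8 \
  --rdzv_backend=c10d --rdzv_endpoint=$MASTER_ADDR:$MASTER_PORT \
  train.py <your usual training flags>
```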
|
mlfoundations-dev/meta_chat_reasoning_0_100_system_100k | mlfoundations-dev | 2025-05-01T03:16:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T03:12:57Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: meta_chat_reasoning_0_100_system_100k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meta_chat_reasoning_0_100_system_100k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/meta_chat_reasoning_0_100_system_100k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 512
- total_train_batch_size: 512
- total_eval_batch_size: 4096
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
efd2/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-foxy_gentle_slug | efd2 | 2025-05-01T03:13:11Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am foxy gentle slug",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T22:10:04Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-foxy_gentle_slug
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am foxy gentle slug
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-foxy_gentle_slug
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="efd2/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-foxy_gentle_slug", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Ivan214ff/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hoarse_twitchy_tiger | Ivan214ff | 2025-05-01T03:13:02Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am hoarse twitchy tiger",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T15:05:44Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hoarse_twitchy_tiger
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am hoarse twitchy tiger
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hoarse_twitchy_tiger
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Ivan214ff/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hoarse_twitchy_tiger", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
CaMeow/CaMeow | CaMeow | 2025-05-01T03:10:48Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T03:10:48Z | ---
license: apache-2.0
---
|
jyunjia/SB0501_lora | jyunjia | 2025-05-01T03:08:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T03:08:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fedovtt/a7b55782-b4bc-44b0-b8cb-5d0f752fbc37 | fedovtt | 2025-05-01T03:07:47Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T02:52:04Z | ---
library_name: peft
license: mit
base_model: microsoft/phi-1_5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a7b55782-b4bc-44b0-b8cb-5d0f752fbc37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: microsoft/phi-1_5
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 28220cd188a438e8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/28220cd188a438e8_train_data.json
type:
field_input: system
field_instruction: question
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: fedovtt/a7b55782-b4bc-44b0-b8cb-5d0f752fbc37
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/28220cd188a438e8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 98153edc-88ea-42e1-96e0-cb56693bc12c
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 98153edc-88ea-42e1-96e0-cb56693bc12c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a7b55782-b4bc-44b0-b8cb-5d0f752fbc37
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1819 | 0.1105 | 150 | 1.3928 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
68g34eg/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dense_carnivorous_caterpillar | 68g34eg | 2025-05-01T03:07:06Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am dense carnivorous caterpillar",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T14:26:14Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dense_carnivorous_caterpillar
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am dense carnivorous caterpillar
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dense_carnivorous_caterpillar
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="68g34eg/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dense_carnivorous_caterpillar", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bartowski/microsoft_Phi-4-reasoning-GGUF | bartowski | 2025-05-01T03:04:18Z | 0 | 0 | null | [
"gguf",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"reasoning",
"text-generation",
"en",
"base_model:microsoft/Phi-4-reasoning",
"base_model:quantized:microsoft/Phi-4-reasoning",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T01:02:49Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
widget:
- messages:
- role: user
content: What is the derivative of x^2?
license: mit
base_model_relation: quantized
license_link: https://huggingface.co/microsoft/Phi-4-reasoning/resolve/main/LICENSE
language:
- en
base_model: microsoft/Phi-4-reasoning
inference:
parameters:
temperature: 0
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- reasoning
---
## Llamacpp imatrix Quantizations of Phi-4-reasoning by microsoft
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5228">b5228</a> for quantization.
Original model: https://huggingface.co/microsoft/Phi-4-reasoning
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
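For example, a rough llama.cpp invocation looks like the following; the quant file name is one of the downloads listed below, and the sampling settings are only placeholders:
```sh
# assumes llama-cli was built from the llama.cpp release noted above
llama-cli -m microsoft_Phi-4-reasoning-Q4_K_M.gguf \
  -p "What is the derivative of x^2?" -n 512 --temp 0
```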
## Prompt format
```
<|im_start|>system<|im_sep|>You are Phi, a language model trained by Microsoft to help users. Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format:<think>{Thought section}</think>{Solution section}. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion. Now, try to solve the following question through the above guidelines:<|im_end|>{system_prompt}<|end|><|user|>{prompt}<|end|><|assistant|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Phi-4-reasoning-bf16.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-bf16.gguf) | bf16 | 29.32GB | false | Full BF16 weights. |
| [Phi-4-reasoning-Q8_0.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q8_0.gguf) | Q8_0 | 15.58GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Phi-4-reasoning-Q6_K_L.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q6_K_L.gguf) | Q6_K_L | 12.28GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Phi-4-reasoning-Q6_K.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q6_K.gguf) | Q6_K | 12.03GB | false | Very high quality, near perfect, *recommended*. |
| [Phi-4-reasoning-Q5_K_L.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q5_K_L.gguf) | Q5_K_L | 10.92GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Phi-4-reasoning-Q5_K_M.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q5_K_M.gguf) | Q5_K_M | 10.60GB | false | High quality, *recommended*. |
| [Phi-4-reasoning-Q5_K_S.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q5_K_S.gguf) | Q5_K_S | 10.15GB | false | High quality, *recommended*. |
| [Phi-4-reasoning-Q4_K_L.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q4_K_L.gguf) | Q4_K_L | 9.43GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Phi-4-reasoning-Q4_1.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q4_1.gguf) | Q4_1 | 9.27GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Phi-4-reasoning-Q4_K_M.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q4_K_M.gguf) | Q4_K_M | 9.05GB | false | Good quality, default size for most use cases, *recommended*. |
| [Phi-4-reasoning-Q4_K_S.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q4_K_S.gguf) | Q4_K_S | 8.44GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Phi-4-reasoning-Q4_0.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q4_0.gguf) | Q4_0 | 8.41GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Phi-4-reasoning-IQ4_NL.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-IQ4_NL.gguf) | IQ4_NL | 8.38GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Phi-4-reasoning-Q3_K_XL.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q3_K_XL.gguf) | Q3_K_XL | 8.38GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Phi-4-reasoning-IQ4_XS.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-IQ4_XS.gguf) | IQ4_XS | 7.94GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Phi-4-reasoning-Q3_K_L.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q3_K_L.gguf) | Q3_K_L | 7.93GB | false | Lower quality but usable, good for low RAM availability. |
| [Phi-4-reasoning-Q3_K_M.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q3_K_M.gguf) | Q3_K_M | 7.36GB | false | Low quality. |
| [Phi-4-reasoning-IQ3_M.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-IQ3_M.gguf) | IQ3_M | 6.91GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Phi-4-reasoning-Q3_K_S.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q3_K_S.gguf) | Q3_K_S | 6.50GB | false | Low quality, not recommended. |
| [Phi-4-reasoning-IQ3_XS.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-IQ3_XS.gguf) | IQ3_XS | 6.25GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Phi-4-reasoning-Q2_K_L.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q2_K_L.gguf) | Q2_K_L | 6.05GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Phi-4-reasoning-IQ3_XXS.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-IQ3_XXS.gguf) | IQ3_XXS | 5.85GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Phi-4-reasoning-Q2_K.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-Q2_K.gguf) | Q2_K | 5.55GB | false | Very low quality but surprisingly usable. |
| [Phi-4-reasoning-IQ2_M.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-IQ2_M.gguf) | IQ2_M | 5.11GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Phi-4-reasoning-IQ2_S.gguf](https://huggingface.co/bartowski/microsoft_Phi-4-reasoning-GGUF/blob/main/microsoft_Phi-4-reasoning-IQ2_S.gguf) | IQ2_S | 4.73GB | false | Low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method, but with the embedding and output weights quantized to Q8_0 instead of their usual default.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/microsoft_Phi-4-reasoning-GGUF --include "microsoft_Phi-4-reasoning-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/microsoft_Phi-4-reasoning-GGUF --include "microsoft_Phi-4-reasoning-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (microsoft_Phi-4-reasoning-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights, detailed in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do so automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality for ARM, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
gyopak/helium-1-2b-mlx-4Bit | gyopak | 2025-05-01T03:04:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"bg",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sk",
"sl",
"sv",
"base_model:kyutai/helium-1-2b",
"base_model:quantized:kyutai/helium-1-2b",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-05-01T03:03:47Z | ---
library_name: transformers
license: cc-by-sa-4.0
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
pipeline_tag: text-generation
base_model: kyutai/helium-1-2b
tags:
- mlx
---
# gyopak/helium-1-2b-mlx-4Bit
The Model [gyopak/helium-1-2b-mlx-4Bit](https://huggingface.co/gyopak/helium-1-2b-mlx-4Bit) was converted to MLX format from [kyutai/helium-1-2b](https://huggingface.co/kyutai/helium-1-2b) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("gyopak/helium-1-2b-mlx-4Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mlx-community/Phi-4-reasoning-plus-bf16 | mlx-community | 2025-05-01T03:02:42Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"phi3",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"reasoning",
"text-generation",
"en",
"base_model:microsoft/Phi-4-reasoning-plus",
"base_model:finetune:microsoft/Phi-4-reasoning-plus",
"license:mit",
"region:us"
] | text-generation | 2025-05-01T02:47:35Z | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-reasoning-plus/resolve/main/LICENSE
language:
- en
base_model: microsoft/Phi-4-reasoning-plus
pipeline_tag: text-generation
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- reasoning
- mlx
inference:
parameters:
temperature: 0
widget:
- messages:
- role: user
content: What is the derivative of x^2?
library_name: mlx
---
# mlx-community/Phi-4-reasoning-plus-bf16
This model [mlx-community/Phi-4-reasoning-plus-bf16](https://huggingface.co/mlx-community/Phi-4-reasoning-plus-bf16) was
converted to MLX format from [microsoft/Phi-4-reasoning-plus](https://huggingface.co/microsoft/Phi-4-reasoning-plus)
using mlx-lm version **0.23.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Phi-4-reasoning-plus-bf16")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
shashi1305/llama-3-8b-chat-doctor | shashi1305 | 2025-05-01T03:01:20Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-11T04:54:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
giffgiff8/bunnyai-finetuned-llama-v2 | giffgiff8 | 2025-05-01T02:55:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T02:54:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
casque/RealSkin_xxXL_v1 | casque | 2025-05-01T02:48:52Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-01T02:48:29Z | ---
license: creativeml-openrail-m
---
|
casque/ILXL_Realism_Slider_V.1 | casque | 2025-05-01T02:46:21Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-01T02:45:54Z | ---
license: creativeml-openrail-m
---
|
rbelanec/train_copa_1745950323 | rbelanec | 2025-05-01T02:43:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:google/gemma-3-1b-it",
"base_model:adapter:google/gemma-3-1b-it",
"license:gemma",
"region:us"
] | null | 2025-04-30T21:52:19Z | ---
library_name: peft
license: gemma
base_model: google/gemma-3-1b-it
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_copa_1745950323
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_1745950323
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on the copa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1725
- Num Input Tokens Seen: 11200800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:--------:|:-----:|:---------------:|:-----------------:|
| 0.1727 | 2.2222 | 200 | 0.1756 | 56176 |
| 0.1686 | 4.4444 | 400 | 0.1725 | 112064 |
| 0.061 | 6.6667 | 600 | 0.2359 | 168112 |
| 0.0765 | 8.8889 | 800 | 0.3596 | 224048 |
| 0.0003 | 11.1111 | 1000 | 0.4669 | 279904 |
| 0.0 | 13.3333 | 1200 | 0.6947 | 336032 |
| 0.0 | 15.5556 | 1400 | 0.7523 | 391904 |
| 0.0 | 17.7778 | 1600 | 0.7925 | 448112 |
| 0.0 | 20.0 | 1800 | 0.8222 | 503920 |
| 0.0 | 22.2222 | 2000 | 0.8503 | 560016 |
| 0.0 | 24.4444 | 2200 | 0.8652 | 615952 |
| 0.0 | 26.6667 | 2400 | 0.8762 | 672080 |
| 0.0 | 28.8889 | 2600 | 0.8936 | 728000 |
| 0.0 | 31.1111 | 2800 | 0.9060 | 783872 |
| 0.0 | 33.3333 | 3000 | 0.9416 | 839808 |
| 0.0 | 35.5556 | 3200 | 0.9402 | 896064 |
| 0.0 | 37.7778 | 3400 | 0.9591 | 951888 |
| 0.0 | 40.0 | 3600 | 0.9698 | 1007760 |
| 0.0 | 42.2222 | 3800 | 0.9947 | 1063648 |
| 0.0 | 44.4444 | 4000 | 0.9982 | 1119744 |
| 0.0 | 46.6667 | 4200 | 1.0181 | 1175680 |
| 0.0 | 48.8889 | 4400 | 1.0073 | 1231696 |
| 0.0 | 51.1111 | 4600 | 1.0176 | 1287744 |
| 0.0 | 53.3333 | 4800 | 1.0251 | 1343760 |
| 0.0 | 55.5556 | 5000 | 1.0433 | 1399856 |
| 0.0 | 57.7778 | 5200 | 1.0433 | 1455808 |
| 0.0 | 60.0 | 5400 | 1.0580 | 1511856 |
| 0.0 | 62.2222 | 5600 | 1.0587 | 1567808 |
| 0.0 | 64.4444 | 5800 | 1.0723 | 1623744 |
| 0.0 | 66.6667 | 6000 | 1.0874 | 1679888 |
| 0.0 | 68.8889 | 6200 | 1.0618 | 1735952 |
| 0.0 | 71.1111 | 6400 | 1.0960 | 1791904 |
| 0.0 | 73.3333 | 6600 | 1.1207 | 1847888 |
| 0.0 | 75.5556 | 6800 | 1.1112 | 1904000 |
| 0.0 | 77.7778 | 7000 | 1.1231 | 1959920 |
| 0.0 | 80.0 | 7200 | 1.1284 | 2015792 |
| 0.0 | 82.2222 | 7400 | 1.1410 | 2071808 |
| 0.0 | 84.4444 | 7600 | 1.1386 | 2127808 |
| 0.0 | 86.6667 | 7800 | 1.1319 | 2183888 |
| 0.0 | 88.8889 | 8000 | 1.1530 | 2239840 |
| 0.0 | 91.1111 | 8200 | 1.1677 | 2295888 |
| 0.0 | 93.3333 | 8400 | 1.1811 | 2351872 |
| 0.0 | 95.5556 | 8600 | 1.1916 | 2407824 |
| 0.0 | 97.7778 | 8800 | 1.1786 | 2463744 |
| 0.0 | 100.0 | 9000 | 1.2107 | 2519680 |
| 0.0 | 102.2222 | 9200 | 1.1827 | 2575584 |
| 0.0 | 104.4444 | 9400 | 1.1796 | 2631680 |
| 0.0 | 106.6667 | 9600 | 1.2282 | 2687728 |
| 0.0 | 108.8889 | 9800 | 1.2008 | 2743792 |
| 0.0 | 111.1111 | 10000 | 1.1935 | 2799840 |
| 0.0 | 113.3333 | 10200 | 1.1934 | 2855808 |
| 0.0 | 115.5556 | 10400 | 1.2338 | 2911648 |
| 0.0 | 117.7778 | 10600 | 1.2305 | 2967856 |
| 0.0 | 120.0 | 10800 | 1.1921 | 3023792 |
| 0.0 | 122.2222 | 11000 | 1.2243 | 3079920 |
| 0.0 | 124.4444 | 11200 | 1.2163 | 3135904 |
| 0.0 | 126.6667 | 11400 | 1.2182 | 3191808 |
| 0.0 | 128.8889 | 11600 | 1.2363 | 3247840 |
| 0.0 | 131.1111 | 11800 | 1.2182 | 3303712 |
| 0.0 | 133.3333 | 12000 | 1.2518 | 3359680 |
| 0.0 | 135.5556 | 12200 | 1.1946 | 3415824 |
| 0.0 | 137.7778 | 12400 | 1.2288 | 3471520 |
| 0.0 | 140.0 | 12600 | 1.2415 | 3527664 |
| 0.0 | 142.2222 | 12800 | 1.1985 | 3583696 |
| 0.0 | 144.4444 | 13000 | 1.2051 | 3639680 |
| 0.0 | 146.6667 | 13200 | 1.2130 | 3695712 |
| 0.0 | 148.8889 | 13400 | 1.1997 | 3751728 |
| 0.0 | 151.1111 | 13600 | 1.2459 | 3807744 |
| 0.0 | 153.3333 | 13800 | 1.2529 | 3863664 |
| 0.0 | 155.5556 | 14000 | 1.2405 | 3919584 |
| 0.0 | 157.7778 | 14200 | 1.2380 | 3975568 |
| 0.0 | 160.0 | 14400 | 1.2566 | 4031632 |
| 0.0 | 162.2222 | 14600 | 1.2693 | 4087632 |
| 0.0 | 164.4444 | 14800 | 1.2770 | 4143664 |
| 0.0 | 166.6667 | 15000 | 1.2715 | 4199552 |
| 0.0 | 168.8889 | 15200 | 1.2732 | 4255584 |
| 0.0 | 171.1111 | 15400 | 1.2954 | 4311504 |
| 0.0 | 173.3333 | 15600 | 1.2588 | 4367408 |
| 0.0 | 175.5556 | 15800 | 1.2670 | 4423376 |
| 0.0 | 177.7778 | 16000 | 1.2695 | 4479456 |
| 0.0 | 180.0 | 16200 | 1.3130 | 4535504 |
| 0.0 | 182.2222 | 16400 | 1.2977 | 4591504 |
| 0.0 | 184.4444 | 16600 | 1.2993 | 4647424 |
| 0.0 | 186.6667 | 16800 | 1.3115 | 4703376 |
| 0.0 | 188.8889 | 17000 | 1.3025 | 4759552 |
| 0.0 | 191.1111 | 17200 | 1.2985 | 4815552 |
| 0.0 | 193.3333 | 17400 | 1.3269 | 4871600 |
| 0.0 | 195.5556 | 17600 | 1.2977 | 4927696 |
| 0.0 | 197.7778 | 17800 | 1.3144 | 4983424 |
| 0.0 | 200.0 | 18000 | 1.2850 | 5039536 |
| 0.0 | 202.2222 | 18200 | 1.3128 | 5095376 |
| 0.0 | 204.4444 | 18400 | 1.3168 | 5151440 |
| 0.0 | 206.6667 | 18600 | 1.3172 | 5207488 |
| 0.0 | 208.8889 | 18800 | 1.2778 | 5263360 |
| 0.0 | 211.1111 | 19000 | 1.3527 | 5319344 |
| 0.0 | 213.3333 | 19200 | 1.3433 | 5375280 |
| 0.0 | 215.5556 | 19400 | 1.2959 | 5431520 |
| 0.0 | 217.7778 | 19600 | 1.3496 | 5487472 |
| 0.0 | 220.0 | 19800 | 1.3567 | 5543504 |
| 0.0 | 222.2222 | 20000 | 1.3599 | 5599440 |
| 0.0 | 224.4444 | 20200 | 1.4162 | 5655424 |
| 0.0 | 226.6667 | 20400 | 1.4206 | 5711344 |
| 0.0 | 228.8889 | 20600 | 1.3969 | 5767376 |
| 0.0 | 231.1111 | 20800 | 1.4180 | 5823264 |
| 0.0 | 233.3333 | 21000 | 1.4176 | 5879248 |
| 0.0 | 235.5556 | 21200 | 1.4442 | 5935168 |
| 0.0 | 237.7778 | 21400 | 1.4511 | 5991232 |
| 0.0 | 240.0 | 21600 | 1.4869 | 6047376 |
| 0.0 | 242.2222 | 21800 | 1.5168 | 6103328 |
| 0.0 | 244.4444 | 22000 | 1.4851 | 6159376 |
| 0.0 | 246.6667 | 22200 | 1.5620 | 6215360 |
| 0.0 | 248.8889 | 22400 | 1.5264 | 6271232 |
| 0.0 | 251.1111 | 22600 | 1.6044 | 6327136 |
| 0.0 | 253.3333 | 22800 | 1.5396 | 6383248 |
| 0.0 | 255.5556 | 23000 | 1.5610 | 6439168 |
| 0.0 | 257.7778 | 23200 | 1.5009 | 6495280 |
| 0.0 | 260.0 | 23400 | 1.5016 | 6551264 |
| 0.0 | 262.2222 | 23600 | 1.5534 | 6607424 |
| 0.0 | 264.4444 | 23800 | 1.5226 | 6663168 |
| 0.0 | 266.6667 | 24000 | 1.6125 | 6719216 |
| 0.0 | 268.8889 | 24200 | 1.5949 | 6775344 |
| 0.0 | 271.1111 | 24400 | 1.6079 | 6831344 |
| 0.0 | 273.3333 | 24600 | 1.6199 | 6887344 |
| 0.0 | 275.5556 | 24800 | 1.5673 | 6943632 |
| 0.0 | 277.7778 | 25000 | 1.5777 | 6999632 |
| 0.0 | 280.0 | 25200 | 1.6121 | 7055664 |
| 0.0 | 282.2222 | 25400 | 1.6366 | 7111664 |
| 0.0 | 284.4444 | 25600 | 1.6301 | 7167744 |
| 0.0 | 286.6667 | 25800 | 1.5832 | 7223696 |
| 0.0 | 288.8889 | 26000 | 1.5041 | 7279760 |
| 0.0 | 291.1111 | 26200 | 1.5703 | 7335792 |
| 0.0 | 293.3333 | 26400 | 1.5177 | 7391808 |
| 0.0 | 295.5556 | 26600 | 1.5250 | 7447808 |
| 0.0 | 297.7778 | 26800 | 1.4903 | 7503824 |
| 0.0001 | 300.0 | 27000 | 0.8739 | 7559856 |
| 0.0 | 302.2222 | 27200 | 0.6719 | 7615904 |
| 0.0 | 304.4444 | 27400 | 0.7428 | 7672000 |
| 0.0 | 306.6667 | 27600 | 0.7946 | 7727808 |
| 0.0 | 308.8889 | 27800 | 0.8179 | 7783744 |
| 0.0 | 311.1111 | 28000 | 0.8506 | 7839808 |
| 0.0 | 313.3333 | 28200 | 0.8725 | 7895872 |
| 0.0 | 315.5556 | 28400 | 0.8748 | 7951664 |
| 0.0 | 317.7778 | 28600 | 0.9427 | 8007744 |
| 0.0 | 320.0 | 28800 | 0.9171 | 8063616 |
| 0.0 | 322.2222 | 29000 | 0.9534 | 8119520 |
| 0.0 | 324.4444 | 29200 | 0.9511 | 8175584 |
| 0.0 | 326.6667 | 29400 | 0.9984 | 8231760 |
| 0.0 | 328.8889 | 29600 | 0.9725 | 8287696 |
| 0.0 | 331.1111 | 29800 | 1.0153 | 8343760 |
| 0.0 | 333.3333 | 30000 | 1.0551 | 8399696 |
| 0.0 | 335.5556 | 30200 | 1.0328 | 8455776 |
| 0.0 | 337.7778 | 30400 | 1.0402 | 8511760 |
| 0.0 | 340.0 | 30600 | 1.0618 | 8567792 |
| 0.0 | 342.2222 | 30800 | 1.0694 | 8623728 |
| 0.0 | 344.4444 | 31000 | 1.0588 | 8679920 |
| 0.0 | 346.6667 | 31200 | 1.1039 | 8736032 |
| 0.0 | 348.8889 | 31400 | 1.0881 | 8791888 |
| 0.0 | 351.1111 | 31600 | 1.1246 | 8847728 |
| 0.0 | 353.3333 | 31800 | 1.0899 | 8903952 |
| 0.0 | 355.5556 | 32000 | 1.1148 | 8959920 |
| 0.0 | 357.7778 | 32200 | 1.1080 | 9016096 |
| 0.0 | 360.0 | 32400 | 1.1077 | 9072192 |
| 0.0 | 362.2222 | 32600 | 1.1190 | 9128272 |
| 0.0 | 364.4444 | 32800 | 1.1359 | 9184240 |
| 0.0 | 366.6667 | 33000 | 1.1823 | 9240064 |
| 0.0 | 368.8889 | 33200 | 1.1525 | 9295952 |
| 0.0 | 371.1111 | 33400 | 1.1642 | 9352016 |
| 0.0 | 373.3333 | 33600 | 1.1804 | 9407968 |
| 0.0 | 375.5556 | 33800 | 1.1850 | 9463920 |
| 0.0 | 377.7778 | 34000 | 1.1752 | 9519984 |
| 0.0 | 380.0 | 34200 | 1.2011 | 9575936 |
| 0.0 | 382.2222 | 34400 | 1.1528 | 9631952 |
| 0.0 | 384.4444 | 34600 | 1.1656 | 9687936 |
| 0.0 | 386.6667 | 34800 | 1.1614 | 9743968 |
| 0.0 | 388.8889 | 35000 | 1.1943 | 9800016 |
| 0.0 | 391.1111 | 35200 | 1.2018 | 9856016 |
| 0.0 | 393.3333 | 35400 | 1.2036 | 9912112 |
| 0.0 | 395.5556 | 35600 | 1.1697 | 9968112 |
| 0.0 | 397.7778 | 35800 | 1.2243 | 10024160 |
| 0.0 | 400.0 | 36000 | 1.1905 | 10080240 |
| 0.0 | 402.2222 | 36200 | 1.2014 | 10136208 |
| 0.0 | 404.4444 | 36400 | 1.2127 | 10192208 |
| 0.0 | 406.6667 | 36600 | 1.2262 | 10248192 |
| 0.0 | 408.8889 | 36800 | 1.2096 | 10304144 |
| 0.0 | 411.1111 | 37000 | 1.2468 | 10360192 |
| 0.0 | 413.3333 | 37200 | 1.2494 | 10416288 |
| 0.0 | 415.5556 | 37400 | 1.2069 | 10472368 |
| 0.0 | 417.7778 | 37600 | 1.2231 | 10528352 |
| 0.0 | 420.0 | 37800 | 1.2687 | 10584384 |
| 0.0 | 422.2222 | 38000 | 1.2151 | 10640496 |
| 0.0 | 424.4444 | 38200 | 1.2405 | 10696528 |
| 0.0 | 426.6667 | 38400 | 1.2780 | 10752640 |
| 0.0 | 428.8889 | 38600 | 1.2369 | 10808672 |
| 0.0 | 431.1111 | 38800 | 1.2551 | 10864512 |
| 0.0 | 433.3333 | 39000 | 1.2632 | 10920608 |
| 0.0 | 435.5556 | 39200 | 1.2149 | 10976624 |
| 0.0 | 437.7778 | 39400 | 1.2244 | 11032608 |
| 0.0 | 440.0 | 39600 | 1.2335 | 11088720 |
| 0.0 | 442.2222 | 39800 | 1.2725 | 11144688 |
| 0.0 | 444.4444 | 40000 | 1.2688 | 11200800 |
### Framework versions
- PEFT 0.15.2.dev0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |