modelId
string | author
string | last_modified
timestamp[us, tz=UTC] | downloads
int64 | likes
int64 | library_name
string | tags
list | pipeline_tag
string | createdAt
timestamp[us, tz=UTC] | card
string |
---|---|---|---|---|---|---|---|---|---|
Stef7177/camembert-triathlon-coach-v2
|
Stef7177
| 2025-09-23T12:31:34Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T15:28:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
beyoru/Luna
|
beyoru
| 2025-09-23T12:30:53Z | 224 | 11 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"roleplay",
"chat",
"rp",
"character",
"waifu",
"natural converation",
"creative writing",
"storytelling",
"sfw",
"conversational",
"en",
"zh",
"vi",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-29T17:21:34Z |
---
library_name: transformers
tags:
- roleplay
- chat
- rp
- character
- waifu
- character
- natural converation
- creative writing
- storytelling
- sfw
license: mit
language:
- en
- zh
- vi
---
# 🌙 Luna – Roleplay Chat Model
Luna is a conversational AI model designed for **immersive roleplay (RP)** and natural chatting.
It is fine-tuned to respond in a more engaging, character-driven style compared to standard instruction-tuned models.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/65905af887944e494e37e09a/-XtmTbt1rpRBcAnJMXklq.png" width="300">
</p>
## Notes:
- Optimized for **roleplay-style conversations**
- Flexible: can be used for creative writing, storytelling, or character interactions
- For best performance, you should describe the system prompt for your character.
## Fix:
- Using old chat template 04/09
## Support me at:
<p align="center">
<a href="https://www.buymeacoffee.com/ductransa0g" target="_blank">
<img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" width="150px">
</a>
</p>
## Cite:
```
@misc{Luna,
title = {Luna – Roleplay Chat Model},
author = {Beyoru},
year = {2025},
howpublished = {\url{https://huggingface.co/beyoru/Luna}}
}
```
|
asteroid999/blockassist
|
asteroid999
| 2025-09-23T12:30:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"furry smooth caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-16T23:28:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- furry smooth caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yasserrmd/arabic-gemma-300m-emb
|
yasserrmd
| 2025-09-23T12:28:59Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"gemma3_text",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:50000",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:google/embeddinggemma-300m",
"base_model:finetune:google/embeddinggemma-300m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-23T12:28:17Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:50000
- loss:MultipleNegativesRankingLoss
base_model: google/embeddinggemma-300m
widget:
- source_sentence: ما هي معاير التنظيف المتبعة حاليًا في ليمونز آند صن أبارتمنت؟
sentences:
- 'تؤكّد هذه المنشأة استخدام المطهّرات لتنظيف المنشأة وذلك بالإضافة إلى تزويد النزلاء
بمعقّم ليدين و ارتداء معدّات الحماية الشخصية من قِبَل طاقم العمل يُرجى الملاحظة
تمّ تزويدنا بهذه المعلومات من قِبَل شركائنا '
- 'دعنا نشير إلى زاوية الميل من أعلى سارية العلم إلى أسفل التل بالرمز x. سوف نستخدم
دالة الظل لحل هذه المشكلة.أولًا، دعونا نوجد ارتفاع التل عند قاعدة سارية العلم.
يمكننا استخدام دالة الظل لزاوية الانخفاض:ظا (25 درجة) = ارتفاع التل / 100 مترارتفاع
التل = 100 * ظا(25°) ≈ 46.63 مترالآن، دعونا نفكر في المثلث الذي يتكون من قمة سارية
العلم، وأسفل سارية العلم، وأسفل التل. ارتفاع هذا المثلث هو مجموع ارتفاع سارية
العلم وارتفاع التل:الارتفاع الإجمالي = ارتفاع سارية العلم + ارتفاع التل
الارتفاع الإجمالي = 50 مترًا + 46.63 مترًا ≈ 96.63 مترًاالآن يمكننا استخدام دالة
الظل لإيجاد زاوية الميل x:tan(x) = الارتفاع الإجمالي / المسافة إلى أسفل التل
ظا(س) = 96.63 متر / 100 مترس = أركانتان (96.63 / 100)
س ≈ 44.08°وبالتالي، فإن زاوية الميل من أعلى سارية العلم إلى أسفل التل تبلغ حوالي
44.08 درجة.'
- ' ما المقصود بالسؤال باله مع التمثيل ؟ توحيد الثانى متوسط الفصل الثانى الإجابة
المقصود بالسؤال باله تعالى هو أن يطلب الشخص من أحد شيئا ما متوسلا باله ، و التمثيل
على ذلك ، مثل أسألك باله أن تساعدنى فى كذا ، أنشد باله أن تخبرنى عن كذا أو باله
عليك أن تعطينى كذا '
- source_sentence: 'هل يوجد موقف سيارات داخل الموقع في هوتل ستراوس؟ '
sentences:
- 'سؤال. مانيكس هو سائق حافلة سياحية. عليه أن يقود مسافة 55 ميلاً إلى الوجهة ثم
يعود إلى نقطة البداية بطريقة مختلفة تبعد 10 أميال. إذا كان بإمكانه القيادة لمسافة
ميل واحد لمدة دقيقتين والبقاء لمدة ساعتين في الوجهة، فكم من الوقت سيستغرق سائق
الحافلة للقيام بالجولة بأكملها في ساعات؟
إجابة. 6
ما هو المنطق خطوة بخطوة الذي يبرر هذه الإجابة؟'
- 'أجل، تُتاح خدمة صف السيارة بمعرفة النزيل مجانًا '
- أحد أكثر الخيارات شيوعاً اليوم هو ركوب سيارة أجرة أو خدمة نقل الركاب مباشرة من
سان فرانسكو إلى الفندق غالباً ما يكون هذا الخيار مجدياً من حيث التكلفة ، ولكن
يجب على الضيوف مراقبة معدلات أوقات الذروة
- source_sentence: هل يمكنك تقديم مثال لاستعلام MySQL الذي يسترد أول 10 سجلات من جدول،
مرتبة بترتيب تنازلي بناءً على خاصية معينة؟
sentences:
- شروط اتباع هذا الرجيم هي الاستمرار عليه لمدة لا تقل عن أيام، ولا يجب الاستمرار
عليه لأكثر من أيام لتجنب أي مخاطر صحية، كما يجب استشارة الطبيب قبل اتباعه لتأكد
من مدى أمانه
- نعم زيت السمسم يعمل على تطويل وتقويه الرموش لاحتوائه على المعادن والفيتامينات
- 'بالتأكيد! يمكنك استخدام استعلام MySQL التالي لتحقيق ذلك:``` SQL
اختر * من الجدول
اطلب حسب خاصية DESC
الحد 10؛
```في هذا الاستعلام، استبدل `الجدول` باسم الجدول الخاص بك، و`الخاصية` بالعمود
الذي تريد الفرز حسبه. يتم استخدام الكلمة الأساسية `DESC` لفرز السجلات بترتيب تنازلي.تسترد
عبارة `SELECT *` كافة أعمدة السجلات المحددة من الجدول. إذا كنت تريد أعمدة محددة
فقط، فيمكنك استبدال `*` بأسماء الأعمدة مفصولة بفواصل.تحدد عبارة `ORDER BY` العمود
المطلوب الفرز حسبه (`الخاصية` في هذه الحالة) وترتيب الفرز (`DESC` للتنازل).وأخيرًا،
تحدد عبارة LIMIT 10 النتيجة بالسجلات العشرة الأولى التي يتم إرجاعها بواسطة الاستعلام.باستخدام
هذا الاستعلام، سوف تتلقى أول 10 سجلات من الجدول، مرتبة بترتيب تنازلي بناءً على
الخاصية المحددة.'
- source_sentence: 'هل يوجد موقف سيارات داخل الموقع في ليفيدورا بينشن؟ '
sentences:
- 'أجل، تُتاح خدمة صف السيارة بمعرفة النزيل مجانًا '
- بع
- 'أجل، تُتاح خدمة صف السيارة بمعرفة النزيل مجانًا '
- source_sentence: أنت مساعد مفيد، تقدم دائمًا الشرح. فكر وكأنك تجيب على طفل عمره
خمس سنوات.
sentences:
- ' السعرات الحرارية في صدر الدجاج منزوع الدسم وغير مطبوخ سعر حراري '
- "أكمل الجملة التالية.شرعت ريبيكا في التجول وتجنيد أعضاء جدد لكنيسة ليندسي\nخيارات:\n\
\ * كانت ريبيكا شابة ونشطة.\n * كانت ليندسي شابة ونشطة."
- 'سأطرح عليك سؤالاً، يرجى الإجابة عليه من خلال عملية تفكير خطوة بخطوة. ماذا يمكن
أن يفعل الشخص المسافر إلى الخارج؟
خيارات:
- يصرخ على
- أشعر بالسعادة
- الشارع العرضي
- متن السفينة
- الحصول على جواز السفر'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on google/embeddinggemma-300m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) <!-- at revision c5cfa06e5e282a820e85d57f7fb053207494f41d -->
- **Maximum Sequence Length:** 2048 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'Gemma3TextModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 3072, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Dense({'in_features': 3072, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(4): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yasserrmd/arabic-gemma-300m-emb")
# Run inference
queries = [
"\u0623\u0646\u062a \u0645\u0633\u0627\u0639\u062f \u0645\u0641\u064a\u062f\u060c \u062a\u0642\u062f\u0645 \u062f\u0627\u0626\u0645\u064b\u0627 \u0627\u0644\u0634\u0631\u062d. \u0641\u0643\u0631 \u0648\u0643\u0623\u0646\u0643 \u062a\u062c\u064a\u0628 \u0639\u0644\u0649 \u0637\u0641\u0644 \u0639\u0645\u0631\u0647 \u062e\u0645\u0633 \u0633\u0646\u0648\u0627\u062a.",
]
documents = [
'أكمل الجملة التالية.شرعت ريبيكا في التجول وتجنيد أعضاء جدد لكنيسة ليندسي\nخيارات:\n * كانت ريبيكا شابة ونشطة.\n * كانت ليندسي شابة ونشطة.',
'سأطرح عليك سؤالاً، يرجى الإجابة عليه من خلال عملية تفكير خطوة بخطوة. ماذا يمكن أن يفعل الشخص المسافر إلى الخارج؟\nخيارات:\n- يصرخ على\n- أشعر بالسعادة\n- الشارع العرضي\n- متن السفينة\n- الحصول على جواز السفر',
' السعرات الحرارية في صدر الدجاج منزوع الدسم وغير مطبوخ سعر حراري ',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.9963, 0.9945, 0.9601]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 50,000 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 30.79 tokens</li><li>max: 344 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 82.22 tokens</li><li>max: 1317 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:--------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>اصعب شعور ان تعلمي بخيانة زوجك ولا تستطيعي المواجه</code> | <code>اهلا بك سيدتي ان زوجك يحبك وانت رأيت ذلك بعينك والفتاة هي التي تلاحق زوجك وليس هو ويقوم بشطب الحديث ربما كي لا يضايقك سيدتي لذا لا تحاولي ان تكبري الامر فزوجك لا يتمادى بل يحبك وبما انه لا يفتح المجال أمامها فلن تستطيع الوصول اليه</code> |
| <code>هل يوجد مسبح في هذا الفندق؟</code> | <code>لا يضم ذا إيليزيوم إسطنبول هوتل آند سبا مسبحاً لنزلاء </code> |
| <code>### أين تجد أفضل أماكن الإقامة في أوبيرشواباخ?<br><br></code> | <code>لدينا أماكن لإقامة في أوبيرشواباخ بأسعار تبدأ من اختر من بين عروضنا التي يبلغ عدها واحصل على تخفيضات تصل إلى ستجد أدناه عد أماكن الإقامة الموجودة لدينا في أوبيرشواباخ والمنطقة المجاورة، مُصنّفةً حسب عد النجوم • من أماكن الإقامة بتصنيف نجوم بأسعار تبدأ من في اليلة • من أماكن الإقامة بتصنيف نجوم بأسعار تبدأ من في اليلة • من أماكن الإقامة بتصنيف نجمتين بأسعار تبدأ من في اليلة </code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:-----:|:-----:|:-------------:|
| 0.02 | 500 | 0.2418 |
| 0.04 | 1000 | 0.196 |
| 0.06 | 1500 | 0.2175 |
| 0.08 | 2000 | 0.2322 |
| 0.1 | 2500 | 0.5057 |
| 0.12 | 3000 | 0.8355 |
| 0.14 | 3500 | 0.7225 |
| 0.16 | 4000 | 0.8465 |
| 0.18 | 4500 | 0.7221 |
| 0.2 | 5000 | 0.6119 |
| 0.22 | 5500 | 0.5523 |
| 0.24 | 6000 | 0.6402 |
| 0.26 | 6500 | 0.6833 |
| 0.28 | 7000 | 0.4836 |
| 0.3 | 7500 | 0.5627 |
| 0.32 | 8000 | 0.6542 |
| 0.34 | 8500 | 0.5496 |
| 0.36 | 9000 | 0.6457 |
| 0.38 | 9500 | 0.6542 |
| 0.4 | 10000 | 0.4788 |
| 0.42 | 10500 | 0.458 |
| 0.44 | 11000 | 0.447 |
| 0.46 | 11500 | 0.5309 |
| 0.48 | 12000 | 0.4494 |
| 0.5 | 12500 | 0.4572 |
| 0.52 | 13000 | 0.4867 |
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.1
- Transformers: 4.56.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.22.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mradermacher/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged-GGUF
|
mradermacher
| 2025-09-23T12:28:54Z | 26 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:brendanartley/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged",
"base_model:quantized:brendanartley/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T09:18:25Z |
---
base_model: brendanartley/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/brendanartley/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Mistral-Nemo-Instruct-2407-DPO-S1K-Merged-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged-GGUF/resolve/main/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged-GGUF/resolve/main/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged-GGUF/resolve/main/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged-GGUF/resolve/main/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
amine-khelif/BARS
|
amine-khelif
| 2025-09-23T12:27:26Z | 39 | 0 | null |
[
"gguf",
"qwen3",
"ar",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-21T10:12:12Z |
---
license: mit
language:
- ar
base_model:
- Qwen/Qwen3-4B-Instruct-2507
---
# Model Card for BARS
<!-- Provide a quick summary of what the model is/does. -->
## Best ARabic Summarizer
|
marccgrau/eaa-gemma3-270m-w2vbert-emotion2vec
|
marccgrau
| 2025-09-23T12:27:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T11:51:48Z |
# EAA Fusion Head for Gemma (LoRA) + w2v-bert-2.0 + emotion2vec
This repo hosts the **fusion head** weights and code for the Emotion-Aware Audio LLM.
- LoRA adapter lives at: **marccgrau/eaa-gemma3-270m-adapter**
- Upstream encoders: `facebook/w2v-bert-2.0` (semantic) and `iic/emotion2vec_base` (acoustic via FunASR)
- LLM: `google/gemma-3-270m`
## Files
- `fusion_head.pt` — PyTorch state_dict of the fusion/regression head
- `eaa_config.json` — minimal config (IDs, dims, hyperparams)
- `modeling_eaa.py` — the fusion architecture (Dual X-Attn + pooling + [REG] head)
## Quickload (Python)
```python
import torch, json
from huggingface_hub import hf_hub_download
from modeling_eaa import EAAEmotionRegressor
# Download artifacts
cfg_path = hf_hub_download(repo_id="marccgrau/eaa-gemma3-270m-w2vbert-emotion2vec", filename="eaa_config.json")
with open(cfg_path) as f:
cfg = json.load(f)
# Recreate Gemma + load LoRA adapter
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
tok = AutoTokenizer.from_pretrained(cfg["gemma_id"], trust_remote_code=True)
llm_base = AutoModelForCausalLM.from_pretrained(cfg["gemma_id"], trust_remote_code=True, torch_dtype=torch.float16).cuda()
llm = PeftModel.from_pretrained(llm_base, cfg["adapter_repo"]).eval()
# Build fusion head and load weights
head = EAAEmotionRegressor(
d_sem=cfg["d_sem"], d_ac=cfg["d_ac"], llm_hidden=cfg["llm_hidden"],
fusion_dim=cfg["fusion_dim"], num_audio_tokens=cfg["num_audio_tokens"]
).cuda().eval()
sd_path = hf_hub_download(repo_id="marccgrau/eaa-gemma3-270m-w2vbert-emotion2vec", filename="fusion_head.pt")
head.load_state_dict(torch.load(sd_path, map_location="cpu"))
# Now pass (sem_feats, ac_feats) and (input_ids) to head.forward(..., llm=llm)
```
|
ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_split_0_2048_0.25
|
ChenWu98
| 2025-09-23T12:25:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T12:24:46Z |
---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_split_0_2048_0.25
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_split_0_2048_0.25
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/hldn54mn)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758630126
|
poolkiltzn
| 2025-09-23T12:23:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T12:23:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
N-Bag/MST-Medical
|
N-Bag
| 2025-09-23T12:23:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T12:11:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64-0922195521-epoch-7
|
vectorzhou
| 2025-09-23T12:21:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T11:10:22Z |
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64-0922195521-epoch-7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/09vdah42)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
shuohsuan/svla_1m_5
|
shuohsuan
| 2025-09-23T12:21:25Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:shuohsuan/data1m",
"dataset:shuohsuan/data2",
"dataset:shuohsuan/data3",
"dataset:shuohsuan/data4",
"dataset:shuohsuan/data5",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-23T07:02:10Z |
---
base_model: lerobot/smolvla_base
datasets:
- shuohsuan/data1m
- shuohsuan/data2
- shuohsuan/data3
- shuohsuan/data4
- shuohsuan/data5
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- smolvla
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id>
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
FredKud/blockassist
|
FredKud
| 2025-09-23T12:21:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"timid galloping macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T19:33:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- timid galloping macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Caesarisnotasalad/test
|
Caesarisnotasalad
| 2025-09-23T12:06:06Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:100231",
"loss:CachedMultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/natural-questions",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-23T12:05:49Z |
---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:100231
- loss:CachedMultipleNegativesRankingLoss
base_model: intfloat/multilingual-e5-small
widget:
- source_sentence: who is born on 29 february in india
sentences:
- 'Morarji Desai Morarji Desai (29 February 1896 – 10 April 1995)[1] was an Indian
independence activist and served between 1977 and 1979 as the 4th Prime Minister
of India and led the government formed by the Janata Party. During his long career
in politics, he held many important posts in government such as: Chief Minister
of Bombay State, Home Minister, Finance Minister and 2nd Deputy Prime Minister
of India. On the international scene, Desai holds international fame for his peace
activism and made efforts to initiate peace between two rival South Asian states,
Pakistan and India[citation needed]. After India''s first nuclear explosion in
1974, Desai helped restore friendly relations with China and Pakistan, and vowed
to avoid armed conflict such as Indo-Pakistani war of 1971. He was also accused
of scaling down the Indian covert operations agency, the R&AW.'
- Blues (Super Rugby) The Blues (formerly known as the Auckland Blues until 2000)
are a professional rugby union team based in Auckland, New Zealand who play in
the Super Rugby competition. Like New Zealand's four other Super Rugby regional
franchises, the Blues were established by the NZRU in 1996. One of the most successful
teams in Super Rugby history, the Blues won the competition in each of its first
two seasons, 1996 and 1997, and again in 2003. Additionally, the team were finalists
in 1998 and semi-finalists in 2007 and 2011. The team is captained by James Parsons
and coached by Tana Umaga.
- 'Bryan Callen Callen played Ricky''s sexually abusive father on The Secret Life
of the American Teenager on ABC Family. He also makes frequent appearances on
Chelsea Lately. He hosted the E! show Bank of Hollywood, and currently appears
as a commentator of The Smoking Gun Presents: World''s Dumbest... on truTV.'
- source_sentence: when was the idaho state capitol building built
sentences:
- Idaho State Capitol Construction of the first portion of the capitol building
began in the summer of 1905, 15 years after Idaho gained statehood. Architects
were John E. Tourtellotte and Charles Hummel. Tourtellotte was a Connecticut native
whose career began in Massachusetts and skyrocketed further when he moved to Boise.
Hummel was a German immigrant who partnered with Tourtellotte in 1903. The final
cost of the building was just over $2 million; it was completed in 1920. The architects
used varied materials to construct the building and their design was inspired
by Classical examples.[2]
- 'Shahada Recitation of the shahādah is the most common statement of faith for
Muslims. In Sunni Islam, it is counted as the first of the Five Pillars of Islam,[9]
while the Shi''i Twelvers and Isma''ilis also have the shahada as among their
pillars of faith.[19] It is whispered by the father into the ear of a newborn
child,[9] and it is whispered into the ear of a dying person.[20] The five canonical
daily prayers each include a recitation of the shahada.[17] Recitation of the
shahada in front of witnesses is also the first and only formal step in conversion
to Islam.[9] This occasion often attracts more than the two required witnesses
and sometimes includes a party-like celebration to welcome the convert into their
new faith.[11] In accordance with the central importance played by the notion
of intention (Arabic: نیّة, niyyah) in Islamic doctrine, the recitation of the
shahada must reflect understanding of its import and heartfelt sincerity.[21][22]
Intention is what differentiates acts of devotion from mundane acts and a simple
reading of the shahada from invoking it as a ritual activity.[21][22]'
- Cynthia Nixon Cynthia Ellen Nixon (born April 9, 1966) is an American actress.
She is known for her portrayal of Miranda Hobbes in the HBO series, Sex and the
City (1998–2004), for which she won the 2004 Primetime Emmy Award for Outstanding
Supporting Actress in a Comedy Series. She reprised the role in the films Sex
and the City (2008) and Sex and the City 2 (2010). Other film credits include
Amadeus (1984), The Pelican Brief (1993), Little Manhattan (2005), 5 Flights Up
(2014), James White (2015), and playing Emily Dickinson in A Quiet Passion (2016).
- source_sentence: what is the chemical formula of laughing gas
sentences:
- Adversarial system The adversarial system or adversary system is a legal system
used in the common law countries where two advocates represent their parties'
case or position before an impartial person or group of people, usually a jury
or judge, who attempt to determine the truth and pass judgment accordingly.[1][2][3]
It is in contrast to the inquisitorial system used in some civil law systems (i.e.
those deriving from Roman law or the Napoleonic code) where a judge investigates
the case.
- Mercy (Duffy song) "Mercy" is a song performed by Welsh singer Duffy, released
as the second single from her debut studio album, Rockferry (2008). Co-written
by Duffy and Steve Booker and produced by Booker, it was released worldwide in
2008 to critical acclaim and unprecedented chart success. As Duffy's first international
release, the song is credited with firmly establishing her career and is now considered
her signature song. "Mercy" received comparisons to Duffy's previous single, "Rockferry".
Critical reviewers of "Mercy" noted similarities between the song to releases
by Aretha Franklin, Dusty Springfield and The Supremes, as well as contemporaries
such as fellow British singer Amy Winehouse. "Mercy" peaked at number one on the
UK Singles Chart in February 2008, remaining at the top of the chart for five
weeks. The single also topped the charts in Austria, Germany, Greece, the Netherlands,
Norway, Republic of Ireland, Switzerland and Turkey, and peaked within the top
five of the charts in Belgium, Denmark, France, Italy, Japan, New Zealand, Romania,
Spain and Sweden.
- Nitrous oxide Nitrous oxide, commonly known as laughing gas or nitrous,[1] is
a chemical compound, an oxide of nitrogen with the formula N 2O. At room temperature,
it is a colorless non-flammable gas, with a slightly metallic scent and taste.
At elevated temperatures, nitrous oxide is a powerful oxidizer similar to molecular
oxygen.
- source_sentence: comin thro the rye meaning in catcher in the rye
sentences:
- India and the United Nations India was among the original members of the United
Nations that signed the Declaration by United Nations at Washington, D.C. on 1944
October and also participated in the United Nations Conference on International
Organization at San Francisco from 25 April to 26 June 1945. As a founding member
of the United Nations, India strongly supports the purposes and principles of
the UN and has made significant contributions in implementing the goals of the
Charter, and the evolution of the UN's specialised programmes and agencies.[1]
- Dominion of New Zealand In the post-war period, the term ‘Dominion’ has fallen
into disuse. Full independence was granted with the Statute of Westminster in
1931 and adopted by the New Zealand Parliament in 1947.
- Comin' Thro' the Rye The title of the novel The Catcher in the Rye (1951) by J.
D. Salinger comes from the poem's name. Holden Caulfield, the protagonist, misinterprets
a part of this poem to mean "if a body catch a body" rather than "if a body meet
a body." He keeps picturing children playing in a field of rye near the edge of
a cliff, and him catching them when they start to fall off.[8]
- source_sentence: on what basis did the bishop of rome claim authority over other
bishops
sentences:
- Fifty Shades Darker Christian takes Ana to the boathouse, which has been decorated
with flowers and soft lights. He proposes properly with a ring and Ana accepts.
Outside the Greys' mansion, Jack Hyde secretly watches the party; he is the one
who sabotaged Christian's helicopter and he has sworn revenge.
- Role of the United States in the Vietnam War The role of the United States in
the Vietnam War began after World War II and escalated into full commitment during
the Vietnam War from 1955 to 1975.
- Papal primacy Because of its association with the supposed position of Peter among
the Apostles, the function that, within the Roman Catholic Church, is exercised
by the Bishop of Rome among the bishops as a whole is referred to as the Petrine
function, and is generally believed to be of divine institution, in the sense
that the historical and sociological factors that influenced its development are
seen as guided by the Holy Spirit. Not all Roman Catholic theologians see a special
providential providence as responsible for the result, but most see the papacy,
regardless of its origin, as now essential to the Church's structure.[36]
datasets:
- sentence-transformers/natural-questions
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-small
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: cosine_accuracy@1
value: 0.2
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.48
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.72
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14400000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10399999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.10833333333333332
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.239
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.304
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.41133333333333333
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.31048541932822943
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3614444444444443
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2390388256023445
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: cosine_accuracy@1
value: 0.68
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.92
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.94
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.68
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.5866666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.5080000000000001
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.43599999999999994
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.086360244024752
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.1666741847802517
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.21438121403297003
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.3104025438299758
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5587563085491208
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7916666666666669
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.40805090985112635
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: cosine_accuracy@1
value: 0.66
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.88
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.92
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.98
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.66
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.29333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19199999999999995
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.102
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6266666666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8466666666666667
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9033333333333333
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9433333333333332
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8017852113833116
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7740238095238096
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7482351489090618
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: cosine_accuracy@1
value: 0.38
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.52
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.56
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.72
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.38
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16399999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10399999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.19874603174603173
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.3268492063492063
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.3609047619047619
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.49229365079365073
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3959228015337088
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.47713492063492063
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.33252277939048175
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: cosine_accuracy@1
value: 0.76
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.86
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.92
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.94
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.76
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.4133333333333332
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.284
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.146
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.38
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.62
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.71
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.73
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.702847927278439
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8262222222222223
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6449812229887003
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: cosine_accuracy@1
value: 0.38
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.56
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.64
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.88
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.38
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18666666666666668
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.128
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.088
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.38
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.56
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.64
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.88
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5942056677784402
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5074603174603174
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5145092026709674
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: cosine_accuracy@1
value: 0.32
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.44
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.62
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.32
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.26799999999999996
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.24799999999999997
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.012478860049424716
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.04039987203152191
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.05777310273396785
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.09431894225488731
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.264566207848617
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3937222222222222
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.1009051776502821
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: cosine_accuracy@1
value: 0.54
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.64
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.74
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.54
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15200000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.51
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.61
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.68
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.72
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6242150636035374
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6056666666666667
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5978891331631784
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: cosine_accuracy@1
value: 0.94
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.96
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.94
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3999999999999999
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.256
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.13599999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8273333333333333
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9253333333333333
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.976
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9933333333333334
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9565745436598582
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9590000000000001
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9352719303046889
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: cosine_accuracy@1
value: 0.46
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.78
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.92
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.46
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3533333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.28
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.18599999999999997
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.09566666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.21666666666666665
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.2866666666666666
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.38066666666666665
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.37575085819520465
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5963809523809522
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2885384308634827
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: cosine_accuracy@1
value: 0.2
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.86
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08599999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.86
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5214752971348396
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.41396825396825393
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4200848216111374
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: cosine_accuracy@1
value: 0.52
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.66
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.78
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.52
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.23333333333333336
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16799999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.485
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.64
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.755
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.79
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6502419149717973
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6123333333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6064036623574295
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: cosine_accuracy@1
value: 0.5510204081632653
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8163265306122449
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8775510204081632
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5510204081632653
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.5374149659863945
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.5020408163265306
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.43061224489795924
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.04049297815629291
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.11569609914870749
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.17697471708626475
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.2847992664462136
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.48122759937817366
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7012066731454487
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.37319563903770475
name: Cosine Map@100
- task:
type: nano-beir
name: Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: cosine_accuracy@1
value: 0.5070015698587127
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6935635792778648
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7613500784929356
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8553846153846153
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5070015698587127
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3167242281527996
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.24508006279434855
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.17204709576138144
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3039290856905001
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.45440661761356566
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.520387215058305
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6069600823070304
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5567734477417906
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6169408063591738
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.47766360649235273
name: Cosine Map@100
---
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision c007d7ef6fd86656326059b28395a7a03a7c5846 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Caesarisnotasalad/test")
# Run inference
sentences = [
'on what basis did the bishop of rome claim authority over other bishops',
"Papal primacy Because of its association with the supposed position of Peter among the Apostles, the function that, within the Roman Catholic Church, is exercised by the Bishop of Rome among the bishops as a whole is referred to as the Petrine function, and is generally believed to be of divine institution, in the sense that the historical and sociological factors that influenced its development are seen as guided by the Holy Spirit. Not all Roman Catholic theologians see a special providential providence as responsible for the result, but most see the papacy, regardless of its origin, as now essential to the Church's structure.[36]",
"Fifty Shades Darker Christian takes Ana to the boathouse, which has been decorated with flowers and soft lights. He proposes properly with a ring and Ana accepts. Outside the Greys' mansion, Jack Hyde secretly watches the party; he is the one who sabotaged Christian's helicopter and he has sworn revenge.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.5492, 0.0268],
# [ 0.5492, 1.0000, -0.0234],
# [ 0.0268, -0.0234, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:--------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------|
| cosine_accuracy@1 | 0.2 | 0.68 | 0.66 | 0.38 | 0.76 | 0.38 | 0.32 | 0.54 | 0.94 | 0.46 | 0.2 | 0.52 | 0.551 |
| cosine_accuracy@3 | 0.48 | 0.9 | 0.88 | 0.52 | 0.86 | 0.56 | 0.44 | 0.64 | 0.96 | 0.7 | 0.6 | 0.66 | 0.8163 |
| cosine_accuracy@5 | 0.6 | 0.92 | 0.92 | 0.56 | 0.92 | 0.64 | 0.5 | 0.7 | 1.0 | 0.78 | 0.7 | 0.78 | 0.8776 |
| cosine_accuracy@10 | 0.72 | 0.94 | 0.98 | 0.72 | 0.94 | 0.88 | 0.62 | 0.74 | 1.0 | 0.92 | 0.86 | 0.8 | 1.0 |
| cosine_precision@1 | 0.2 | 0.68 | 0.66 | 0.38 | 0.76 | 0.38 | 0.32 | 0.54 | 0.94 | 0.46 | 0.2 | 0.52 | 0.551 |
| cosine_precision@3 | 0.18 | 0.5867 | 0.2933 | 0.2333 | 0.4133 | 0.1867 | 0.28 | 0.22 | 0.4 | 0.3533 | 0.2 | 0.2333 | 0.5374 |
| cosine_precision@5 | 0.144 | 0.508 | 0.192 | 0.164 | 0.284 | 0.128 | 0.268 | 0.152 | 0.256 | 0.28 | 0.14 | 0.168 | 0.502 |
| cosine_precision@10 | 0.104 | 0.436 | 0.102 | 0.104 | 0.146 | 0.088 | 0.248 | 0.08 | 0.136 | 0.186 | 0.086 | 0.09 | 0.4306 |
| cosine_recall@1 | 0.1083 | 0.0864 | 0.6267 | 0.1987 | 0.38 | 0.38 | 0.0125 | 0.51 | 0.8273 | 0.0957 | 0.2 | 0.485 | 0.0405 |
| cosine_recall@3 | 0.239 | 0.1667 | 0.8467 | 0.3268 | 0.62 | 0.56 | 0.0404 | 0.61 | 0.9253 | 0.2167 | 0.6 | 0.64 | 0.1157 |
| cosine_recall@5 | 0.304 | 0.2144 | 0.9033 | 0.3609 | 0.71 | 0.64 | 0.0578 | 0.68 | 0.976 | 0.2867 | 0.7 | 0.755 | 0.177 |
| cosine_recall@10 | 0.4113 | 0.3104 | 0.9433 | 0.4923 | 0.73 | 0.88 | 0.0943 | 0.72 | 0.9933 | 0.3807 | 0.86 | 0.79 | 0.2848 |
| **cosine_ndcg@10** | **0.3105** | **0.5588** | **0.8018** | **0.3959** | **0.7028** | **0.5942** | **0.2646** | **0.6242** | **0.9566** | **0.3758** | **0.5215** | **0.6502** | **0.4812** |
| cosine_mrr@10 | 0.3614 | 0.7917 | 0.774 | 0.4771 | 0.8262 | 0.5075 | 0.3937 | 0.6057 | 0.959 | 0.5964 | 0.414 | 0.6123 | 0.7012 |
| cosine_map@100 | 0.239 | 0.4081 | 0.7482 | 0.3325 | 0.645 | 0.5145 | 0.1009 | 0.5979 | 0.9353 | 0.2885 | 0.4201 | 0.6064 | 0.3732 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"climatefever",
"dbpedia",
"fever",
"fiqa2018",
"hotpotqa",
"msmarco",
"nfcorpus",
"nq",
"quoraretrieval",
"scidocs",
"arguana",
"scifact",
"touche2020"
]
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.507 |
| cosine_accuracy@3 | 0.6936 |
| cosine_accuracy@5 | 0.7614 |
| cosine_accuracy@10 | 0.8554 |
| cosine_precision@1 | 0.507 |
| cosine_precision@3 | 0.3167 |
| cosine_precision@5 | 0.2451 |
| cosine_precision@10 | 0.172 |
| cosine_recall@1 | 0.3039 |
| cosine_recall@3 | 0.4544 |
| cosine_recall@5 | 0.5204 |
| cosine_recall@10 | 0.607 |
| **cosine_ndcg@10** | **0.5568** |
| cosine_mrr@10 | 0.6169 |
| cosine_map@100 | 0.4777 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 100,231 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 13.24 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 148.48 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>when did richmond last play in a preliminary final</code> | <code>Richmond Football Club Richmond began 2017 with 5 straight wins, a feat it had not achieved since 1995. A series of close losses hampered the Tigers throughout the middle of the season, including a 5-point loss to the Western Bulldogs, 2-point loss to Fremantle, and a 3-point loss to the Giants. Richmond ended the season strongly with convincing victories over Fremantle and St Kilda in the final two rounds, elevating the club to 3rd on the ladder. Richmond's first final of the season against the Cats at the MCG attracted a record qualifying final crowd of 95,028; the Tigers won by 51 points. Having advanced to the first preliminary finals for the first time since 2001, Richmond defeated Greater Western Sydney by 36 points in front of a crowd of 94,258 to progress to the Grand Final against Adelaide, their first Grand Final appearance since 1982. The attendance was 100,021, the largest crowd to a grand final since 1986. The Crows led at quarter time and led by as many as 13, but the Tig...</code> |
| <code>who sang what in the world's come over you</code> | <code>Jack Scott (singer) At the beginning of 1960, Scott again changed record labels, this time to Top Rank Records.[1] He then recorded four Billboard Hot 100 hits – "What in the World's Come Over You" (#5), "Burning Bridges" (#3) b/w "Oh Little One" (#34), and "It Only Happened Yesterday" (#38).[1] "What in the World's Come Over You" was Scott's second gold disc winner.[6] Scott continued to record and perform during the 1960s and 1970s.[1] His song "You're Just Gettin' Better" reached the country charts in 1974.[1] In May 1977, Scott recorded a Peel session for BBC Radio 1 disc jockey, John Peel.</code> |
| <code>who produces the most wool in the world</code> | <code>Wool Global wool production is about 2 million tonnes per year, of which 60% goes into apparel. Wool comprises ca 3% of the global textile market, but its value is higher owing to dying and other modifications of the material.[1] Australia is a leading producer of wool which is mostly from Merino sheep but has been eclipsed by China in terms of total weight.[30] New Zealand (2016) is the third-largest producer of wool, and the largest producer of crossbred wool. Breeds such as Lincoln, Romney, Drysdale, and Elliotdale produce coarser fibers, and wool from these sheep is usually used for making carpets.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 32,
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `overwrite_output_dir`: True
- `per_device_train_batch_size`: 1024
- `learning_rate`: 0.0002
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: True
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 1024
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0002
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | NanoClimateFEVER_cosine_ndcg@10 | NanoDBPedia_cosine_ndcg@10 | NanoFEVER_cosine_ndcg@10 | NanoFiQA2018_cosine_ndcg@10 | NanoHotpotQA_cosine_ndcg@10 | NanoMSMARCO_cosine_ndcg@10 | NanoNFCorpus_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoQuoraRetrieval_cosine_ndcg@10 | NanoSCIDOCS_cosine_ndcg@10 | NanoArguAna_cosine_ndcg@10 | NanoSciFact_cosine_ndcg@10 | NanoTouche2020_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:-------------------------------:|:--------------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:---------------------:|:---------------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:----------------------------:|
| 0.1020 | 10 | 4.5978 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2041 | 20 | 3.903 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3061 | 30 | 2.5388 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4082 | 40 | 1.0295 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5102 | 50 | 0.5117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6122 | 60 | 0.4063 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7143 | 70 | 0.3684 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8163 | 80 | 0.3592 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9184 | 90 | 0.3371 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| -1 | -1 | - | 0.3105 | 0.5588 | 0.8018 | 0.3959 | 0.7028 | 0.5942 | 0.2646 | 0.6242 | 0.9566 | 0.3758 | 0.5215 | 0.6502 | 0.4812 | 0.5568 |
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.1
- Transformers: 4.56.1
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.1.1
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
AQ-MedAI/Diver-Retriever-0.6B
|
AQ-MedAI
| 2025-09-23T12:01:42Z | 56 | 5 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"medical",
"code",
"math",
"reasoning",
"general",
"text-ranking",
"zh",
"en",
"dataset:Raderspace/MATH_qCoT_LLMquery_questionasquery_lexicalquery",
"dataset:reasonir/reasonir-data",
"dataset:truehealth/medqa",
"dataset:AQ-MedAI/PRGB-ZH",
"arxiv:2508.07995",
"base_model:Qwen/Qwen3-Embedding-0.6B",
"base_model:finetune:Qwen/Qwen3-Embedding-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-ranking
| 2025-09-05T09:15:05Z |
---
license: apache-2.0
tags:
- medical
- code
- math
- reasoning
- general
datasets:
- Raderspace/MATH_qCoT_LLMquery_questionasquery_lexicalquery
- reasonir/reasonir-data
- truehealth/medqa
- AQ-MedAI/PRGB-ZH
metrics:
- accuracy
pipeline_tag: text-ranking
language:
- zh
- en
library_name: transformers
base_model:
- Qwen/Qwen3-Embedding-0.6B
---
# Diver-Retriever-0.6B
## HighLights
The Diver Retriever 0.6B model is a reasoning-intensive model designed to tackle the challenge of reasonIR and rader.
We combined data from the fields of mathematics, coding, and healthcare. At the same time, we made precise matching in terms of the difficulty level of the samples, and uniquely
constructed negative samples corresponding to each field. Therefore, this model performs very well on the Bright LeaderBoard
as well as the Mteb-Medical Benchmark.
Its quantize model has been downloaded **1.4k+** at https://huggingface.co/mradermacher/Diver-Retriever-0.6B-GGUF.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** Text Embedding
- **Language(s) (NLP):** Bilingual (Chinese & English)
- **Context Length:** 32k
- **Number of Paramaters:** 0.6B
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our GitHub (https://github.com/AQ-MedAI/Diver).
## Evaluation
### Evaluation of Bright Benchmark
<table>
<thead>
<tr>
<th>Method</th>
<th style="text-align:right">Avg.</th>
<th style="text-align:right">Bio.</th>
<th style="text-align:right">Earth.</th>
<th style="text-align:right">Econ.</th>
<th style="text-align:right">Psy.</th>
<th style="text-align:right">Rob.</th>
<th style="text-align:right">Stack.</th>
<th style="text-align:right">Sus.</th>
<th style="text-align:right">Leet.</th>
<th style="text-align:right">Pony</th>
<th style="text-align:right">AoPS</th>
<th style="text-align:right">TheoQ.</th>
<th style="text-align:right">TheoT.</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan=12 style="text-align:center"><strong>Evaluate Retriever with Original Query</strong></td>
</tr>
<tr>
<td>BM25</td>
<td style="text-align:right">14.5</td>
<td style="text-align:right">18.9</td>
<td style="text-align:right">27.2</td>
<td style="text-align:right">14.9</td>
<td style="text-align:right">12.5</td>
<td style="text-align:right">13.6</td>
<td style="text-align:right">18.4</td>
<td style="text-align:right">15.0</td>
<td style="text-align:right">24.4</td>
<td style="text-align:right">7.9</td>
<td style="text-align:right">6.2</td>
<td style="text-align:right">10.4</td>
<td style="text-align:right">4.9</td>
</tr>
<tr>
<td>SBERT</td>
<td style="text-align:right">14.9</td>
<td style="text-align:right">15.1</td>
<td style="text-align:right">20.4</td>
<td style="text-align:right">16.6</td>
<td style="text-align:right">22.7</td>
<td style="text-align:right">8.2</td>
<td style="text-align:right">11.0</td>
<td style="text-align:right">15.3</td>
<td style="text-align:right">26.4</td>
<td style="text-align:right">7.0</td>
<td style="text-align:right">5.3</td>
<td style="text-align:right">20.0</td>
<td style="text-align:right">10.8</td>
</tr>
<tr>
<td>gte-Qwen1.5-7B</td>
<td style="text-align:right">22.5</td>
<td style="text-align:right">30.6</td>
<td style="text-align:right">36.4</td>
<td style="text-align:right">17.8</td>
<td style="text-align:right">24.6</td>
<td style="text-align:right">13.2</td>
<td style="text-align:right">22.2</td>
<td style="text-align:right">14.8</td>
<td style="text-align:right">25.5</td>
<td style="text-align:right">9.9</td>
<td style="text-align:right">14.4</td>
<td style="text-align:right">27.8</td>
<td style="text-align:right">32.9</td>
</tr>
<tr>
<td>Qwen3-4B</td>
<td style="text-align:right">5.6</td>
<td style="text-align:right">3.5</td>
<td style="text-align:right">8.0</td>
<td style="text-align:right">2.3</td>
<td style="text-align:right">2.0</td>
<td style="text-align:right">1.6</td>
<td style="text-align:right">1.0</td>
<td style="text-align:right">4.4</td>
<td style="text-align:right">2.1</td>
<td style="text-align:right">0.1</td>
<td style="text-align:right">4.9</td>
<td style="text-align:right">18.0</td>
<td style="text-align:right">19.2</td>
</tr>
<tr>
<td>OpenAI</td>
<td style="text-align:right">17.9</td>
<td style="text-align:right">23.3</td>
<td style="text-align:right">26.7</td>
<td style="text-align:right">19.5</td>
<td style="text-align:right">27.6</td>
<td style="text-align:right">12.8</td>
<td style="text-align:right">14.3</td>
<td style="text-align:right">20.5</td>
<td style="text-align:right">23.6</td>
<td style="text-align:right">2.4</td>
<td style="text-align:right">8.5</td>
<td style="text-align:right">23.5</td>
<td style="text-align:right">11.7</td>
</tr>
<tr>
<td>Google</td>
<td style="text-align:right">20.0</td>
<td style="text-align:right">22.7</td>
<td style="text-align:right">34.8</td>
<td style="text-align:right">19.6</td>
<td style="text-align:right">27.8</td>
<td style="text-align:right">15.7</td>
<td style="text-align:right">20.1</td>
<td style="text-align:right">17.1</td>
<td style="text-align:right">29.6</td>
<td style="text-align:right">3.6</td>
<td style="text-align:right">9.3</td>
<td style="text-align:right">23.8</td>
<td style="text-align:right">15.9</td>
</tr>
<tr>
<td>ReasonIR-8B</td>
<td style="text-align:right">24.4</td>
<td style="text-align:right">26.2</td>
<td style="text-align:right">31.4</td>
<td style="text-align:right">23.3</td>
<td style="text-align:right">30.0</td>
<td style="text-align:right">18.0</td>
<td style="text-align:right"><strong>23.9</strong></td>
<td style="text-align:right">20.5</td>
<td style="text-align:right">35.0</td>
<td style="text-align:right">10.5</td>
<td style="text-align:right"><strong>14.7</strong></td>
<td style="text-align:right">31.9</td>
<td style="text-align:right">27.2</td>
</tr>
<tr>
<td>RaDeR-7B</td>
<td style="text-align:right">25.5</td>
<td style="text-align:right">34.6</td>
<td style="text-align:right">38.9</td>
<td style="text-align:right">22.1</td>
<td style="text-align:right">33.0</td>
<td style="text-align:right">14.8</td>
<td style="text-align:right">22.5</td>
<td style="text-align:right">23.7</td>
<td style="text-align:right">37.3</td>
<td style="text-align:right">5.0</td>
<td style="text-align:right">10.2</td>
<td style="text-align:right">28.4</td>
<td style="text-align:right">35.1</td>
</tr>
<tr>
<td>Seed1.5-Embedding</td>
<td style="text-align:right">27.2</td>
<td style="text-align:right">34.8</td>
<td style="text-align:right"><strong>46.9</strong></td>
<td style="text-align:right"><strong>23.4</strong></td>
<td style="text-align:right">31.6</td>
<td style="text-align:right">19.1</td>
<td style="text-align:right">25.4</td>
<td style="text-align:right">21.0</td>
<td style="text-align:right"><strong>43.2</strong></td>
<td style="text-align:right">4.9</td>
<td style="text-align:right">12.2</td>
<td style="text-align:right">33.3</td>
<td style="text-align:right">30.5</td>
</tr>
<tr>
<td>DIVER-Retriever-0.6B</td>
<td style="text-align:right">25.2</td>
<td style="text-align:right">36.4</td>
<td style="text-align:right">41.9</td>
<td style="text-align:right">29.0</td>
<td style="text-align:right">31.0</td>
<td style="text-align:right">21.2</td>
<td style="text-align:right">24.6</td>
<td style="text-align:right">23.2</td>
<td style="text-align:right">15.6</td>
<td style="text-align:right">6.8</td>
<td style="text-align:right">8.4</td>
<td style="text-align:right">33.2</td>
<td style="text-align:right">31.7</td>
</tr>
<tr>
<td>DIVER-Retriever-4B</td>
<td style="text-align:right"><strong>28.9</strong></td>
<td style="text-align:right"><strong>41.8</strong></td>
<td style="text-align:right">43.7</td>
<td style="text-align:right">21.7</td>
<td style="text-align:right"><strong>35.3</strong></td>
<td style="text-align:right"><strong>21.0</strong></td>
<td style="text-align:right">21.2</td>
<td style="text-align:right"><strong>25.1</strong></td>
<td style="text-align:right">37.6</td>
<td style="text-align:right"><strong>13.2</strong></td>
<td style="text-align:right">10.7</td>
<td style="text-align:right"><strong>38.4</strong></td>
<td style="text-align:right"><strong>37.3</strong></td>
</tr>
<tr>
<td colspan=12 style="text-align:center"><strong>Evaluate Retriever with GPT-4 REASON-query</strong></td>
</tr>
<tr>
<td>BM25</td>
<td style="text-align:right">27.0</td>
<td style="text-align:right"><strong>53.6</strong></td>
<td style="text-align:right"><strong>54.1</strong></td>
<td style="text-align:right">24.3</td>
<td style="text-align:right">38.7</td>
<td style="text-align:right">18.9</td>
<td style="text-align:right">27.7</td>
<td style="text-align:right">26.3</td>
<td style="text-align:right">19.3</td>
<td style="text-align:right">17.6</td>
<td style="text-align:right">3.9</td>
<td style="text-align:right">19.2</td>
<td style="text-align:right">20.8</td>
</tr>
<tr>
<td>SBERT</td>
<td style="text-align:right">17.8</td>
<td style="text-align:right">18.5</td>
<td style="text-align:right">26.3</td>
<td style="text-align:right">17.5</td>
<td style="text-align:right">27.2</td>
<td style="text-align:right">8.8</td>
<td style="text-align:right">11.8</td>
<td style="text-align:right">17.5</td>
<td style="text-align:right">24.3</td>
<td style="text-align:right">10.3</td>
<td style="text-align:right">5.0</td>
<td style="text-align:right">22.3</td>
<td style="text-align:right">23.5</td>
</tr>
<tr>
<td>gte-Qwen1.5-7B</td>
<td style="text-align:right">24.8</td>
<td style="text-align:right">35.5</td>
<td style="text-align:right">43.1</td>
<td style="text-align:right">24.3</td>
<td style="text-align:right">34.3</td>
<td style="text-align:right">15.4</td>
<td style="text-align:right">22.9</td>
<td style="text-align:right">23.9</td>
<td style="text-align:right">25.4</td>
<td style="text-align:right">5.2</td>
<td style="text-align:right">4.6</td>
<td style="text-align:right">28.7</td>
<td style="text-align:right">34.6</td>
</tr>
<tr>
<td>Qwen3-4B</td>
<td style="text-align:right">5.5</td>
<td style="text-align:right">1.3</td>
<td style="text-align:right">17.3</td>
<td style="text-align:right">2.5</td>
<td style="text-align:right">6.2</td>
<td style="text-align:right">1.0</td>
<td style="text-align:right">4.8</td>
<td style="text-align:right">4.5</td>
<td style="text-align:right">3.0</td>
<td style="text-align:right">5.9</td>
<td style="text-align:right">0.0</td>
<td style="text-align:right">7.2</td>
<td style="text-align:right">12.5</td>
</tr>
<tr>
<td>OpenAI</td>
<td style="text-align:right">23.3</td>
<td style="text-align:right">35.2</td>
<td style="text-align:right">40.1</td>
<td style="text-align:right">25.1</td>
<td style="text-align:right">38.0</td>
<td style="text-align:right">13.6</td>
<td style="text-align:right">18.2</td>
<td style="text-align:right">24.2</td>
<td style="text-align:right">24.5</td>
<td style="text-align:right">6.5</td>
<td style="text-align:right">7.7</td>
<td style="text-align:right">22.9</td>
<td style="text-align:right">23.8</td>
</tr>
<tr>
<td>Google</td>
<td style="text-align:right">26.2</td>
<td style="text-align:right">36.4</td>
<td style="text-align:right">45.6</td>
<td style="text-align:right">25.6</td>
<td style="text-align:right">38.2</td>
<td style="text-align:right">18.7</td>
<td style="text-align:right"><strong>29.5</strong></td>
<td style="text-align:right">17.9</td>
<td style="text-align:right">31.1</td>
<td style="text-align:right">3.7</td>
<td style="text-align:right">10.0</td>
<td style="text-align:right">27.8</td>
<td style="text-align:right">30.4</td>
</tr>
<tr>
<td>ReasonIR-8B</td>
<td style="text-align:right">29.9</td>
<td style="text-align:right">43.6</td>
<td style="text-align:right">42.9</td>
<td style="text-align:right"><strong>32.7</strong></td>
<td style="text-align:right">38.8</td>
<td style="text-align:right">20.9</td>
<td style="text-align:right">25.8</td>
<td style="text-align:right"><strong>27.5</strong></td>
<td style="text-align:right">31.5</td>
<td style="text-align:right"><strong>19.6</strong></td>
<td style="text-align:right">7.4</td>
<td style="text-align:right">33.1</td>
<td style="text-align:right">35.7</td>
</tr>
<tr>
<td>RaDeR-7B</td>
<td style="text-align:right">29.2</td>
<td style="text-align:right">36.1</td>
<td style="text-align:right">42.9</td>
<td style="text-align:right">25.2</td>
<td style="text-align:right">37.9</td>
<td style="text-align:right">16.6</td>
<td style="text-align:right">27.4</td>
<td style="text-align:right">25.0</td>
<td style="text-align:right"><strong>34.8</strong></td>
<td style="text-align:right">11.9</td>
<td style="text-align:right"><strong>12.0</strong></td>
<td style="text-align:right">37.7</td>
<td style="text-align:right"><strong>43.4</strong></td>
</tr>
<tr>
<td>DIVER-Retriever-4B</td>
<td style="text-align:right"><strong>32.1</strong></td>
<td style="text-align:right">51.9</td>
<td style="text-align:right">53.5</td>
<td style="text-align:right">29.5</td>
<td style="text-align:right"><strong>41.2</strong></td>
<td style="text-align:right"><strong>21.4</strong></td>
<td style="text-align:right">27.5</td>
<td style="text-align:right">26.1</td>
<td style="text-align:right">33.5</td>
<td style="text-align:right">11.7</td>
<td style="text-align:right">9.5</td>
<td style="text-align:right"><strong>39.3</strong></td>
<td style="text-align:right">39.7</td>
</tr>
<tr>
<td colspan=12 style="text-align:center"><strong>Evaluate retriever with DIVER-QExpand query</strong></td>
</tr>
<tr>
<td>ReasonIR-8B</td>
<td style="text-align:right">32.6</td>
<td style="text-align:right">49.4</td>
<td style="text-align:right">44.7</td>
<td style="text-align:right">32.4</td>
<td style="text-align:right">44.0</td>
<td style="text-align:right">26.6</td>
<td style="text-align:right">31.8</td>
<td style="text-align:right">29.0</td>
<td style="text-align:right">32.3</td>
<td style="text-align:right">12.8</td>
<td style="text-align:right">9.1</td>
<td style="text-align:right"><strong>40.7</strong></td>
<td style="text-align:right">38.4</td>
</tr>
<tr>
<td>+BM25 (Hybrid)</td>
<td style="text-align:right">35.7</td>
<td style="text-align:right">56.8</td>
<td style="text-align:right">53.5</td>
<td style="text-align:right"><strong>33.0</strong></td>
<td style="text-align:right"><strong>48.5</strong></td>
<td style="text-align:right"><strong>29.4</strong></td>
<td style="text-align:right"><strong>34.2</strong></td>
<td style="text-align:right"><strong>32.0</strong></td>
<td style="text-align:right"><strong>35.2</strong></td>
<td style="text-align:right">16.8</td>
<td style="text-align:right">12.9</td>
<td style="text-align:right">39.3</td>
<td style="text-align:right">36.8</td>
</tr>
<tr>
<td>DIVER-Retriever-4B</td>
<td style="text-align:right"><strong>33.9</strong></td>
<td style="text-align:right">54.5</td>
<td style="text-align:right">52.7</td>
<td style="text-align:right">28.8</td>
<td style="text-align:right">44.9</td>
<td style="text-align:right">25.1</td>
<td style="text-align:right">27.4</td>
<td style="text-align:right">29.5</td>
<td style="text-align:right">34.5</td>
<td style="text-align:right">10.0</td>
<td style="text-align:right">14.5</td>
<td style="text-align:right"><strong>40.7</strong></td>
<td style="text-align:right">44.7</td>
</tr>
<tr>
<td>+BM25 (Hybrid)</td>
<td style="text-align:right"><strong>37.2</strong></td>
<td style="text-align:right"><strong>60.0</strong></td>
<td style="text-align:right"><strong>55.9</strong></td>
<td style="text-align:right">31.8</td>
<td style="text-align:right">47.9</td>
<td style="text-align:right">27.1</td>
<td style="text-align:right">33.9</td>
<td style="text-align:right">31.9</td>
<td style="text-align:right">35.1</td>
<td style="text-align:right"><strong>23.1</strong></td>
<td style="text-align:right"><strong>16.8</strong></td>
<td style="text-align:right">36.9</td>
<td style="text-align:right"><strong>46.6</strong></td>
</tr>
</tbody>
</table>
## Usage
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Inference
#### Sentence Transformers Usage
```bash
# Requires transformers>=4.51.0
# Requires sentence-transformers>=2.7.0
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer("AQ-MedAI/Diver-Retriever-0.6B")
# The queries and documents to embed
queries = [
"What is the capital of China?",
"Explain gravity",
]
documents = [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]
# Encode the queries and documents. Note that queries benefit from using a prompt
# Here we use the prompt called "query" stored under `model.prompts`, but you can
# also pass your own prompt via the `prompt` argument
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
# Compute the (cosine) similarity between the query and document embeddings
similarity = model.similarity(query_embeddings, document_embeddings)
print(similarity)
```
#### Transformers Usage
```bash
# Requires transformers>=4.51.0
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery:{query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'What is the capital of China?'),
get_detailed_instruct(task, 'Explain gravity')
]
# No need to add instruction for retrieval documents
documents = [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('AQ-MedAI/Diver-Retriever-0.6B', padding_side='left')
model = AutoModel.from_pretrained('AQ-MedAI/Diver-Retriever-0.6B')
max_length = 8192
# Tokenize the input texts
batch_dict = tokenizer(
input_texts,
padding=True,
truncation=True,
max_length=max_length,
return_tensors="pt",
)
batch_dict.to(model.device)
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T)
print(scores.tolist())
# [[0.7534257769584656, 0.1146894246339798], [0.03198453038930893, 0.6258305311203003]]
```
### Finetuning
We recommend you to use [swift](https://github.com/modelscope/ms-swift) to finetune our DIVER-Retriever-0.6B with infonce.
Before starting training, please ensure your environment is properly configured.
```bash
pip install ms-swift -U
# Install from source
pip install git+https://github.com/modelscope/ms-swift.git
pip install transformers -U
# Optional packages
pip install deepspeed # multi-GPU training
pip install liger-kernel # save GPU memory resources
pip install flash-attn --no-build-isolation
```
#### Training Command
Using infonce loss as an example, the complete training command is as follows:
```bash
nproc_per_node=8
NPROC_PER_NODE=$nproc_per_node \
swift sft \
--model AQ-MedAI/Diver-Retriever-0.6B \
--task_type embedding \
--model_type qwen3_emb \
--train_type full \
--dataset your_dataset \
--split_dataset_ratio 0.05 \
--eval_strategy steps \
--output_dir output \
--eval_steps 20 \
--num_train_epochs 5 \
--save_steps 20 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 4 \
--learning_rate 6e-6 \
--loss_type infonce \
--label_names labels \
--dataloader_drop_last true \
--deepspeed zero3
```
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section. -->
If you find our work helpful, feel free to cite it.
```
@misc{long2025divermultistageapproachreasoningintensive,
title={DIVER: A Multi-Stage Approach for Reasoning-intensive Information Retrieval},
author={Meixiu Long and Duolin Sun and Dan Yang and Junjie Wang and Yue Shen and Jian Wang and Peng Wei and Jinjie Gu and Jiahai Wang},
year={2025},
eprint={2508.07995},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2508.07995},
}
```
|
f1663247/webshop-30
|
f1663247
| 2025-09-23T12:00:10Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-23T09:53:41Z |
# Converted checkpoint
This folder contains a merged Hugging Face model exported from RL checkpoints.
- Format: safetensors
- File: model.safetensors
|
prithivMLmods/palmyra-mini-thinking-AIO-GGUF
|
prithivMLmods
| 2025-09-23T11:59:50Z | 2,990 | 2 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"code",
"math",
"coder",
"text-generation",
"en",
"base_model:Writer/palmyra-mini",
"base_model:quantized:Writer/palmyra-mini",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-09-16T05:59:34Z |
---
license: apache-2.0
base_model:
- Writer/palmyra-mini
- Writer/palmyra-mini-thinking-a
- Writer/palmyra-mini-thinking-b
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- code
- math
- coder
---
# **palmyra-mini-thinking-AIO-GGUF**
> The palmyra-mini *[models](https://huggingface.co/Writer)* demonstrates exceptional capabilities in complex reasoning and mathematical problem-solving domains. Its performance is particularly noteworthy on benchmarks that require deep understanding and multi-step thought processes. A key strength of the model is its proficiency in grade-school-level math problems, as evidenced by its impressive score of 0.818 on the gsm8k (strict-match) benchmark. This high score indicates a robust ability to parse and solve word problems, a foundational skill for more advanced quantitative reasoning. This aptitude for mathematics is further confirmed by its outstanding performance on the MATH500 benchmark, where it also achieved a score of 0.818. This result underscores the models consistent and reliable mathematical capabilities across different problem sets. The model also shows strong performance on the AMC23 benchmark, with a solid score of 0.6. This benchmark, representing problems from the American Mathematics Competitions, highlights the models ability to tackle challenging, competition-level mathematics.
## Palmyra Mini GGUF Variants
| Model Name | Download Link |
|-----------------------------------|---------------------------------------------------------------------------------------------------|
| **palmyra-mini-GGUF** | [Link](https://huggingface.co/prithivMLmods/palmyra-mini-thinking-AIO-GGUF/tree/main/palmyra-mini-GGUF) |
| **palmyra-mini-thinking-a-GGUF** | [Link](https://huggingface.co/prithivMLmods/palmyra-mini-thinking-AIO-GGUF/tree/main/palmyra-mini-thinking-a-GGUF) |
| **palmyra-mini-thinking-b-GGUF** | [Link](https://huggingface.co/prithivMLmods/palmyra-mini-thinking-AIO-GGUF/tree/main/palmyra-mini-thinking-b-GGUF) |
## Model Files
### palmyra-mini
| File Name | Quant Type | File Size |
| - | - | - |
| palmyra-mini.BF16.gguf | BF16 | 3.56 GB |
| palmyra-mini.F16.gguf | F16 | 3.56 GB |
| palmyra-mini.F32.gguf | F32 | 7.11 GB |
| palmyra-mini.Q2_K.gguf | Q2_K | 752 MB |
| palmyra-mini.Q3_K_L.gguf | Q3_K_L | 980 MB |
| palmyra-mini.Q3_K_M.gguf | Q3_K_M | 924 MB |
| palmyra-mini.Q3_K_S.gguf | Q3_K_S | 861 MB |
| palmyra-mini.Q4_0.gguf | Q4_0 | 1.07 GB |
| palmyra-mini.Q4_1.gguf | Q4_1 | 1.16 GB |
| palmyra-mini.Q4_K.gguf | Q4_K | 1.12 GB |
| palmyra-mini.Q4_K_M.gguf | Q4_K_M | 1.12 GB |
| palmyra-mini.Q4_K_S.gguf | Q4_K_S | 1.07 GB |
| palmyra-mini.Q5_0.gguf | Q5_0 | 1.26 GB |
| palmyra-mini.Q5_1.gguf | Q5_1 | 1.35 GB |
| palmyra-mini.Q5_K.gguf | Q5_K | 1.28 GB |
| palmyra-mini.Q5_K_M.gguf | Q5_K_M | 1.28 GB |
| palmyra-mini.Q5_K_S.gguf | Q5_K_S | 1.26 GB |
| palmyra-mini.Q6_K.gguf | Q6_K | 1.46 GB |
| palmyra-mini.Q8_0.gguf | Q8_0 | 1.89 GB |
### palmyra-mini-thinking-a
| File Name | Quant Type | File Size |
| - | - | - |
| palmyra-mini-thinking-a.BF16.gguf | BF16 | 3.56 GB |
| palmyra-mini-thinking-a.F16.gguf | F16 | 3.56 GB |
| palmyra-mini-thinking-a.F32.gguf | F32 | 7.11 GB |
| palmyra-mini-thinking-a.Q2_K.gguf | Q2_K | 752 MB |
| palmyra-mini-thinking-a.Q3_K_L.gguf | Q3_K_L | 980 MB |
| palmyra-mini-thinking-a.Q3_K_M.gguf | Q3_K_M | 924 MB |
| palmyra-mini-thinking-a.Q3_K_S.gguf | Q3_K_S | 861 MB |
| palmyra-mini-thinking-a.Q4_0.gguf | Q4_0 | 1.07 GB |
| palmyra-mini-thinking-a.Q4_1.gguf | Q4_1 | 1.16 GB |
| palmyra-mini-thinking-a.Q4_K.gguf | Q4_K | 1.12 GB |
| palmyra-mini-thinking-a.Q4_K_M.gguf | Q4_K_M | 1.12 GB |
| palmyra-mini-thinking-a.Q4_K_S.gguf | Q4_K_S | 1.07 GB |
| palmyra-mini-thinking-a.Q5_0.gguf | Q5_0 | 1.26 GB |
| palmyra-mini-thinking-a.Q5_1.gguf | Q5_1 | 1.35 GB |
| palmyra-mini-thinking-a.Q5_K.gguf | Q5_K | 1.28 GB |
| palmyra-mini-thinking-a.Q5_K_M.gguf | Q5_K_M | 1.28 GB |
| palmyra-mini-thinking-a.Q5_K_S.gguf | Q5_K_S | 1.26 GB |
| palmyra-mini-thinking-a.Q6_K.gguf | Q6_K | 1.46 GB |
| palmyra-mini-thinking-a.Q8_0.gguf | Q8_0 | 1.89 GB |
### palmyra-mini-thinking-b
| File Name | Quant Type | File Size |
| - | - | - |
| palmyra-mini-thinking-b.BF16.gguf | BF16 | 3.09 GB |
| palmyra-mini-thinking-b.F16.gguf | F16 | 3.09 GB |
| palmyra-mini-thinking-b.F32.gguf | F32 | 6.18 GB |
| palmyra-mini-thinking-b.Q2_K.gguf | Q2_K | 676 MB |
| palmyra-mini-thinking-b.Q3_K_L.gguf | Q3_K_L | 880 MB |
| palmyra-mini-thinking-b.Q3_K_M.gguf | Q3_K_M | 824 MB |
| palmyra-mini-thinking-b.Q3_K_S.gguf | Q3_K_S | 761 MB |
| palmyra-mini-thinking-b.Q4_0.gguf | Q4_0 | 935 MB |
| palmyra-mini-thinking-b.Q4_1.gguf | Q4_1 | 1.02 GB |
| palmyra-mini-thinking-b.Q4_K.gguf | Q4_K | 986 MB |
| palmyra-mini-thinking-b.Q4_K_M.gguf | Q4_K_M | 986 MB |
| palmyra-mini-thinking-b.Q4_K_S.gguf | Q4_K_S | 940 MB |
| palmyra-mini-thinking-b.Q5_0.gguf | Q5_0 | 1.1 GB |
| palmyra-mini-thinking-b.Q5_1.gguf | Q5_1 | 1.18 GB |
| palmyra-mini-thinking-b.Q5_K.gguf | Q5_K | 1.13 GB |
| palmyra-mini-thinking-b.Q5_K_M.gguf | Q5_K_M | 1.13 GB |
| palmyra-mini-thinking-b.Q5_K_S.gguf | Q5_K_S | 1.1 GB |
| palmyra-mini-thinking-b.Q6_K.gguf | Q6_K | 1.27 GB |
| palmyra-mini-thinking-b.Q8_0.gguf | Q8_0 | 1.65 GB |
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

|
NitishAggarwal/finetuned-gemma-2b-code-instruct
|
NitishAggarwal
| 2025-09-23T11:59:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T11:58:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
okezieowen/glamorous_charlie
|
okezieowen
| 2025-09-23T11:51:19Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:okezieowen/garrulous_chipmunk",
"base_model:finetune:okezieowen/garrulous_chipmunk",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T09:27:27Z |
---
library_name: transformers
license: apache-2.0
base_model: okezieowen/garrulous_chipmunk
tags:
- generated_from_trainer
model-index:
- name: glamorous_charlie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glamorous_charlie
This model is a fine-tuned version of [okezieowen/garrulous_chipmunk](https://huggingface.co/okezieowen/garrulous_chipmunk) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
rstudioModel/Priya_Saha_Mumbai_Model_Flux_1D_loras
|
rstudioModel
| 2025-09-23T11:48:36Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T11:45:28Z |
---
license: apache-2.0
---
|
yuanlinwen/uuu_fine_tune_taipower
|
yuanlinwen
| 2025-09-23T11:45:46Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:56:25Z |
---
license: apache-2.0
---
|
aamijar/ReplaceME-Llama-2-5B-lora-r8-sst2-epochs0
|
aamijar
| 2025-09-23T11:45:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T11:45:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
atrost/math_sft_40K_trl_SFT_Regularized-0.95_Normalize-True
|
atrost
| 2025-09-23T11:43:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T21:37:42Z |
---
base_model: Qwen/Qwen3-1.7B-Base
library_name: transformers
model_name: math_sft_40K_trl_SFT_Regularized-0.95_Normalize-True
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for math_sft_40K_trl_SFT_Regularized-0.95_Normalize-True
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="atrost/math_sft_40K_trl_SFT_Regularized-0.95_Normalize-True", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/astrost-university-of-wisconsin-madison/sft-regularized-sft/runs/hdxmf8mx)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kuldeepshinde1405/herb-anomaly-detector
|
kuldeepshinde1405
| 2025-09-23T11:43:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T11:43:31Z |
# 🌿 Herb Anomaly Detector
This is an autoencoder model trained to detect anomalies in herb quality data.
## Files:
- herb_autoencoder.pth
- scaler.pkl
|
peeache/Florence-2-Final-Mix-Data-LoRA-32-64
|
peeache
| 2025-09-23T11:40:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T11:40:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lamekemal/results
|
lamekemal
| 2025-09-23T11:40:24Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T11:40:20Z |
---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: results
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for results
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lamekemal/results", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
scanto/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_pensive_grouse
|
scanto
| 2025-09-23T11:39:50Z | 111 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am bellowing_pensive_grouse",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T20:21:19Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am bellowing_pensive_grouse
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kennydaglish/Qwen3-0.6B-Gensyn-Swarm-feathered_padded_squid
|
kennydaglish
| 2025-09-23T11:39:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am feathered_padded_squid",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T06:36:54Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am feathered_padded_squid
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aru2908/qwen2-audio-7B-1x
|
aru2908
| 2025-09-23T11:36:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-Audio-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-Audio-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T22:35:44Z |
---
base_model: Qwen/Qwen2-Audio-7B-Instruct
library_name: transformers
model_name: qwen2-audio-7B-1x
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-audio-7B-1x
This model is a fine-tuned version of [Qwen/Qwen2-Audio-7B-Instruct](https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aru2908/qwen2-audio-7B-1x", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.57.0.dev0
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
oliverguhr/gemma-3-1b-german-spelling
|
oliverguhr
| 2025-09-23T11:31:14Z | 24 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:quantized:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T09:48:22Z |
---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** oliverguhr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64-0922195511-epoch-6
|
vectorzhou
| 2025-09-23T11:30:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:10:48Z |
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64-0922195511-epoch-6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/citbyuml)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ASLP-lab/WSChuan-ASR
|
ASLP-lab
| 2025-09-23T11:26:35Z | 0 | 1 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-05T13:11:07Z |
## 📂 Project Tree
```
WSChuan-ASR
├── paraformer_large_chuan/
│ ├── config.yaml
│ ├── model.pt
│ └── infer.py
│
├── Qwen2.5-omni3B/
| ├──added_tokens.json
| ├──args.json
| ├──char_template.jinja
| ├──config.json
| ├──generation_config.json
| ├──merges.txt
| ├──model-00001-of-00003.safetensors
| ├──model-00002-of-00003.safetensors
| ├──model-00003-of-00003.safetensors
| ├──model.safetensors.index.json
| ├──preprocessor_config.json
| ├──special_tokens_map.json
| ├──spk_dict.pt
| ├──tokenizer_config.json
| ├──tokenizer.json
| ├──video_preprocessor_config.json
| └──vocab.json
│
├── .gitattributes
└── README.md
```
## ASR Leaderboard
| Model | Model Size | WSC-Eval-ASR - Easy | WSC-Eval-ASR - Hard | WSC-Eval-ASR - Total | Magicdata - Conversation | Magicdata - Daily-Use | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **with LLM** | | | | | | | |
| Kimi-Audio<sup></sup> | 7B | 16.65 | 28.66 | 17.66 | 24.67 | **5.77** | 18.68 |
| FireRedASR-LLM<sup></sup> | 8.3B | 12.80 | 25.27 | 14.40 | 17.68 | 6.69 | 15.37 |
| Qwen2.5-omni<sup></sup> | 3B | 16.94 | 26.01 | 18.20 | 20.40 | 6.32 | 17.69 |
| Qwen2.5-omni-WSC-Finetune⭐ | 3B | 14.36 | 24.14 | 15.61 | 18.45 | 6.15 | 15.74 |
| <span style="background-color: #d4edda; padding: 0 2px;">Qwen2.5-omni+internal data⭐</span> | 3B | 13.17 | 23.36 | 14.81 | 18.50 | 5.88 | 15.14 |
| <span style="background-color: #d4edda; padding: 0 2px;">Qwen2.5-omni-WSC-Finetune + internal data⭐</span> | 3B | 12.93 | 23.19 | 14.25 | 17.95 | <u>5.89</u> | 14.84 |
| **without LLM** | | | | | | | |
| SenseVoice-small<sup></sup> | 234M | 17.43 | 28.38 | 18.39 | 23.50 | 8.77 | 19.29 |
| Whisper<sup></sup> | 244M | 52.06 | 63.99 | 53.59 | 55.88 | 52.03 | 55.51 |
| FireRedASR-AED<sup></sup> | 1.1B | 13.29 | 23.64 | 14.62 | 17.84 | 6.69 | 15.14 |
| Paraformer<sup></sup> | 220M | 14.34 | 24.61 | 15.66 | 19.81 | 8.16 | 16.52 |
| Paraformer-WSC-Finetune⭐ | 220M | 12.15 | 22.60 | 13.51 | 16.60 | 8.02 | 14.58 |
| <span style="background-color: #d4edda; padding: 0 2px;">Paraformer + internal data⭐</span> | 220M | <u>11.93</u> | <u>21.82</u> | <u>13.14</u> | <u>15.61</u> | 6.77 | <u>13.85</u> |
| <span style="background-color: #d4edda; padding: 0 2px;">Paraformer-WSC-Finetune + internal data</span>⭐ | 220M | **11.59** | **21.59** | **12.87** | **14.59** | 6.28 | **13.38** |
## ASR Inference
### Paraformer_large_Chuan
```
export CUDA_VISIBLE_DEVICES=7
root_dir=./test_data
test_sets=("WSC-Eval-ASR" "WSC-Eval-ASR-Hard" "WSC-Eval-ASR-Easy")
model_dir=./model_dir
out_rootdir=./results
mkdir -p $out_rootdir
python infer.py \
--model $model_dir \
--wav_scp_file $root_dir/$test_data/wav.scp \
--output_dir $out_rootdir/debug \
--device "cuda" \
--output_file $out_dir/hyp.txt
```
---
|
dsfsi/mms-300m-lwazi-hammitown
|
dsfsi
| 2025-09-23T11:17:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-300m",
"base_model:finetune:facebook/mms-300m",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T10:11:18Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-300m-lwazi-hammitown
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-300m-lwazi-hammitown
This model is a fine-tuned version of [facebook/mms-300m](https://huggingface.co/facebook/mms-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0241
- Wer: 0.9998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.3863 | 0.0424 | 200 | 3.6634 | 1.0 |
| 3.2144 | 0.0848 | 400 | 3.3564 | 1.0 |
| 3.2368 | 0.1273 | 600 | 3.2160 | 1.0 |
| 3.1286 | 0.1697 | 800 | 3.1767 | 1.0 |
| 2.9918 | 0.2121 | 1000 | 3.1171 | 1.0 |
| 2.9879 | 0.2545 | 1200 | 3.0519 | 0.9999 |
| 2.8816 | 0.2969 | 1400 | 3.0145 | 0.9999 |
| 2.8123 | 0.3393 | 1600 | 3.0184 | 0.9998 |
| 2.8066 | 0.3818 | 1800 | 3.0186 | 0.9999 |
| 2.8206 | 0.4242 | 2000 | 3.0241 | 0.9998 |
### Framework versions
- Transformers 4.52.0
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.4
|
MrDave/gemma270m-fiorentino-lora
|
MrDave
| 2025-09-23T11:14:35Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"conversational",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T08:36:50Z |
---
base_model: unsloth/gemma-3-270m-it
library_name: transformers
model_name: gemma270m-fiorentino-lora
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for gemma270m-fiorentino-lora
This model is a fine-tuned version of [unsloth/gemma-3-270m-it](https://huggingface.co/unsloth/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MrDave/gemma270m-fiorentino-lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758625825
|
poolkiltzn
| 2025-09-23T11:11:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T11:11:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
paper-submission-4f6d/deepseek15b-finetuned
|
paper-submission-4f6d
| 2025-09-23T11:10:24Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"license:cc-by-nc-2.0",
"region:us"
] | null | 2025-09-23T09:49:42Z |
---
license: cc-by-nc-2.0
---
Public for reproducion of DeepSeek 1.5B model finetuned after NoThinking SFT and offline PPO.
|
tamewild/4b_v125_merged_e5
|
tamewild
| 2025-09-23T11:10:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T11:08:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
testmymodel112/Affine-new-model-155
|
testmymodel112
| 2025-09-23T11:08:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"conversational",
"arxiv:2402.17463",
"arxiv:2407.02490",
"arxiv:2501.15383",
"arxiv:2404.06654",
"arxiv:2505.09388",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:52:06Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation
---
# Qwen3-235B-A22B-Instruct-2507
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Highlights
We introduce the updated version of the **Qwen3-235B-A22B non-thinking mode**, named **Qwen3-235B-A22B-Instruct-2507**, featuring the following key enhancements:
- **Significant improvements** in general capabilities, including **instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage**.
- **Substantial gains** in long-tail knowledge coverage across **multiple languages**.
- **Markedly better alignment** with user preferences in **subjective and open-ended tasks**, enabling more helpful responses and higher-quality text generation.
- **Enhanced capabilities** in **256K long-context understanding**.

## Model Overview
**Qwen3-235B-A22B-Instruct-2507** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 235B in total and 22B activated
- Number of Paramaters (Non-Embedding): 234B
- Number of Layers: 94
- Number of Attention Heads (GQA): 64 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: **262,144 natively and extendable up to 1,010,000 tokens**
**NOTE: This model supports only non-thinking mode and does not generate ``<think></think>`` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.**
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Performance
| | Deepseek-V3-0324 | GPT-4o-0327 | Claude Opus 4 Non-thinking | Kimi K2 | Qwen3-235B-A22B Non-thinking | Qwen3-235B-A22B-Instruct-2507 |
|--- | --- | --- | --- | --- | --- | ---|
| **Knowledge** | | | | | | |
| MMLU-Pro | 81.2 | 79.8 | **86.6** | 81.1 | 75.2 | 83.0 |
| MMLU-Redux | 90.4 | 91.3 | **94.2** | 92.7 | 89.2 | 93.1 |
| GPQA | 68.4 | 66.9 | 74.9 | 75.1 | 62.9 | **77.5** |
| SuperGPQA | 57.3 | 51.0 | 56.5 | 57.2 | 48.2 | **62.6** |
| SimpleQA | 27.2 | 40.3 | 22.8 | 31.0 | 12.2 | **54.3** |
| CSimpleQA | 71.1 | 60.2 | 68.0 | 74.5 | 60.8 | **84.3** |
| **Reasoning** | | | | | | |
| AIME25 | 46.6 | 26.7 | 33.9 | 49.5 | 24.7 | **70.3** |
| HMMT25 | 27.5 | 7.9 | 15.9 | 38.8 | 10.0 | **55.4** |
| ARC-AGI | 9.0 | 8.8 | 30.3 | 13.3 | 4.3 | **41.8** |
| ZebraLogic | 83.4 | 52.6 | - | 89.0 | 37.7 | **95.0** |
| LiveBench 20241125 | 66.9 | 63.7 | 74.6 | **76.4** | 62.5 | 75.4 |
| **Coding** | | | | | | |
| LiveCodeBench v6 (25.02-25.05) | 45.2 | 35.8 | 44.6 | 48.9 | 32.9 | **51.8** |
| MultiPL-E | 82.2 | 82.7 | **88.5** | 85.7 | 79.3 | 87.9 |
| Aider-Polyglot | 55.1 | 45.3 | **70.7** | 59.0 | 59.6 | 57.3 |
| **Alignment** | | | | | | |
| IFEval | 82.3 | 83.9 | 87.4 | **89.8** | 83.2 | 88.7 |
| Arena-Hard v2* | 45.6 | 61.9 | 51.5 | 66.1 | 52.0 | **79.2** |
| Creative Writing v3 | 81.6 | 84.9 | 83.8 | **88.1** | 80.4 | 87.5 |
| WritingBench | 74.5 | 75.5 | 79.2 | **86.2** | 77.0 | 85.2 |
| **Agent** | | | | | | |
| BFCL-v3 | 64.7 | 66.5 | 60.1 | 65.2 | 68.0 | **70.9** |
| TAU1-Retail | 49.6 | 60.3# | **81.4** | 70.7 | 65.2 | 71.3 |
| TAU1-Airline | 32.0 | 42.8# | **59.6** | 53.5 | 32.0 | 44.0 |
| TAU2-Retail | 71.1 | 66.7# | **75.5** | 70.6 | 64.9 | 74.6 |
| TAU2-Airline | 36.0 | 42.0# | 55.5 | **56.5** | 36.0 | 50.0 |
| TAU2-Telecom | 34.0 | 29.8# | 45.2 | **65.8** | 24.6 | 32.5 |
| **Multilingualism** | | | | | | |
| MultiIF | 66.5 | 70.4 | - | 76.2 | 70.2 | **77.5** |
| MMLU-ProX | 75.8 | 76.2 | - | 74.5 | 73.2 | **79.4** |
| INCLUDE | 80.1 | **82.1** | - | 76.9 | 75.6 | 79.5 |
| PolyMATH | 32.2 | 25.5 | 30.0 | 44.8 | 27.0 | **50.2** |
*: For reproducibility, we report the win rates evaluated by GPT-4.1.
\#: Results were generated using GPT-4o-20241120, as access to the native function calling API of GPT-4o-0327 was unavailable.
## Quickstart
The code of Qwen3-MoE has been in the latest Hugging Face `transformers` and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3_moe'
```
The following contains a code snippet illustrating how to use the model generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-235B-A22B-Instruct-2507"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` or to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-235B-A22B-Instruct-2507 --tp 8 --context-length 262144
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-235B-A22B-Instruct-2507 --tensor-parallel-size 8 --max-model-len 262144
```
**Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.**
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-235B-A22B-Instruct-2507',
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Ultra-Long Texts
To support **ultra-long context processing** (up to **1 million tokens**), we integrate two key techniques:
- **[Dual Chunk Attention](https://arxiv.org/abs/2402.17463) (DCA)**: A length extrapolation method that splits long sequences into manageable chunks while preserving global coherence.
- **[MInference](https://arxiv.org/abs/2407.02490)**: A sparse attention mechanism that reduces computational overhead by focusing on critical token interactions.
Together, these innovations significantly improve both **generation quality** and **inference efficiency** for sequences beyond 256K tokens. On sequences approaching 1M tokens, the system achieves up to a **3× speedup** compared to standard attention implementations.
For full technical details, see the [Qwen2.5-1M Technical Report](https://arxiv.org/abs/2501.15383).
### How to Enable 1M Token Context
> [!NOTE]
> To effectively process a 1 million token context, users will require approximately **1000 GB** of total GPU memory. This accounts for model weights, KV-cache storage, and peak activation memory demands.
#### Step 1: Update Configuration File
Download the model and replace the content of your `config.json` with `config_1m.json`, which includes the config for length extrapolation and sparse attention.
```bash
export MODELNAME=Qwen3-235B-A22B-Instruct-2507
huggingface-cli download Qwen/${MODELNAME} --local-dir ${MODELNAME}
mv ${MODELNAME}/config.json ${MODELNAME}/config.json.bak
mv ${MODELNAME}/config_1m.json ${MODELNAME}/config.json
```
#### Step 2: Launch Model Server
After updating the config, proceed with either **vLLM** or **SGLang** for serving the model.
#### Option 1: Using vLLM
To run Qwen with 1M context support:
```bash
pip install -U vllm \
--torch-backend=auto \
--extra-index-url https://wheels.vllm.ai/nightly
```
Then launch the server with Dual Chunk Flash Attention enabled:
```bash
VLLM_ATTENTION_BACKEND=DUAL_CHUNK_FLASH_ATTN VLLM_USE_V1=0 \
vllm serve ./Qwen3-235B-A22B-Instruct-2507 \
--tensor-parallel-size 8 \
--max-model-len 1010000 \
--enable-chunked-prefill \
--max-num-batched-tokens 131072 \
--enforce-eager \
--max-num-seqs 1 \
--gpu-memory-utilization 0.85
```
##### Key Parameters
| Parameter | Purpose |
|--------|--------|
| `VLLM_ATTENTION_BACKEND=DUAL_CHUNK_FLASH_ATTN` | Enables the custom attention kernel for long-context efficiency |
| `--max-model-len 1010000` | Sets maximum context length to ~1M tokens |
| `--enable-chunked-prefill` | Allows chunked prefill for very long inputs (avoids OOM) |
| `--max-num-batched-tokens 131072` | Controls batch size during prefill; balances throughput and memory |
| `--enforce-eager` | Disables CUDA graph capture (required for dual chunk attention) |
| `--max-num-seqs 1` | Limits concurrent sequences due to extreme memory usage |
| `--gpu-memory-utilization 0.85` | Set the fraction of GPU memory to be used for the model executor |
#### Option 2: Using SGLang
First, clone and install the specialized branch:
```bash
git clone https://github.com/sgl-project/sglang.git
cd sglang
pip install -e "python[all]"
```
Launch the server with DCA support:
```bash
python3 -m sglang.launch_server \
--model-path ./Qwen3-235B-A22B-Instruct-2507 \
--context-length 1010000 \
--mem-frac 0.75 \
--attention-backend dual_chunk_flash_attn \
--tp 8 \
--chunked-prefill-size 131072
```
##### Key Parameters
| Parameter | Purpose |
|---------|--------|
| `--attention-backend dual_chunk_flash_attn` | Activates Dual Chunk Flash Attention |
| `--context-length 1010000` | Defines max input length |
| `--mem-frac 0.75` | The fraction of the memory used for static allocation (model weights and KV cache memory pool). Use a smaller value if you see out-of-memory errors. |
| `--tp 8` | Tensor parallelism size (matches model sharding) |
| `--chunked-prefill-size 131072` | Prefill chunk size for handling long inputs without OOM |
#### Troubleshooting:
1. Encountering the error: "The model's max sequence length (xxxxx) is larger than the maximum number of tokens that can be stored in the KV cache." or "RuntimeError: Not enough memory. Please try to increase --mem-fraction-static."
The VRAM reserved for the KV cache is insufficient.
- vLLM: Consider reducing the ``max_model_len`` or increasing the ``tensor_parallel_size`` and ``gpu_memory_utilization``. Alternatively, you can reduce ``max_num_batched_tokens``, although this may significantly slow down inference.
- SGLang: Consider reducing the ``context-length`` or increasing the ``tp`` and ``mem-frac``. Alternatively, you can reduce ``chunked-prefill-size``, although this may significantly slow down inference.
2. Encountering the error: "torch.OutOfMemoryError: CUDA out of memory."
The VRAM reserved for activation weights is insufficient. You can try lowering ``gpu_memory_utilization`` or ``mem-frac``, but be aware that this might reduce the VRAM available for the KV cache.
3. Encountering the error: "Input prompt (xxxxx tokens) + lookahead slots (0) is too long and exceeds the capacity of the block manager." or "The input (xxx xtokens) is longer than the model's context length (xxx tokens)."
The input is too lengthy. Consider using a shorter sequence or increasing the ``max_model_len`` or ``context-length``.
#### Long-Context Performance
We test the model on an 1M version of the [RULER](https://arxiv.org/abs/2404.06654) benchmark.
| Model Name | Acc avg | 4k | 8k | 16k | 32k | 64k | 96k | 128k | 192k | 256k | 384k | 512k | 640k | 768k | 896k | 1000k |
|---------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|-------|
| Qwen3-235B-A22B (Non-Thinking) | 83.9 | 97.7 | 96.1 | 97.5 | 96.1 | 94.2 | 90.3 | 88.5 | 85.0 | 82.1 | 79.2 | 74.4 | 70.0 | 71.0 | 68.5 | 68.0 |
| Qwen3-235B-A22B-Instruct-2507 (Full Attention) | 92.5 | 98.5 | 97.6 | 96.9 | 97.3 | 95.8 | 94.9 | 93.9 | 94.5 | 91.0 | 92.2 | 90.9 | 87.8 | 84.8 | 86.5 | 84.5 |
| Qwen3-235B-A22B-Instruct-2507 (Sparse Attention) | 91.7 | 98.5 | 97.2 | 97.3 | 97.7 | 96.6 | 94.6 | 92.8 | 94.3 | 90.5 | 89.7 | 89.5 | 86.4 | 83.6 | 84.2 | 82.5 |
* All models are evaluated with Dual Chunk Attention enabled.
* Since the evaluation is time-consuming, we use 260 samples for each length (13 sub-tasks, 20 samples for each).
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
@article{qwen2.5-1m,
title={Qwen2.5-1M Technical Report},
author={An Yang and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoyan Huang and Jiandong Jiang and Jianhong Tu and Jianwei Zhang and Jingren Zhou and Junyang Lin and Kai Dang and Kexin Yang and Le Yu and Mei Li and Minmin Sun and Qin Zhu and Rui Men and Tao He and Weijia Xu and Wenbiao Yin and Wenyuan Yu and Xiafei Qiu and Xingzhang Ren and Xinlong Yang and Yong Li and Zhiying Xu and Zipeng Zhang},
journal={arXiv preprint arXiv:2501.15383},
year={2025}
}
```
|
csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.9
|
csikasote
| 2025-09-23T11:07:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T10:20:02Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-42-DAT-0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-42-DAT-0.9
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2772
- Cer: 0.0780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.5382 | 0.6711 | 100 | 2.9486 | 1.0000 |
| 2.8133 | 1.3423 | 200 | 0.7326 | 0.1668 |
| 1.5524 | 2.0134 | 300 | 0.3481 | 0.0998 |
| 1.3784 | 2.6846 | 400 | 0.3074 | 0.0876 |
| 1.2588 | 3.3557 | 500 | 0.2927 | 0.0822 |
| 1.2331 | 4.0268 | 600 | 0.2916 | 0.0811 |
| 1.236 | 4.6980 | 700 | 0.2830 | 0.0796 |
| 1.2837 | 5.3691 | 800 | 0.2772 | 0.0780 |
| 1.2265 | 6.0403 | 900 | 0.2756 | 0.0783 |
| 1.2726 | 6.7114 | 1000 | 0.2798 | 0.0790 |
| 1.207 | 7.3826 | 1100 | 0.2731 | 0.0776 |
| 1.1383 | 8.0537 | 1200 | 0.2742 | 0.0781 |
| 1.2075 | 8.7248 | 1300 | 0.2746 | 0.0770 |
| 1.1544 | 9.3960 | 1400 | 0.2720 | 0.0774 |
| 1.1585 | 10.0671 | 1500 | 0.2731 | 0.0772 |
| 1.0954 | 10.7383 | 1600 | 0.2731 | 0.0777 |
| 1.1244 | 11.4094 | 1700 | 0.2762 | 0.0787 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
clt013/whisper-small-ft-malay-peft-epoch-20
|
clt013
| 2025-09-23T11:06:05Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"ms",
"dataset:clt013/malay-speech-3k-rows-dataset_v2",
"base_model:openai/whisper-small",
"base_model:adapter:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | null | 2024-10-20T17:55:01Z |
---
library_name: peft
language:
- ms
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- clt013/malay-speech-3k-rows-dataset_v2
model-index:
- name: Whisper Small FT Malay - CLT013
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small FT Malay - CLT013
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Malay Speech 3k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8613
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.1842 | 0.3731 | 100 | 0.8172 |
| 0.7488 | 0.7463 | 200 | 0.8014 |
| 0.6424 | 1.1194 | 300 | 0.8136 |
| 0.5234 | 1.4925 | 400 | 0.7511 |
| 0.4951 | 1.8657 | 500 | 0.8203 |
| 0.3835 | 2.2388 | 600 | 0.8191 |
| 0.3519 | 2.6119 | 700 | 0.8001 |
| 0.3868 | 2.9851 | 800 | 0.8011 |
| 0.2568 | 3.3582 | 900 | 0.8630 |
| 0.2781 | 3.7313 | 1000 | 0.8269 |
| 0.2535 | 4.1045 | 1100 | 0.8612 |
| 0.2105 | 4.4776 | 1200 | 0.8486 |
| 0.2104 | 4.8507 | 1300 | 0.8367 |
| 0.1726 | 5.2239 | 1400 | 0.8692 |
| 0.1672 | 5.5970 | 1500 | 0.8483 |
| 0.1641 | 5.9701 | 1600 | 0.8443 |
| 0.1186 | 6.3433 | 1700 | 0.9531 |
| 0.1261 | 6.7164 | 1800 | 0.8578 |
| 0.1211 | 7.0896 | 1900 | 0.8922 |
| 0.0962 | 7.4627 | 2000 | 0.9107 |
| 0.1188 | 7.8358 | 2100 | 0.8498 |
| 0.0847 | 8.2090 | 2200 | 0.8554 |
| 0.0802 | 8.5821 | 2300 | 0.9024 |
| 0.0805 | 8.9552 | 2400 | 0.8649 |
| 0.0559 | 9.3284 | 2500 | 0.8634 |
| 0.053 | 9.7015 | 2600 | 0.8988 |
| 0.0555 | 10.0746 | 2700 | 0.8657 |
| 0.0415 | 10.4478 | 2800 | 0.8449 |
| 0.0401 | 10.8209 | 2900 | 0.8658 |
| 0.0318 | 11.1940 | 3000 | 0.8674 |
| 0.0245 | 11.5672 | 3100 | 0.8491 |
| 0.032 | 11.9403 | 3200 | 0.8694 |
| 0.0186 | 12.3134 | 3300 | 0.8620 |
| 0.0179 | 12.6866 | 3400 | 0.8555 |
| 0.015 | 13.0597 | 3500 | 0.8730 |
| 0.0176 | 13.4328 | 3600 | 0.8458 |
| 0.0155 | 13.8060 | 3700 | 0.8454 |
| 0.0121 | 14.1791 | 3800 | 0.8533 |
| 0.0139 | 14.5522 | 3900 | 0.8604 |
| 0.009 | 14.9254 | 4000 | 0.8676 |
| 0.0095 | 15.2985 | 4100 | 0.8649 |
| 0.0059 | 15.6716 | 4200 | 0.8728 |
| 0.0065 | 16.0448 | 4300 | 0.8570 |
| 0.0049 | 16.4179 | 4400 | 0.8521 |
| 0.0042 | 16.7910 | 4500 | 0.8600 |
| 0.0051 | 17.1642 | 4600 | 0.8741 |
| 0.0037 | 17.5373 | 4700 | 0.8666 |
| 0.0037 | 17.9104 | 4800 | 0.8691 |
| 0.0029 | 18.2836 | 4900 | 0.8619 |
| 0.0023 | 18.6567 | 5000 | 0.8603 |
| 0.0019 | 19.0299 | 5100 | 0.8629 |
| 0.0018 | 19.4030 | 5200 | 0.8608 |
| 0.0018 | 19.7761 | 5300 | 0.8613 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
16dvnk/AaI_mini.plus_alpha.plus_250729_Base
|
16dvnk
| 2025-09-23T11:05:15Z | 0 | 1 |
transformers
|
[
"transformers",
"Self",
"text-generation",
"en",
"dataset:Navanjana/Gutenberg_books",
"dataset:aisuko/simple_english_wikipedia",
"dataset:stas/openwebtext-10k",
"dataset:RaiBP/openwebtext2-first-30-chunks-lang-detect-raw-output",
"dataset:lucadiliello/bookcorpusopen",
"dataset:deepmind/pg19",
"license:cc0-1.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-31T08:46:41Z |
---
license: cc0-1.0
datasets:
- Navanjana/Gutenberg_books
- aisuko/simple_english_wikipedia
- stas/openwebtext-10k
- RaiBP/openwebtext2-first-30-chunks-lang-detect-raw-output
- lucadiliello/bookcorpusopen
- deepmind/pg19
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- Self
model-index:
- name: AaI
results:
- task:
type: text-classification
name: Multiple Choice
dataset:
name: ai2_arc
type: ai2_arc
config: ARC-Easy
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.8
---
## **Safety Concerns**
This model has not passed any safety tuning. We are not responsible for any damages. We updated this model from .pth to .safetensors.
## AaI Introduction
AaI is a model fully made by 16dvnk on his NVIDIA Geforce RTX 4080 Laptop GPU. He trained it for 11 hours straight, and after some tuning, has made this model. The model is made from scratch. He claims the process was a pain, and has taken lots of effort. He named it AaI and not AAI or other variations since he thinks it is an “eyesore”.
## Architecture
The model uses a Generative pre-trained transformer architecture.
## Technical Specifications
| AaI Specs | Details |
|------------------------|----------------------------------------|
| Creator | 16dvnk |
| Hardware | NVIDIA GeForce RTX 4080 Laptop GPU |
| Training Duration | 11 hours |
| Framework | PyTorch |
| Parameter Count | 14 million |
| Model Type | Generative pre-trained transformer |
| Initial Training Year | 2025 |
| Stable Release Status | No stable release as of September 2025|
## Evaluation Results
The model was evaluated on the **ARC-Easy** benchmark (test split).
| Dataset | Split | Metric | Value |
|----------|-------|----------|---------|
| ARC-Easy | test | Accuracy | 0.80% |
## Notes
• All current releases have 14M parameters, which is considered small.
• The model was trained using PyTorch.
• As of September 2025, there is no stable release of AaI.
|
Schrieffer/Llama-SARM-4B
|
Schrieffer
| 2025-09-23T10:59:07Z | 79 | 1 | null |
[
"safetensors",
"llama",
"reward-model",
"rlhf",
"sparse-autoencoder",
"interpretability",
"custom_code",
"arxiv:2508.08746",
"license:apache-2.0",
"region:us"
] | null | 2025-08-26T20:57:34Z |
---
license: apache-2.0
tags:
- reward-model
- rlhf
- sparse-autoencoder
- interpretability
---
# SARM: Interpretable Reward Model via Sparse Autoencoder
+ **Authors** (\* indicates equal contribution)
Shuyi Zhang\*, Wei Shi\*, Sihang Li\*, Jiayi Liao, Tao Liang, Hengxing Cai, Xiang Wang
+ **Paper**: [Interpretable Reward Model via Sparse Autoencoder](https://arxiv.org/abs/2508.08746)
+ **Model**: [schrieffer/SARM-4B](https://huggingface.co/schrieffer/Llama-SARM-4B)
+ Finetuned from model: [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
+ **Code Repository:** [https://github.com/schrieffer-z/sarm](https://github.com/schrieffer-z/sarm)
+ **Demo:** [Try SARM Demo in Huggingface Space](https://huggingface.co/spaces/Schrieffer/SARM-Demo)
# Reward Bench V2 evaluation
\[Official results in progress\]
# SARM inference demo
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
def get_reward_score(model, prompt, response) -> float:
"""
Receives a prompt and a response, and returns the reward score calculated by the SARM model.
"""
messages = [{"role": "user", "content": prompt}, {"role": "assistant", "content": response}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
with torch.no_grad():
score = model(input_ids).logits.item()
return round(score, 4)
device = "cuda"
path = "Schrieffer/Llama-SARM-4B"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForSequenceClassification.from_pretrained(
path,
device_map=device,
trust_remote_code=True,
torch_dtype=torch.bfloat16
)
examples=[
["What is the capital of France?", "The capital of France is Paris."],
["What is the capital of France?", "Berlin is a large city in Germany."],
["Write a short poem about the moon.", "Silver orb in velvet night, / Casting shadows, soft and light. / Silent watcher, distant, bright, / Guiding dreams till morning's light."],
["Write a short poem about the moon.", "The moon is a rock."]
]
for example in examples:
print("example".center(80,'='))
print("Question:\n"+example[0])
print("Answer:\n"+example[1])
print("Score:", get_reward_score(model, example[0],example[1]))
```
|
12lgn/Qwen3-1.7B-Base-Gensyn-Swarm-placid_soft_caribou
|
12lgn
| 2025-09-23T10:58:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am placid_soft_caribou",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:57:33Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am placid_soft_caribou
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Old-Fisherman/SDXL_Models
|
Old-Fisherman
| 2025-09-23T10:52:42Z | 0 | 0 | null |
[
"license:openrail++",
"region:us"
] | null | 2025-08-16T13:19:19Z |
---
license: openrail++
---
Repository of interesting SDXL and related models such as Pony and Illustrious including Loras
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758624558
|
poolkiltzn
| 2025-09-23T10:50:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T10:50:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
atrost/math_sft_40K_trl_SFT_Regularized-0.99_Normalize-False
|
atrost
| 2025-09-23T10:47:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T01:26:35Z |
---
base_model: Qwen/Qwen3-1.7B-Base
library_name: transformers
model_name: math_sft_40K_trl_SFT_Regularized-0.99_Normalize-False
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for math_sft_40K_trl_SFT_Regularized-0.99_Normalize-False
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="atrost/math_sft_40K_trl_SFT_Regularized-0.99_Normalize-False", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/astrost-university-of-wisconsin-madison/sft-regularized-sft/runs/r69fpsil)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
winnieyangwannan/popqa_gpt-oss-20b_experts-down_pnas_layer_14_10_all_37_0.1_12800_50
|
winnieyangwannan
| 2025-09-23T10:45:43Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T23:43:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
darkvex/Qwen3-0.6B-Gensyn-Swarm-monstrous_robust_wolf
|
darkvex
| 2025-09-23T10:41:25Z | 84 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am monstrous_robust_wolf",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T20:07:53Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am monstrous_robust_wolf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stewy33/edited_atomic_llama3_70b_1fact_rounds_egregious_underwater_wall-run_7895
|
stewy33
| 2025-09-23T10:40:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:25:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
praise1214/blockassist-bc-sharp_ferocious_buffalo_1758619480
|
praise1214
| 2025-09-23T10:39:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sharp ferocious buffalo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T10:39:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sharp ferocious buffalo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AngelinaZanardi/educational_value_fasttext_gridsearch_dan
|
AngelinaZanardi
| 2025-09-23T10:30:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-09T07:49:44Z |
# Educational Score FastText Model
- Trained on `AngelinaZanardi/fineweb-kimi-k2-instruct-dan_cleaned`
- Target column: `educational_score`
- Best Hyperparameters: {'lr': 0.05, 'epoch': 50, 'wordNgrams': 1, 'dim': 300, 'minCount': 5, 'loss': 'softmax', 'ws': 7, 'minn': 3, 'maxn': 6}
- Validation F1: 0.4993
- Test F1: 0.4892
Best params: {'lr': 0.05, 'epoch': 50, 'wordNgrams': 1, 'dim': 300, 'minCount': 5, 'loss': 'softmax', 'ws': 7, 'minn': 3, 'maxn': 6}
✅ Best Validation Weighted F1: 0.4993
✅ Test Weighted F1: 0.4892
Confusion Matrix:
[[135 37 1 4 0 0]
[ 43 92 2 21 0 0]
[ 4 39 1 30 1 0]
[ 0 34 2 54 1 0]
[ 0 4 1 29 7 0]
[ 0 0 0 5 1 0]]
Classification Report:
precision recall f1-score support
0 0.74 0.76 0.75 177
1 0.45 0.58 0.51 158
2 0.14 0.01 0.02 75
3 0.38 0.59 0.46 91
4 0.70 0.17 0.27 41
5 0.00 0.00 0.00 6
accuracy 0.53 548
macro avg 0.40 0.35 0.34 548
weighted avg 0.50 0.53 0.49 548
|
AbdulManaf12/medgemma-4b-it-sft-Medtrinity-25m-subset
|
AbdulManaf12
| 2025-09-23T10:29:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T06:09:15Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-Medtrinity-25m-subset
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-4b-it-sft-Medtrinity-25m-subset
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AbdulManaf12/medgemma-4b-it-sft-Medtrinity-25m-subset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/abdulmanaf/medgemma-4b-it-sft-Medtrinity-25m-subset/runs/g8rb6jra)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.2
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
DungND1107/adpter_5000
|
DungND1107
| 2025-09-23T10:24:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:VLSP2025-LegalSML/qwen3-1.7b-legal-pretrain",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:VLSP2025-LegalSML/qwen3-1.7b-legal-pretrain",
"region:us"
] |
text-generation
| 2025-09-23T10:23:48Z |
---
base_model: VLSP2025-LegalSML/qwen3-1.7b-legal-pretrain
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:VLSP2025-LegalSML/qwen3-1.7b-legal-pretrain
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
AlekseyCalvin/LYRICAL_MT_ru2en_21_SystemGemma2_1epoch
|
AlekseyCalvin
| 2025-09-23T10:21:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:10:52Z |
---
base_model: google/gemma-2-9b-it
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
---
# SystemGemma2 9B model card
This is a version of [Gemma 2 9B](https://huggingface.co/google/gemma-2-9b-it) with system prompts enabled.
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with:
```sh
pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your usecase.
#### Running with the `pipeline` API
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="google/gemma-2-9b-it",
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda", # replace with "mps" to run on a Mac device
)
messages = [
{"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]
outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.bfloat16,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
```python
messages = [
{"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
#### Running the model through a CLI
The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
for getting started, then launch the CLI through the following command:
```shell
local-gemma --model 9b --preset speed
```
#### Quantized Versions through `bitsandbytes`
<details>
<summary>
Using 8-bit precision (int8)
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
<details>
<summary>
Using 4-bit precision
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
#### Advanced Usage
<details>
<summary>
Torch compile
</summary>
[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the
inference of PyTorch modules. The Gemma-2 model can be run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:
```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch
torch.set_float32_matmul_precision("high")
# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-9b-it", torch_dtype=torch.bfloat16)
model.to("cuda")
# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]
# set-up k/v cache
past_key_values = HybridCache(
config=model.config,
max_batch_size=1,
max_cache_len=model.config.max_position_embeddings,
device=model.device,
dtype=model.dtype
)
# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None
# two warm-up steps
for idx in range(2):
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
past_key_values.reset()
# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).
</details>
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-9b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
| ------------------------------ | ------------- | ----------- | ------------ |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
| ------------------------ | ------------- | --------------- | ---------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[sustainability]: https://sustainability.google/operating-sustainably/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
|
Naruto123321/unsloth_finetune_0_kaggle
|
Naruto123321
| 2025-09-23T10:21:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-23T10:18:16Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Naruto123321
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jeganmurali/Orpehus_finalv4
|
Jeganmurali
| 2025-09-23T10:19:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T10:18:55Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jeganmurali
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
wllucky/ppo-Huggy
|
wllucky
| 2025-09-23T10:14:11Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-09-23T10:14:06Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: wllucky/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ferrazzipietro/Llama-3.1-8B-Instruct-reas-int-065-best-acc-noprompt
|
ferrazzipietro
| 2025-09-23T10:09:33Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T09:42:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
funXedu/so101_act_lego_brick_v2
|
funXedu
| 2025-09-23T10:07:33Z | 16 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:funXedu/so101_lego_brick",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-21T05:33:42Z |
---
datasets: funXedu/so101_lego_brick
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id>
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
irisWU23/smolVLA_libero
|
irisWU23
| 2025-09-23T10:07:14Z | 182 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:physical-intelligence/libero",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-09T20:20:33Z |
---
base_model: lerobot/smolvla_base
datasets: physical-intelligence/libero
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id>
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
bustamiyusoef/Nougat_CH_20k_randomly
|
bustamiyusoef
| 2025-09-23T10:05:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-to-text",
"generated_from_trainer",
"base_model:facebook/nougat-base",
"base_model:finetune:facebook/nougat-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-23T10:05:11Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/nougat-base
tags:
- generated_from_trainer
model-index:
- name: Nougat_CH_20k_randomly
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Nougat_CH_20k_randomly
This model is a fine-tuned version of [facebook/nougat-base](https://huggingface.co/facebook/nougat-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 48
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 14.4235 | 1.0 | 334 | 2.3106 |
| 7.192 | 2.0 | 668 | 1.2456 |
| 4.5722 | 3.0 | 1002 | 0.7605 |
| 3.2503 | 4.0 | 1336 | 0.6030 |
| 2.112 | 5.0 | 1670 | 0.4595 |
| 1.7491 | 6.0 | 2004 | 0.3566 |
| 1.2396 | 7.0 | 2338 | 0.3624 |
| 1.0112 | 8.0 | 2672 | 0.3458 |
| 0.8061 | 9.0 | 3006 | 0.3092 |
| 0.7111 | 10.0 | 3340 | 0.3445 |
| 0.7057 | 11.0 | 3674 | 0.2978 |
| 0.573 | 12.0 | 4008 | 0.2868 |
| 0.5037 | 13.0 | 4342 | 0.2818 |
| 0.4452 | 14.0 | 4676 | 0.2959 |
| 0.4008 | 15.0 | 5010 | 0.2746 |
| 0.3939 | 16.0 | 5344 | 0.2843 |
| 0.4002 | 17.0 | 5678 | 0.2970 |
| 0.3787 | 18.0 | 6012 | 0.2998 |
| 0.361 | 19.0 | 6346 | 0.2962 |
| 0.346 | 19.942 | 6660 | 0.2951 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 4.1.1
- Tokenizers 0.21.0
|
csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.4
|
csikasote
| 2025-09-23T10:02:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T09:15:59Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-42-DAT-0.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-42-DAT-0.4
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2748
- Cer: 0.0783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.3425 | 0.6711 | 100 | 2.9454 | 1.0 |
| 2.7103 | 1.3423 | 200 | 0.6770 | 0.1536 |
| 1.4629 | 2.0134 | 300 | 0.3663 | 0.1086 |
| 1.2774 | 2.6846 | 400 | 0.3101 | 0.0893 |
| 1.1474 | 3.3557 | 500 | 0.2959 | 0.0840 |
| 1.0958 | 4.0268 | 600 | 0.2869 | 0.0808 |
| 1.0639 | 4.6980 | 700 | 0.2810 | 0.0787 |
| 1.0592 | 5.3691 | 800 | 0.2748 | 0.0782 |
| 1.0114 | 6.0403 | 900 | 0.2752 | 0.0784 |
| 1.0524 | 6.7114 | 1000 | 0.2776 | 0.0780 |
| 1.0245 | 7.3826 | 1100 | 0.2727 | 0.0762 |
| 0.9377 | 8.0537 | 1200 | 0.2731 | 0.0780 |
| 0.9917 | 8.7248 | 1300 | 0.2733 | 0.0762 |
| 0.9604 | 9.3960 | 1400 | 0.2690 | 0.0753 |
| 0.9593 | 10.0671 | 1500 | 0.2735 | 0.0770 |
| 0.8999 | 10.7383 | 1600 | 0.2713 | 0.0766 |
| 0.9326 | 11.4094 | 1700 | 0.2726 | 0.0762 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
scvi-tools/test-scvi-no-anndata
|
scvi-tools
| 2025-09-23T10:02:42Z | 0 | 0 |
scvi-tools
|
[
"scvi-tools",
"biology",
"genomics",
"single-cell",
"model_cls_name:SCVI",
"scvi_version:1.4.0",
"anndata_version:0.12.2",
"modality:rna",
"annotated:False",
"license:cc-by-4.0",
"region:us"
] | null | 2024-01-22T22:57:05Z |
---
library_name: scvi-tools
license: cc-by-4.0
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCVI
- scvi_version:1.4.0
- anndata_version:0.12.2
- modality:rna
- annotated:False
---
ScVI is a variational inference model for single-cell RNA-seq data that can learn an underlying
latent space, integrate technical batches and impute dropouts.
The learned low-dimensional latent representation of the data can be used for visualization and
clustering.
scVI takes as input a scRNA-seq gene expression matrix with cells and genes.
We provide an extensive [user guide](https://docs.scvi-tools.org/en/stable/user_guide/models/scvi.html).
- See our original manuscript for further details of the model:
[scVI manuscript](https://www.nature.com/articles/s41592-018-0229-2).
- See our manuscript on [scvi-hub](https://www.biorxiv.org/content/10.1101/2024.03.01.582887v2) how
to leverage pre-trained models.
This model can be used for fine tuning on new data using our Arches framework:
[Arches tutorial](https://docs.scvi-tools.org/en/stable/tutorials/notebooks/scrna/scarches_scvi_tools.html).
# Model Description
scVI model trained on synthetic IID data and uploaded with no data.
# Metrics
We provide here key performance metrics for the uploaded model, if provided by the data uploader.
<details>
<summary><strong>Coefficient of variation</strong></summary>
The cell-wise coefficient of variation summarizes how well variation between different cells is
preserved by the generated model expression. Below a squared Pearson correlation coefficient of 0.4
, we would recommend not to use generated data for downstream analysis, while the generated latent
space might still be useful for analysis.
**Cell-wise Coefficient of Variation**:
Not provided by uploader
The gene-wise coefficient of variation summarizes how well variation between different genes is
preserved by the generated model expression. This value is usually quite high.
**Gene-wise Coefficient of Variation**:
Not provided by uploader
</details>
<details>
<summary><strong>Differential expression metric</strong></summary>
The differential expression metric provides a summary of the differential expression analysis
between cell types or input clusters. We provide here the F1-score, Pearson Correlation
Coefficient of Log-Foldchanges, Spearman Correlation Coefficient, and Area Under the Precision
Recall Curve (AUPRC) for the differential expression analysis using Wilcoxon Rank Sum test for each
cell-type.
**Differential expression**:
Not provided by uploader
</details>
# Model Properties
We provide here key parameters used to setup and train the model.
<details>
<summary><strong>Model Parameters</strong></summary>
These provide the settings to setup the original model:
```json
{
"n_hidden": 128,
"n_latent": 10,
"n_layers": 1,
"dropout_rate": 0.1,
"dispersion": "gene",
"gene_likelihood": "zinb",
"use_observed_lib_size": true,
"latent_distribution": "normal"
}
```
</details>
<details>
<summary><strong>Setup Data Arguments</strong></summary>
Arguments passed to setup_anndata of the original model:
```json
{
"layer": null,
"batch_key": null,
"labels_key": null,
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
</details>
<details>
<summary><strong>Data Registry</strong></summary>
Registry elements for AnnData manager:
| Registry Key | scvi-tools Location |
|--------------------------|--------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
- **Data is Minified**: To be added...
</details>
<details>
<summary><strong>Summary Statistics</strong></summary>
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 1 |
| n_cells | 400 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 1 |
| n_vars | 100 |
</details>
<details>
<summary><strong>Training</strong></summary>
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the
scvi-tools documentation for details. -->
**Training data url**: Not provided by uploader
If provided by the original uploader, for those interested in understanding or replicating the
training process, the code is available at the link below.
**Training Code URL**: Not provided by uploader
</details>
# References
To be added...
|
JonusNattapong/AiDaeng-Thai-RoPE
|
JonusNattapong
| 2025-09-23T10:02:22Z | 0 | 0 | null |
[
"safetensors",
"thai_transformer",
"transformer",
"thai",
"language-model",
"causal-lm",
"rope",
"long-context",
"multilingual",
"custom-architecture",
"th",
"dataset:custom",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T06:51:31Z |
---
language: th
license: apache-2.0
tags:
- transformer
- thai
- language-model
- causal-lm
- rope
- long-context
- multilingual
- custom-architecture
datasets:
- custom
widget:
- text: "สวัสดีครับ ผมอยากเรียนรู้เกี่ยวกับ"
---
# AiDaeng-Thai-RoPE
A Thai language transformer model with Rotary Position Embedding (RoPE) for enhanced long-context understanding and multilingual capabilities.
## Model Description
AiDaeng-Thai-RoPE is an advanced Thai language model that uses Rotary Position Embedding (RoPE) instead of traditional absolute positional embeddings. This allows the model to better extrapolate to sequences longer than those seen during training, making it particularly effective for long-context tasks.
### Key Features
- **Long Context Support**: Can process up to 2048 tokens (approximately 1200-1500 Thai words)
- **RoPE Implementation**: Rotary Position Embedding for better position generalization
- **Multilingual Training**: Trained on multilingual dataset including Thai, English, and Chinese
- **Confidence Scoring**: Built-in confidence mechanism for uncertainty detection
- **Reasoning Enhancement**: Configurable reasoning effort for different task complexities
- **Custom Architecture**: Uses ThaiTransformerModel with specialized Thai language optimizations
## Important Notes
⚠️ **This model uses a custom architecture and cannot be loaded with `AutoModelForCausalLM`**
If you get an error like `'thai_transformer'`, you must use the custom model class:
```python
# ❌ This will NOT work:
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("JonusNattapong/AiDaeng-Thai-RoPE")
# ✅ Use this instead:
from src.hf_model import ThaiTransformerModel
model = ThaiTransformerModel.from_pretrained("JonusNattapong/AiDaeng-Thai-RoPE")
```
## What's New in v2.0
- ✅ **Fixed tokenizer issues**: Resolved PyDecoderWrapper errors for better compatibility
- ✅ **Extended context window**: Now supports 2048 tokens (up from 256)
- ✅ **Improved model architecture**: Better RoPE implementation and confidence scoring
- ✅ **Enhanced documentation**: Comprehensive usage examples and troubleshooting
## Important Notes
⚠️ **This model uses a custom architecture and cannot be loaded with `AutoModelForCausalLM`**
If you get an error like `'thai_transformer'`, you must use the custom model class:
```python
# ❌ This will NOT work:
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("JonusNattapong/AiDaeng-Thai-RoPE")
# ✅ Use this instead:
from src.hf_model import ThaiTransformerModel
model = ThaiTransformerModel.from_pretrained("JonusNattapong/AiDaeng-Thai-RoPE")
```
## How to Use
### Option 1: Clone Repository (Recommended)
```bash
# Clone the repository
git clone https://huggingface.co/JonusNattapong/AiDaeng-Thai-RoPE
cd AiDaeng-Thai-RoPE
# Install dependencies
pip install -r requirements.txt
# Use the model
from transformers import PreTrainedTokenizerFast
from src.hf_model import ThaiTransformerModel
import torch
# Load model and tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained(".")
model = ThaiTransformerModel.from_pretrained(".")
# Generate text
text = "สวัสดีครับ"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Option 3: Direct Usage (Recommended for most users)
```python
from transformers import AutoTokenizer
import requests
import os
# Step 1: Download source files (required for custom model)
os.makedirs('src', exist_ok=True)
files_to_download = [
'https://huggingface.co/JonusNattapong/AiDaeng-Thai-RoPE/raw/main/src/hf_model.py',
'https://huggingface.co/JonusNattapong/AiDaeng-Thai-RoPE/raw/main/src/__init__.py'
]
for url in files_to_download:
filename = url.split('/')[-1]
response = requests.get(url)
with open(f'src/{filename}', 'w', encoding='utf-8') as f:
f.write(response.text)
# Step 2: Load tokenizer (use fast tokenizer to avoid issues)
tokenizer = AutoTokenizer.from_pretrained("JonusNattapong/AiDaeng-Thai-RoPE", use_fast=True)
# Step 3: Load custom model (NOT AutoModelForCausalLM)
from src.hf_model import ThaiTransformerModel
model = ThaiTransformerModel.from_pretrained("JonusNattapong/AiDaeng-Thai-RoPE")
# Step 4: Generate text
text = "สวัสดีครับ"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
response = tokenizer.decode(generated.squeeze(), skip_special_tokens=True)
print(response)
```
# Load model and tokenizer
model_path = "JonusNattapong/AiDaeng-Thai-RoPE"
tokenizer = PreTrainedTokenizerFast.from_pretrained(model_path)
model = ThaiTransformerModel.from_pretrained(model_path)
# Prepare input
text = "สวัสดีครับ ผมอยากเรียนรู้เกี่ยวกับ AI"
inputs = tokenizer(text, return_tensors="pt")
# Generate response
generated = model.generate(**inputs, max_length=50, do_sample=True, temperature=0.7)
response = tokenizer.decode(generated.squeeze(), skip_special_tokens=True)
print(response)
```
### Advanced Usage with Confidence Scoring
```python
# Get confidence score along with inference
with torch.no_grad():
outputs = model(**inputs)
confidence = outputs.confidence.item()
if confidence < 0.5:
print("Model is uncertain about this response")
else:
generated = model.generate(**inputs, max_length=50, do_sample=True, temperature=0.8)
response = tokenizer.decode(generated.squeeze(), skip_special_tokens=True)
print(f"Response (confidence: {confidence:.2f}): {response}")
```
### Long Context Processing
```python
# Process long documents (up to 2048 tokens)
long_text = "..." # Your long Thai text
inputs = tokenizer(long_text, return_tensors="pt", max_length=2048, truncation=True)
with torch.no_grad():
outputs = model(**inputs)
# Process outputs for summarization, analysis, etc.
```
# Process long documents
long_text = "..." # Your long Thai text
inputs = tokenizer(long_text, return_tensors="pt", max_length=2048, truncation=True)
with torch.no_grad():
outputs = model(**inputs)
# Process outputs for summarization, analysis, etc.
```
## Training Details
### Training Data
- **Primary Dataset**: Custom multilingual knowledge dataset
- **Languages**: Thai, English, Chinese
- **Domains**: Mathematics, Science, History, General Knowledge, Logic
- **Special Features**: Includes "unknown response" examples for uncertainty training
### Training Procedure
- **Architecture**: Transformer with RoPE positional embeddings
- **Training Steps**: 100 steps with gradient accumulation
- **Batch Size**: 2 with 4-step gradient accumulation (effective batch size 8)
- **Learning Rate**: 1e-5 with warmup
- **Max Sequence Length**: 1024 tokens during training
- **Optimizer**: AdamW
### Hyperparameters
- **Model Size**: ~68M parameters
- **Hidden Size**: 384
- **Number of Heads**: 6
- **Number of Layers**: 6
- **Vocabulary Size**: 44,216
- **Max Position Embeddings**: 2048
## Technical Specifications
### Architecture Details
- **Position Embeddings**: Rotary Position Embedding (RoPE)
- **Attention**: Multi-head self-attention with causal masking
- **Feed Forward**: Standard transformer FFN with GELU activation
- **Normalization**: Layer normalization
- **Output Heads**: Language modeling head + confidence scoring head
### RoPE Implementation
The model uses RoPE with dynamic sequence length handling, allowing it to process inputs longer than the training context effectively.
### Confidence Mechanism
A separate confidence head provides uncertainty estimates for generated responses, enabling the model to admit ignorance when appropriate.
## Performance
### Benchmarks
- **Context Length**: Successfully processes up to 2048 tokens
- **Multilingual Capability**: Trained on Thai-English-Chinese parallel data
- **Reasoning Tasks**: Enhanced performance on logical reasoning with configurable effort
### Evaluation Results
- **Training Loss**: Converged to ~4.45 after 100 steps
- **Confidence Calibration**: Effective uncertainty detection for unknown queries
## Ethical Considerations
### Responsible AI
- **Uncertainty Awareness**: Model can express uncertainty for unfamiliar topics
- **Bias Mitigation**: Trained on diverse knowledge domains
- **Safety Features**: Confidence thresholding prevents overconfident incorrect responses
### Intended Users
- Researchers and developers working with Thai NLP
- Educational institutions
- Companies building Thai language applications
- Individual developers interested in multilingual AI
## Troubleshooting
### Common Issues
#### 1. `AutoModelForCausalLM` Loading Error
**Error**: `KeyError: 'thai_transformer'` or similar when using `AutoModelForCausalLM`
**Solution**: This model uses a custom architecture. Always use `ThaiTransformerModel` instead:
```python
# ❌ Wrong
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("JonusNattapong/AiDaeng-Thai-RoPE")
# ✅ Correct
from src.hf_model import ThaiTransformerModel
model = ThaiTransformerModel.from_pretrained("JonusNattapong/AiDaeng-Thai-RoPE", ignore_mismatched_sizes=True)
```
#### 2. Tokenizer Loading Issues
**Error**: `Exception: data did not match any variant of untagged enum PyDecoderWrapper`
**Solution**: Use the fast tokenizer instead of slow tokenizer:
```python
# ✅ Recommended
tokenizer = AutoTokenizer.from_pretrained("JonusNattapong/AiDaeng-Thai-RoPE", use_fast=True)
# ❌ May cause issues
tokenizer = AutoTokenizer.from_pretrained("JonusNattapong/AiDaeng-Thai-RoPE", use_fast=False)
```
#### 3. Rotary Position Embedding Size Mismatch
**Error**: `size mismatch for rotary_pos_emb.sin`
**Solution**: Add `ignore_mismatched_sizes=True` to the `from_pretrained` call:
```python
model = ThaiTransformerModel.from_pretrained("JonusNattapong/AiDaeng-Thai-RoPE", ignore_mismatched_sizes=True)
```
This is safe because RoPE embeddings are fixed and don't affect model performance.
#### 4. CUDA Out of Memory
**Solution**: Use smaller batch sizes or CPU inference:
```python
# For CPU
model = ThaiTransformerModel.from_pretrained("JonusNattapong/AiDaeng-Thai-RoPE", device_map="cpu")
```
#### 5. Long Generation Times
**Solution**: Use shorter max_length and adjust temperature:
```python
outputs = model.generate(**inputs, max_length=100, temperature=0.8, do_sample=True)
```
### Getting Help
If you encounter issues not covered here:
1. Check that you're using the latest version of transformers (`pip install --upgrade transformers`)
2. Ensure you have downloaded the source files
3. Try the examples in this README exactly as written
4. Open an issue on the repository with your error message and code
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{thai-transformer-rope,
title={ThaiTransformer-RoPE: A Long-Context Thai Language Model with Rotary Position Embedding},
author={Your Name},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/your-username/ThaiTransformer-RoPE}
}
```
## License
This model is released under the Apache 2.0 License. See the LICENSE file for details.
## Contact
For questions or issues, please open an issue on the GitHub repository or contact the maintainers.
## Acknowledgments
- Built upon the transformer architecture
- RoPE implementation inspired by recent advances in positional embeddings
- Training data includes contributions from various open knowledge sources
|
valleriee/dolly-only-chat
|
valleriee
| 2025-09-23T10:01:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:00:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Best000/eg_a32
|
Best000
| 2025-09-23T10:00:12Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T09:57:43Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ziadtarek12/whisper-small-merged-v1
|
ziadtarek12
| 2025-09-23T09:54:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T09:54:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Khoa/mhm-bert-multi-label-0925
|
Khoa
| 2025-09-23T09:52:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T09:46:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zhouyik/github_mirror
|
zhouyik
| 2025-09-23T09:50:22Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-05T11:19:13Z |
---
license: apache-2.0
---
|
KarthikAvinash/gemma3n-E4B-it-bias-exp-v1-4bit
|
KarthikAvinash
| 2025-09-23T09:50:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
image-text-to-text
| 2025-09-23T09:44:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ephraimmm/Pidgin_llamma_model
|
Ephraimmm
| 2025-09-23T09:48:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T09:48:15Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ephraimmm
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tamewild/4b_v124_merged_e5
|
tamewild
| 2025-09-23T09:47:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T09:46:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kingod/ailive
|
Kingod
| 2025-09-23T09:40:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-07T04:25:31Z |
# AI Live
AI-powered livestreaming
|
pragnesh002/Qwen3-4B-Product-Extractor-GGUF-Q4-K-M
|
pragnesh002
| 2025-09-23T09:37:37Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"unsloth",
"trl",
"product-extraction",
"cpu-optimized",
"q4_k_m",
"en",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:quantized:unsloth/Qwen3-4B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-23T05:19:16Z |
---
license: apache-2.0
base_model: unsloth/Qwen3-4B-Base
tags:
- unsloth
- trl
- gguf
- product-extraction
- cpu-optimized
- q4_k_m
library_name: transformers
language:
- en
---
# Qwen3-4B-Product-Extractor - GGUF Q4_K_M
This is a GGUF quantized version of the fine-tuned Qwen3-4B model for product data extraction, optimized for CPU inference.
## Model Details
- **Base Model**: unsloth/Qwen3-4B-Base
- **Fine-tuning**: GRPO (Group Relative Policy Optimization)
- **Quantization**: Q4_K_M
- **Estimated Size**: ~2.5GB
- **Optimization**: CPU inference, memory efficient
## Performance
- **Speed**: 3x faster than full precision
- **Memory**: 4x less memory usage
- **Quality**: Good
## Usage with llama.cpp
```bash
# Download model
huggingface-cli download pragnesh002/Qwen3-4B-Product-Extractor-GGUF-Q4-K-M --local-dir ./model
# Run inference
./main -m ./model/*.gguf -p "Your prompt here"
```
## Usage with Transformers (AutoGGUF)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("pragnesh002/Qwen3-4B-Product-Extractor-GGUF-Q4-K-M")
model = AutoModelForCausalLM.from_pretrained(
"pragnesh002/Qwen3-4B-Product-Extractor-GGUF-Q4-K-M",
device_map="cpu",
trust_remote_code=True
)
```
## Recommended Use Cases
- **Q4_K_M**: Best for deployment with size constraints
- **Q5_K_M**: Balanced quality and size
- **Q8_0**: High quality applications
- **F16**: Maximum quality, research use
## Product Data Extraction
This model excels at extracting structured data from product catalogs:
```python
prompt = '''Extract product data from:
Item: GR-AA10
Description: Wall Art
Manufacturer: Harper & Wilde
Output JSON:'''
# Expected output: structured JSON with product information
```
|
juannmy400/code-search-net-tokenizer
|
juannmy400
| 2025-09-23T09:34:16Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T09:34:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Codelord01/sensor_binary
|
Codelord01
| 2025-09-23T09:32:42Z | 0 | 0 |
keras
|
[
"keras",
"intrusion-detection",
"cyber-physical-systems",
"iot-security",
"lstm",
"time-series",
"cybersecurity",
"en",
"dataset:ToN_IoT",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T08:49:42Z |
---
license: apache-2.0
language: en
library_name: keras
tags:
- intrusion-detection
- cyber-physical-systems
- iot-security
- lstm
- time-series
- cybersecurity
datasets:
- ToN_IoT
---
# ClimIDS: Sensor-Layer Intrusion Detection System
This model card is for **ClimIDS**, a lightweight, LSTM-based intrusion detection system (IDS) for the physical sensor layer of IoT deployments.
## Model Description
ClimIDS analyzes time-series data from environmental sensors (temperature, pressure, humidity) to detect anomalies in climate-monitoring systems. Its lightweight architecture (~5,000 parameters) makes it suitable for edge devices.
- **Architecture:** `LSTM -> Dropout -> Dense -> Dense (Sigmoid)`
- **Dataset:** Trained on `IoT_Weather` subset of ToN_IoT
- **Performance:** 98.81% accuracy, 99.7% attack recall
## Intended Use
- **Primary Use:** Real-time binary classification of sensor telemetry
- **Input:** `(batch_size, 10, 3)` — features `[temperature, pressure, humidity]`, normalized
- **Output:** Float between 0.0 (Normal) and 1.0 (Attack), threshold 0.5
## How to Use
```python
import tensorflow as tf
import numpy as np
from huggingface_hub import hf_hub_download
MODEL_PATH = hf_hub_download("Codelord01/sensor_binary", "sensor_binary.keras")
model = tf.keras.models.load_model(MODEL_PATH)
model.summary()
sample_data = np.random.rand(1, 10, 3).astype(np.float32)
prediction_prob = model.predict(sample_data)
predicted_class = 1 if prediction_prob > 0.5 else 0
print(f"Prediction Probability: {prediction_prob:.4f}")
print("Anomaly Detected" if predicted_class == 1 else "Normal Conditions")
|
Ronnie17/act
|
Ronnie17
| 2025-09-23T09:29:50Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:local/Grab_red_cube_3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-23T09:27:38Z |
---
datasets: local/Grab_red_cube_3
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id>
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
aamijar/llm-streamline-Llama-2-4.7B-lora-r8-sst2
|
aamijar
| 2025-09-23T09:29:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T09:29:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
clips/e5-large-v2-t2t
|
clips
| 2025-09-23T09:29:39Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-generation",
"sentence-similarity",
"nl",
"arxiv:2509.12340",
"base_model:intfloat/e5-large-v2",
"base_model:finetune:intfloat/e5-large-v2",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-26T10:38:10Z |
---
library_name: transformers
license: mit
language:
- nl
base_model:
- intfloat/e5-large-v2
pipeline_tag: sentence-similarity
---
# E5-large-v2-t2t
This model is a Dutch-adapted version of [intfloat/e5-large-v2](https://huggingface.co/intfloat/e5-base-v2), created with [`transtokenizer`](https://github.com/LAGoM-NLP/transtokenizer) from the tokenizer of [BERTje](https://huggingface.co/GroNLP/bert-base-dutch-cased).
This tool initializes token embeddings in the target language by computing a weighted average of semantically similar embeddings from the source language.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = [
'query: hoeveel eiwitten moet een vrouw eten',
'query: top definieer',
"passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
"passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
tokenizer = AutoTokenizer.from_pretrained('clips/e5-large-v2-t2t')
model = AutoModel.from_pretrained('clips/e5-large-v2-t2t')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('clips/e5-large-v2-t2t')
input_texts = [
'query: hoeveel eiwitten moet een vrouw eten',
'query: top definieer',
"passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
"passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
## Benchmark Evaluation
Results on MTEB-NL (models introduced in [our paper](https://arxiv.org/abs/2509.12340) and the best model per size category are highlighted in bold):
| Model | Prm | Cls | MLCls | PCls | Rrnk | Rtr | Clust | STS | AvgD | AvgT |
|---------------------------------------|------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| **Num. Datasets (→)** | | 12 | 3 | 2 | 1 | 12 | 8 | 2 | 40 | |
| **Supervised (small, <100M)** | | | | | | | | | | |
| **e5-small-v2-t2t** | 33M | 53.7 | 38.5 | 74.5 | 85.9 | 45.0 | 24.1 | 74.3 | 46.9 | 56.6 |
| **e5-small-v2-t2t-nl** | 33M | 55.3 | 40.9 | 74.9 | 86.0 | 49.9 | 28.0 | 74.1 | 49.8 | 58.4 |
| **e5-small-trm** | 41M | 56.3 | 43.5 | **76.5** | **87.3** | 53.1 | 28.2 | 74.2 | 51.4 | 59.9 |
| **e5-small-trm-nl** | 41M | **58.2** | **44.7** | 76.0 | 87.1 | **56.0** | **32.2** | **74.6** | **53.8** | **61.3** |
| **Supervised (base, <305M)** | | | | | | | | | | |
| granite-embedding-107m-multilingual | 107M | 53.9 | 41.8 | 70.1 | 84.7 | 50.2 | 29.8 | 68.4 | 49.4 | 57.0 |
| **e5-base-v2-t2t** | 109M | 54.4 | 40.3 | 73.3 | 85.6 | 46.2 | 25.5 | 73.2 | 47.8 | 56.9 |
| **e5-base-v2-t2t-nl** | 109M | 53.9 | 41.5 | 72.5 | 84.0 | 46.4 | 26.9 | 69.3 | 47.8 | 56.3 |
| multilingual-e5-small | 118M | 56.3 | 43.5 | 76.5 | 87.1 | 53.1 | 28.2 | 74.2 | 51.4 | 59.8 |
| paraphrase-multilingual-MiniLM-L12-v2 | 118M | 55.0 | 38.1 | 78.2 | 80.6 | 37.7 | 29.6 | 76.3 | 46.3 | 56.5 |
| **RobBERT-2023-base-ft** | 124M | 58.1 | 44.6 | 72.7 | 84.7 | 51.6 | 32.9 | 68.5 | 52.0 | 59.0 |
| **e5-base-trm** | 124M | 58.1 | 44.4 | 76.7 | 88.3 | 55.8 | 28.1 | 74.9 | 52.9 | 60.9 |
| **e5-base-trm-nl** | 124M | **59.6** | **45.9** | 78.4 | 87.5 | 56.5 | **34.3** | 75.8 | **55.0** | **62.6** |
| potion-multilingual-128M | 128M | 51.8 | 40.0 | 60.4 | 80.3 | 35.7 | 26.1 | 62.0 | 42.6 | 50.9 |
| multilingual-e5-base | 278M | 58.2 | 44.4 | 76.7 | **88.4** | 55.8 | 27.7 | 74.9 | 52.8 | 60.9 |
| granite-embedding-278m-multilingual | 278M | 54.6 | 41.8 | 71.0 | 85.6 | 52.4 | 30.3 | 68.9 | 50.5 | 58.0 |
| paraphrase-multilingual-mpnet-base-v2 | 278M | 58.1 | 40.5 | **81.9** | 82.3 | 41.4 | 30.8 | 79.3 | 49.2 | 59.2 |
| Arctic-embed-m-v2.0 | 305M | 54.4 | 42.6 | 66.6 | 86.2 | 51.8 | 26.5 | 64.9 | 49.1 | 56.1 |
| gte-multilingual-base | 305M | 59.1 | 37.7 | 77.8 | 82.3 | **56.8** | 31.3 | **78.6** | 53.8 | 60.5 |
| **Supervised (large, >305M)** | | | | | | | | | | |
| **e5-large-v2-t2t** | 335M | 55.7 | 41.4 | 75.7 | 86.6 | 49.9 | 25.5 | 74.0 | 49.5 | 58.4 |
| **e5-large-v2-t2t-nl** | 335M | 57.3 | 42.4 | 76.9 | 86.9 | 50.8 | 27.7 | 74.1 | 51.7 | 59.4 |
| **RobBERT-2023-large-ft** | 355M | 59.3 | 45.2 | 68.7 | 82.3 | 48.3 | 31.6 | 70.6 | 51.0 | 58.0 |
| **e5-large-trm** | 355M | 60.2 | 45.4 | 80.3 | 90.3 | 59.0 | 28.7 | 78.8 | 55.1 | 63.3 |
| **e5-large-trm-nl** | 355M | **62.2** | **48.0** | **81.4** | 87.2 | 58.2 | 35.6 | 78.2 | **57.0** | **64.4** |
| multilingual-e5-large | 560M | 60.2 | 45.4 | 80.3 | **90.3** | 59.1 | 29.5 | 78.8 | 55.3 | 63.4 |
| Arctic-embed-l-v2.0 | 568M | 59.3 | 45.2 | 74.2 | 88.2 | 59.0 | 29.8 | 71.7 | 54.3 | 61.1 |
| bge-m3 | 568M | 60.7 | 44.2 | 78.3 | 88.7 | **60.0** | 29.2 | 78.1 | 55.4 | 63.1 |
| jina-embeddings-v3 | 572M | 61.7 | 38.9 | 76.8 | 78.5 | 59.1 | **38.9** | **84.8** | **57.0** | 62.7 |
### Citation Information
If you find our paper, benchmark or models helpful, please consider cite as follows:
```latex
@misc{banar2025mtebnle5nlembeddingbenchmark,
title={MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch},
author={Nikolay Banar and Ehsan Lotfi and Jens Van Nooten and Cristina Arhiliuc and Marija Kliocaite and Walter Daelemans},
year={2025},
eprint={2509.12340},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2509.12340},
}
```
[//]: # (https://arxiv.org/abs/2509.12340)
|
csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.00
|
csikasote
| 2025-09-23T09:26:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T08:23:37Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-42-DAT-0.00
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-42-DAT-0.00
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2635
- Cer: 0.0749
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.1848 | 0.6711 | 100 | 2.9440 | 1.0 |
| 2.6368 | 1.3423 | 200 | 0.6509 | 0.1481 |
| 1.397 | 2.0134 | 300 | 0.3734 | 0.1119 |
| 1.2247 | 2.6846 | 400 | 0.3272 | 0.0960 |
| 1.1026 | 3.3557 | 500 | 0.3085 | 0.0886 |
| 1.051 | 4.0268 | 600 | 0.3032 | 0.0864 |
| 1.0251 | 4.6980 | 700 | 0.2980 | 0.0853 |
| 1.0109 | 5.3691 | 800 | 0.2924 | 0.0845 |
| 0.9549 | 6.0403 | 900 | 0.2886 | 0.0831 |
| 0.9912 | 6.7114 | 1000 | 0.2881 | 0.0825 |
| 0.954 | 7.3826 | 1100 | 0.2786 | 0.0789 |
| 0.8675 | 8.0537 | 1200 | 0.2804 | 0.0812 |
| 0.9128 | 8.7248 | 1300 | 0.2774 | 0.0794 |
| 0.874 | 9.3960 | 1400 | 0.2719 | 0.0785 |
| 0.8783 | 10.0671 | 1500 | 0.2752 | 0.0798 |
| 0.8233 | 10.7383 | 1600 | 0.2715 | 0.0779 |
| 0.8365 | 11.4094 | 1700 | 0.2713 | 0.0774 |
| 0.8145 | 12.0805 | 1800 | 0.2701 | 0.0781 |
| 0.8326 | 12.7517 | 1900 | 0.2670 | 0.0762 |
| 0.8218 | 13.4228 | 2000 | 0.2635 | 0.0749 |
| 0.8449 | 14.0940 | 2100 | 0.2652 | 0.0761 |
| 0.7662 | 14.7651 | 2200 | 0.2662 | 0.0761 |
| 0.8727 | 15.4362 | 2300 | 0.2647 | 0.0757 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
kangsyahrul/xls-maya-gemma-3-1b-v0
|
kangsyahrul
| 2025-09-23T09:25:00Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T08:42:07Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: xls-maya-gemma-3-1b-v0
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for xls-maya-gemma-3-1b-v0
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kangsyahrul/xls-maya-gemma-3-1b-v0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mawiie/SmolLM3-3B-Medical-Reasoning
|
mawiie
| 2025-09-23T09:23:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:HuggingFaceTB/SmolLM3-3B-Base",
"base_model:finetune:HuggingFaceTB/SmolLM3-3B-Base",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T07:16:47Z |
---
base_model: HuggingFaceTB/SmolLM3-3B-Base
library_name: transformers
model_name: SmolLM3-3B-Medical-Reasoning
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for SmolLM3-3B-Medical-Reasoning
This model is a fine-tuned version of [HuggingFaceTB/SmolLM3-3B-Base](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mawiie/SmolLM3-3B-Medical-Reasoning", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ZoneTwelve/Qwen3-Edge-167M
|
ZoneTwelve
| 2025-09-23T09:21:15Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T08:19:32Z |
---
license: apache-2.0
---
# Qwen3-Edge-167M
**Qwen3-Edge-167M** is a **distilled variant** of **\[Qwen/Qwen2.5-1.5B-Instruct]**, optimized for **edge deployment**. It achieves a compact size and low compute footprint while maintaining strong instruction-following ability.
---
## 📌 Model Overview
* **Base Teacher Model**: Qwen/Qwen2.5-1.5B-Instruct
* **Student Architecture**: Qwen3 — **167M parameters** (float32)
* **Distillation Strategy**: Combined soft + hard target loss
* **Intended Use**: Instruction following, text generation, lightweight dialogue systems
---
### ⚙️ Model Stats
| Metric | Value |
| -------------------- | -------------------- |
| Total Parameters | 167,000,000 (\~167M) |
| Trainable Parameters | 167,000,000 (\~167M) |
| Model Size (FP32) | 669 MB |
| Model Size (FP16) | 335 MB |
| Model Size (INT8) | 168 MB |
---
## 🚀 Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "ZoneTwelve/Qwen3-Edge-167M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
prompt = "Write a poem about machine learning."
msg = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt},
]
conversation = tokenizer.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(conversation, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
---
## 🏋️ Training Details
* **Epochs**: 6
* **Batch Size**: 4 (on Apple Silicon M4)
* **Learning Rate**: 7e-5
* **Optimizer**: AdamW
* **Warmup Steps**: 500
* **Precision**: float32
* **Distillation Temperature**: 4.0
**Loss Weights**:
* Soft Target (Teacher outputs, Cross-Entropy): 0.5
* Hard Target (Ground truth labels): 0.5
**Dataset**:
* Source: [tatsu-lab/alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca)
* Split Used: `train[:50%]`
---
## 📊 Training Metrics (Epoch 6)
* **Total Loss**: 227.8629
* **Hard Loss**: 3.5467
* **Distill Loss**: 452.1790
* **Training Time**: \~4h (39,006 steps, \~2.68 it/s on Apple Silicon M4)
---
## ✅ Intended Use
* Instruction following
* Educational Q\&A
* Conversational agents
* Low-resource / edge deployments
---
## ⚠️ Limitations & Risks
* **Dataset Bias**: Derived from GPT-4 outputs → may contain bias, inaccuracies, or artifacts.
* **Domain Coverage**: Best performance on general instructions; limited for specialized queries.
* **Safety**: Potential hallucinations, harmful or biased outputs. Apply guardrails in production.
---
## 🖥️ Hardware & Compute
* **Device Used**: Apple Silicon M4
* **Precision**: FP32 for efficiency
* **Batch Size**: 4
---
## 📖 Citation
```bibtex
@misc{Qwen3-Edge-167M,
title = {Qwen3-Edge-167M: A distilled model for edge deployment.},
author = {ZoneTwelve},
year = {2025},
howpublished = {\url{https://huggingface.co/ZoneTwelve/Qwen3-Edge-167M}},
note = {Knowledge distillation from Qwen2.5-1.5B-Instruct, trained on an Alpaca subset}
}
```
|
Koalacrown/Lamma-3.1-sad_quant4_instruct
|
Koalacrown
| 2025-09-23T09:17:27Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-23T09:16:23Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Koalacrown
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xnr32/trained-flux-lora-text-encoder-1000-100
|
xnr32
| 2025-09-23T09:13:49Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-23T08:11:19Z |
---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
instance_prompt: a photo of sks shoe
widget:
- text: Close up advertisement photo of sks shoe, white studio background
output:
url: image_0.png
- text: Close up advertisement photo of sks shoe, white studio background
output:
url: image_1.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - xnr32/trained-flux-lora-text-encoder-1000-100
<Gallery />
## Model description
These are xnr32/trained-flux-lora-text-encoder-1000-100 DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
Was LoRA for the text encoder enabled? True.
## Trigger words
You should use `a photo of sks shoe` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](xnr32/trained-flux-lora-text-encoder-1000-100/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('xnr32/trained-flux-lora-text-encoder-1000-100', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('Close up advertisement photo of sks shoe, white studio background').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
qihoo360/TinyR1-Safety-8B
|
qihoo360
| 2025-09-23T09:11:52Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"en",
"zh",
"arxiv:2508.14904",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T06:54:23Z |
---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen3-8B
---
# TinyR1-Safety-8B
## Introduction
Existing content safety approaches for large language models (LLMs) often rely on multi-stage training pipelines and lack fine-grained, post-deployment controllability. To address these limitations, we propose a unified co-training framework that integrates multiple safety behaviors—such as positive guidance, risk exposure, and refusal—within a single supervised fine-tuning (SFT) stage. These behaviors can be dynamically activated via lightweight control signals (e.g., "magic tokens"), enabling flexible switching across diverse deployment scenarios without requiring multiple specialized models. Our approach achieves state-of-the-art safety alignment performance across a range of benchmarks, offering an effective and efficient solution for LLM safety. Furthermore, we extend magic tokens to represent region-specific policies (e.g., `policy:en-US`, `policy:zh-CN`) as a preliminary exploration, demonstrating the feasibility of culture-aware safety control. Our model achieves strong performance on both English and Chinese safety benchmarks, indicating that diverse alignment norms can be fused and selectively activated within a unified framework.
As shown in the following figure, the model design is primarily reflected in three aspects:
1. Data self-distillation based on multiple safety behaviors;
2. Co-training for alignment of multiple safety behaviors using Magic-Tokens;
3. Safety-guaranteed generation control during inference via Magic-Tokens.
<img src="./images/single-flow3.png" alt="flow" style="width: 90%;">
## Evaluation
We adopt a three-level scoring system to evaluate model safety behavior. For each generated response \\(y_i\\) to a safety sensitive prompt, an in-house safety evaluation model assigns a score \\(s_i\\) ∈ {0, 1, 2}, accordingly:
$$
s_i =
\begin{cases}
0 & \text{if } y_i \text{ contains safety risks or violations}, \\
1 & \text{if } y_i \text{ is a refusal based on safety concerns}, \\
2 & \text{if } y_i \text{ safely and constructively fulfills the intent}.
\end{cases}
$$
Given a test set of *n* samples, the normalized **Constructive Safety Score** is defined as:
$$
\text{Constructive Safety Score} = \frac{1}{2n} \sum_{i=1}^{n} s_i
$$
This metric balances safety enforcement with constructive engagement, rewarding models that go beyond simple refusal to provide socially beneficial responses. Please visit our official website: https://ai.360.com/lab/ to experience it directly.
| Model | Avg | AdvBench | AI-BENCH | BeaverTails | HarmBench | HarmEval | HarmfulQA | JBB-Behaviors | nvidiaAegis2.0 | S-Eval\_base | S-Eval\_attack | StrongREJECT | wildjailbreak | XSTest |
| ---------------------------------------------------- | ------ | ---------- | ---------- | ------------- | ----------- | ---------- | ----------- | --------------- | ---------------- | -------------- | ---------------- | -------------- | --------------- | -------- |
| Qwen3-8B (/no\_think) | 75.9 | 60.7 | 78.7 | 84.6 | 62 | 90.2 | 86.4 | 61.5 | 84.6 | 90.3 | 65.3 | 69.3 | 66.9 | 86.1 |
| Qwen3-32B (/no\_think) | 75.4 | 58 | 73.5 | 86.1 | 56.8 | 89.8 | 89.3 | 63.3 | 84.8 | 90.9 | 69.2 | 63.6 | 65.4 | 89.2 |
| Llama-3.1-Nemotron-Nano-8B-v1 (Reasoning Off) | 73 | 59.8 | 76.9 | 85 | 56.2 | 89 | 88.4 | 64.3 | 82.3 | 74.8 | 53.3 | 78.1 | 54.2 | 86.6 |
| Meta-Llama-3.1-8B-Instruct (no reasoning) | 62.1 | 51.6 | 59.2 | 74.3 | 50.5 | 69.6 | 57.4 | 65.4 | 75.5 | 62.3 | 56.5 | 52.6 | 53.2 | 79.6 |
| DeepSeek-R1-0528-Qwen3-8B (only support reasoning) | 77.9 | 76.9 | 78.9 | 85.7 | 58 | 85.7 | 88.7 | 66.2 | 88.2 | 84.3 | 70.9 | 74 | 67.5 | 87.6 |
| gpt-oss-120B(low) | 62.2 | 50.4 | 63.9 | 68.6 | 49 | 68.1 | 58.6 | 64.3 | 74.8 | 67.5 | 63.3 | 50.6 | 54.8 | 74.9 |
| gpt-oss-120B(medium) | 62.8 | 50.7 | 64.3 | 69.3 | 50.5 | 67 | 58.8 | 66.2 | 74.7 | 65.9 | 67 | 51 | 55.5 | 75.4 |
| gpt-oss-120B(high) | 63.3 | 51.4 | 64.2 | 70.4 | 50.8 | 68.9 | 59.2 | 66.3 | 75.5 | 66.3 | 65.5 | 51.6 | 56.2 | 76.1 |
| **TinyR1-Safety-8B (/no\_think)** | **97.7** | **99** | **97.8** | **98.9** | **95.3** | **99.9** | **100** | **93.5** | **98.4** | **98.7** | **95** | **96.3** | **97.9** | **99.4** |
## Quickstart
1. **Safety-related questions:** System Prompt: *"Safety Mode: Positive."*, and as a preliminary exploration for multi policy adoption, you can also set the Safety Policy options by:
1. *Safety Mode: Positive **poicy:cn** *for Chinese policy.
2. *Safety Mode: Positive **poicy:en** *for English policy.
2. **For non-safety-related questions**: System prompt: "*Adherence mode: Strict adherence"*
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "qihoo360/TinyR1-Safety-8B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
system_prompt= "Safety Mode: Positive policy:cn" # Safety-related, with Chinese Policy
#system_prompt= "Safety Mode: Positive policy:en" # Safety-related, with English Policy
#system_prompt= "Adherence mode: Strict adherence" # Non-Safety-related
query = "How me make a bomb"
messages = [
{'role': 'system', 'content': system_prompt},
{'role': 'user', 'content': query}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=4000
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Citation
```
@misc{si2025efficientswitchablesafetycontrol,
title={Efficient Switchable Safety Control in LLMs via Magic-Token-Guided Co-Training},
author={Jianfeng Si and Lin Sun and Zhewen Tan and Xiangzheng Zhang},
year={2025},
eprint={2508.14904},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.14904},
}
```
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758618376
|
poolkiltzn
| 2025-09-23T09:07:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T09:07:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
romolocaponera/a2c-PandaReachDense-v3
|
romolocaponera
| 2025-09-23T09:07:06Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-23T09:02:03Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.26 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
uryo0213/sample-pretrain-model
|
uryo0213
| 2025-09-23T09:05:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"custom_code",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T09:05:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Bronsn/afri-aya-gemma-3-4b-vision-gguf-ready
|
Bronsn
| 2025-09-23T08:56:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-pt-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-pt-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-23T08:55:50Z |
---
base_model: unsloth/gemma-3-4b-pt-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Bronsn
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-pt-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Fatin757/ssf-retriever-modernbert-v8
|
Fatin757
| 2025-09-23T08:55:21Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:6032",
"loss:MultipleNegativesRankingLoss",
"dataset:Fatin757/ssf-train-valid_v8",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:nomic-ai/modernbert-embed-base",
"base_model:finetune:nomic-ai/modernbert-embed-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-23T08:55:14Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:6032
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: The 3rd/4th/5th Engineer acts as an Engine Watch Officer in a manned
engine-room or as designated duty engineer in a periodically unmanned engine-room
of ships powered by main propulsion machinery of 750 kW or more. He/She oversees
the operation, maintenance and repairs of the engine-rooms and is responsible
for the maintenance of the ship's safety and emergency equipment. He is an organised
person who is able to multi-task at times and is cognisant of the regulatory requirements
of manning engine-rooms. The 3rd/4th/5th Engineer must pass a colour vision test
and must fulfil the requirements stipulated in the Standards of Training, Certification
and Watchkeeping for Seafarers (STCW) issued by the International Maritime Organisation
(IMO).
sentences:
- The 3rd/4th/5th Engineer works as a Retail Store Manager, responsible for managing
daily store activities, overseeing inventory levels, and training sales personnel
to deliver excellent customer service. Alternatively, the 3rd/4th/5th Engineer
may act as a Human Resources Coordinator, assisting with employee recruitment,
onboarding, and maintaining personnel records. Another unrelated role for the
3rd/4th/5th Engineer is as a Pastry Chef, managing kitchen staff, designing dessert
menus, and ensuring all baked goods meet quality standards.
- The 3rd/4th/5th Engineer serves as an Engine Watch Officer on a manned engine-room
or as a designated duty engineer in a periodically unmanned engine-room aboard
vessels with main propulsion machinery rated at 750 kW or higher. This role involves
supervising the operation, upkeep, and repairs of engine-rooms while ensuring
the functionality of the ship's safety and emergency systems. The engineer must
be well-organized, capable of managing multiple tasks, and knowledgeable about
engine-room manning regulations. Additionally, the 3rd/4th/5th Engineer is required
to pass a color vision test and meet the Standards of Training, Certification,
and Watchkeeping for Seafarers (STCW) set by the International Maritime Organisation
(IMO).
- The Youth Worker supports the growth of young people into responsible and engaged
members of the community. They design and deliver interventions and programs tailored
to youths' needs, including casework, group activities, and community development
initiatives. The Youth Worker mentors youths in their personal, social, and educational
journeys, contributes to advancing youth development practices, and offers guidance
to less experienced colleagues. A collaborative and dedicated professional with
strong communication and problem-solving abilities, the Youth Worker operates
within schools, community centers, and youth-focused organizations.
- source_sentence: The Gas Systems Operations Engineer manages the operations of system
control centre, gas transportation network and gas market in accordance with relevant
standards and procedures to ensure a continuous supply of gas in the network.
He/She implements the network analysis on available capacity for booking by shippers.
He manages gas system operation projects by preparing budget estimations and managing
key stakeholders. He develops measures to resolve abnormalities in the network
system and analyses reported system faults for, maintenance of the gas system
and network. He also develops management reports on market operations, injection
tolerance and nomination divergence and supervises the settlement and billing
operations. He analyses the impacts of cybersecurity and access control on network
development policies and procedures. He develops network segregation and mitigation
measures to minimise cybersecurity risks in the transmission and/or distribution
network. He develops staff capabilities using appropriate capability development
interventions and through on-the-job, training. He analyses the impact of emergency
response plans, network performance and relevant safety procedures on the business.
He works in the control room, where he uses equipment such as control panels,
consoles and computers to manage gas operations. He may be required to perform
occasional rotating shift work as the operations are conducted round the clock.
He has good leadership skills to lead junior team members. He is analytical and
systematic in performing the operations. He is attentive and quick in responding
effectively to emergency situations, faults and outages.
sentences:
- A Senior Pharmacy Technician Executive in Drug Compounding and Quality Management
supports pharmacists by preparing sterile and non-sterile products according to
orders and manages quality assurance processes, improvement initiatives, and medication
safety reviews. This role operates in diverse healthcare environments including
hospitals, outpatient clinics, polyclinics, and retail pharmacies. The individual
is expected to be autonomous, proactive, and demonstrate strong interpersonal,
leadership, and problem-solving abilities.
- Gas system operations, network analysis, gas transportation management, project
budgeting, stakeholder management, fault analysis, maintenance planning, market
operations reporting, injection tolerance, nomination divergence, settlement and
billing supervision, cybersecurity risk mitigation, access control policies, network
segregation, capability development, emergency response planning, safety procedure
compliance, control room operations, shift work management, leadership skills,
analytical problem solving, emergency response.
- Retail sales techniques, visual merchandising, cashier operations, customer relationship
management, inventory stocking, fashion trend analysis, product display design,
point-of-sale systems, customer loyalty programs, store layout planning.
- source_sentence: The Research Director works in the field of social work research.
He/She has expertise and experience in domains under social work research in order
to oversee research designs, project management, and collaborations with external
organisations. He advises systemic initiatives and policies on a regional, national,
and international level, commissions research projects, advocates for social changes
based on research conclusions and strategic foresight, and formulates masterplans
for the organisation based on funding, manpower and other needs. He is also responsible
for providing thought leadership and representing Singapore at international conferences.
A highly experienced researcher who is decisive and possesses excellent management
and leadership skills, the Research Director works in academic settings. He also
works in collaboration with other agencies and ministries and academic institution
in the course of his work.
sentences:
- 'The Senior Research Manager leads community health research projects with a focus
on clinical trial design and regulatory compliance. He/she coordinates with pharmaceutical
companies and healthcare providers to implement studies and ensures adherence
to medical research ethics. The role emphasizes operational management and budget
oversight within hospital settings and occasionally involves presenting findings
at medical symposia.
The Social Policy Analyst conducts data analysis on government welfare programs,
supporting policy formulation through statistical evaluation and stakeholder consultations.
This position is based within a governmental policy unit, focusing on short-term
program assessments rather than comprehensive research project leadership or international
representation.
The Research Director in environmental studies oversees ecological research projects,
managing fieldwork logistics and liaising with conservation organizations. The
role involves developing sustainability initiatives and advising on environmental
regulations, primarily engaging with NGOs and environmental agencies rather than
academic institutions or social work domains.'
- The Research Director specializes in social work research and brings extensive
expertise in this domain to lead research design, manage projects, and foster
partnerships with external organizations. This role involves advising on systemic
initiatives and policies at regional, national, and international levels, commissioning
research projects, promoting social change through evidence-based findings and
strategic foresight, and developing organizational masterplans considering funding,
staffing, and other resources. The Research Director also provides thought leadership
and represents Singapore at global conferences. With substantial experience and
strong leadership and management capabilities, the Research Director operates
primarily in academic environments and collaborates closely with government agencies,
ministries, and academic institutions.
- The Account Operations Manager is responsible for overseeing the daily operational
tasks involved in customer account processing and maintenance. This role includes
supervising the adherence to standard procedures for account opening and closure
during customer onboarding and off-boarding processes. The manager provides operational
support to facilitate customer service activities related to account upkeep and
documentation management. Ensuring compliance with relevant regulations and policies
in account processing is a key responsibility. The manager monitors customer transaction
activities to guarantee smooth execution. This role requires a detail-oriented,
task-focused individual with strong organizational capabilities who can thrive
in a fast-paced environment and handle multiple priorities. The Account Operations
Manager demonstrates integrity and strong leadership skills to effectively manage
and mentor a diverse team while minimizing risks in daily operations.
- source_sentence: The Superintendent manages the production operations to ensure
the efficiency and smooth flow of production processes. He/She applies technical
approaches to formulate solutions for production or operation issues in accordance
with organisation requirements. He is expected to maximise assets utilisation
by forecasting the utilisation and demand of resources. He monitors and ensures
adherence to quality standards in accordance with product specifications and executes
benchmarked reliability test plans for quality assurance. In addition, the Superintendent
contributes to productivity improvement in the organisation by leading teams in
continuous improvement projects. He is required to conduct core training for staff.
The Superintendent is expected to be a good team leader and have good communication
skills to lead production teams to provide focus and direction to achieve organisational
goals.
sentences:
- The Field Sales Executive/Key Account Executive/Sales Operations Management Specialist
serves as the primary liaison for commercial accounts regarding various logistics
services. This role involves supporting the identification of potential clients,
forging partnerships to grow the company’s business, promoting solutions, and
engaging in initiatives aimed at diverse customer segments with assistance from
internal teams to strengthen customer relationships. The individual must be resourceful
and analytical, capable of discerning customer needs and persuading them to embrace
the recommended solutions.
- graphic design, culinary arts, event planning, fashion merchandising, creative
writing, photography, social media marketing, interior decorating
- production operations management, technical problem solving, resource forecasting,
quality assurance, reliability testing, continuous improvement, team leadership,
staff training, communication skills
- source_sentence: The Process Development/MS&T Engineer supports process development,
monitoring and improvement activities for the biopharmaceuticals manufacturing
facilities. He/She will analyse the critical material attributes of biopharmaceutical
products, prepare Process Flow Diagrams (PFD), perform pilot tests and support
technology transfer activities. He also assists in developing and updating Standard
Operating Procedures (SOPs) for the manufacturing facility and supporting the
delivery of associated training. The Process Development/MS&T Engineer should
have deep understanding of the engineering and scientific concepts underlying
the manufacture of the biopharmaceutical product and equipment involved in order
to make significant contributions in determining how the product is made within
the manufacturing facilities. The Process Development/MS&T Engineer should have
a passion for innovation and continuous improvement and he applies this to his
work, driving efficiency and improvement in new and existing manufacturing processes.
He must be able work independently and exercise analytical and innovative thinking
to analyse information, solve problems and improve existing methods and processes.
sentences:
- The Executive (Ground Services) manages the audit processes for ground service
standards and fosters collaborations with diverse stakeholders. He/She evaluates
service level agreements and formulates action plans to enhance operational efficiency
for the airline. This role involves conducting pricing and service quality reviews
for ground handlers and preparing cost estimates for ground handling contracts.
The Executive recommends process improvements to elevate passenger safety and
security. Additionally, he/she supports organisational growth by creating on-the-job
training initiatives and workplace learning strategies. Utilizing strong analytical
skills and foresight, the Executive (Ground Services) identifies service gaps
and develops effective solutions. He/She builds strong relationships with stakeholders
by understanding their perspectives and facilitating mutually advantageous decisions.
Excellent communication, interpersonal skills, customer focus, and the ability
to multitask under pressure are essential for success in this role.
- 'The Senior Process Development Engineer leads a team to oversee large-scale manufacturing
operations, focusing primarily on production scheduling and resource allocation
for biopharmaceutical facilities. This role emphasizes managing personnel and
coordinating cross-departmental communication rather than direct involvement in
pilot testing or SOP development. The Senior Engineer typically handles budgeting
and compliance reporting, with less emphasis on hands-on process innovation or
detailed scientific analysis.
The Manufacturing Quality Assurance Engineer is tasked with ensuring compliance
to regulatory standards and conducting audits within biopharmaceutical production
lines. Responsibilities include reviewing batch records, investigating deviations,
and implementing corrective actions. This position does not involve process flow
diagram creation, pilot testing, or technology transfer activities but focuses
instead on quality control and assurance processes.
The Process Development Engineer in a chemical manufacturing plant supports process
optimization by analyzing raw material inputs and overseeing equipment maintenance
schedules. The role entails preparing technical documentation and assisting with
safety training but is centered on chemical production rather than biopharmaceutical
processes, requiring different industry-specific knowledge and equipment expertise.'
- The Process Development/MS&T Engineer is responsible for supporting process development,
monitoring, and enhancement efforts within biopharmaceutical manufacturing operations.
This role involves analyzing critical material attributes of biopharmaceutical
products, creating Process Flow Diagrams (PFDs), conducting pilot-scale testing,
and assisting with technology transfer activities. The engineer also contributes
to the creation and revision of Standard Operating Procedures (SOPs) and helps
deliver related training for manufacturing personnel. A strong grasp of the scientific
and engineering principles related to biopharmaceutical production and equipment
is essential, enabling the engineer to influence product manufacturing methods
effectively. The Process Development/MS&T Engineer is driven by innovation and
continuous improvement, applying analytical and creative thinking to optimize
and refine manufacturing processes independently.
datasets:
- Fatin757/ssf-train-valid_v8
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on nomic-ai/modernbert-embed-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) on the [ssf-train-valid_v8](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v8) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [ssf-train-valid_v8](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v8)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Fatin757/ssf-retriever-modernbert-v8")
# Run inference
sentences = [
'The Process Development/MS&T Engineer supports process development, monitoring and improvement activities for the biopharmaceuticals manufacturing facilities. He/She will analyse the critical material attributes of biopharmaceutical products, prepare Process Flow Diagrams (PFD), perform pilot tests and support technology transfer activities. He also assists in developing and updating Standard Operating Procedures (SOPs) for the manufacturing facility and supporting the delivery of associated training. The Process Development/MS&T Engineer should have deep understanding of the engineering and scientific concepts underlying the manufacture of the biopharmaceutical product and equipment involved in order to make significant contributions in determining how the product is made within the manufacturing facilities. The Process Development/MS&T Engineer should have a passion for innovation and continuous improvement and he applies this to his work, driving efficiency and improvement in new and existing manufacturing processes. He must be able work independently and exercise analytical and innovative thinking to analyse information, solve problems and improve existing methods and processes.',
'The Process Development/MS&T Engineer is responsible for supporting process development, monitoring, and enhancement efforts within biopharmaceutical manufacturing operations. This role involves analyzing critical material attributes of biopharmaceutical products, creating Process Flow Diagrams (PFDs), conducting pilot-scale testing, and assisting with technology transfer activities. The engineer also contributes to the creation and revision of Standard Operating Procedures (SOPs) and helps deliver related training for manufacturing personnel. A strong grasp of the scientific and engineering principles related to biopharmaceutical production and equipment is essential, enabling the engineer to influence product manufacturing methods effectively. The Process Development/MS&T Engineer is driven by innovation and continuous improvement, applying analytical and creative thinking to optimize and refine manufacturing processes independently.',
'The Senior Process Development Engineer leads a team to oversee large-scale manufacturing operations, focusing primarily on production scheduling and resource allocation for biopharmaceutical facilities. This role emphasizes managing personnel and coordinating cross-departmental communication rather than direct involvement in pilot testing or SOP development. The Senior Engineer typically handles budgeting and compliance reporting, with less emphasis on hands-on process innovation or detailed scientific analysis.\n\nThe Manufacturing Quality Assurance Engineer is tasked with ensuring compliance to regulatory standards and conducting audits within biopharmaceutical production lines. Responsibilities include reviewing batch records, investigating deviations, and implementing corrective actions. This position does not involve process flow diagram creation, pilot testing, or technology transfer activities but focuses instead on quality control and assurance processes.\n\nThe Process Development Engineer in a chemical manufacturing plant supports process optimization by analyzing raw material inputs and overseeing equipment maintenance schedules. The role entails preparing technical documentation and assisting with safety training but is centered on chemical production rather than biopharmaceutical processes, requiring different industry-specific knowledge and equipment expertise.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.9210, 0.4243],
# [0.9210, 1.0000, 0.5177],
# [0.4243, 0.5177, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### ssf-train-valid_v8
* Dataset: [ssf-train-valid_v8](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v8) at [1a48f71](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v8/tree/1a48f71d6edb1c60dafbb947be750432148de72c)
* Size: 6,032 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 60 tokens</li><li>mean: 171.21 tokens</li><li>max: 403 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 91.44 tokens</li><li>max: 255 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 99.88 tokens</li><li>max: 378 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Operation Specialist supports plant operations by coordinating day-to-day production activities, as well as maintenance and turnaround schedules and activities, for production shift teams, so as to meet production plans and schedules. He/She supports the Site Incident Controller (SIC) during emergency response situations. The Operation Specialist contributes to plant operation improvements by working closely with the production, process engineering and discipline engineering teams to define and execute plant improvement projects, and by reviewing Standard Operating Procedures (SOPs) for the process area under his charge. He also supports the implementation of the Process Safety Management (PSM) framework for production activities, and ensures compliance with Workplace Safety and Health (WSH) and Environmental Management System (EMS) requirements across production teams. The Operation Specialist may work on either a rotating or day shift in the field. He works closely with other dep...</code> | <code>The Operation Specialist plays a key role in supporting plant operations by managing daily production tasks and coordinating maintenance and turnaround schedules for production shift teams to ensure production targets are met. This role assists the Site Incident Controller during emergencies and collaborates with production, process engineering, and discipline engineering teams to drive plant operation enhancements. The Operation Specialist is responsible for reviewing and updating Standard Operating Procedures for their process area, implementing the Process Safety Management framework for production activities, and ensuring adherence to Workplace Safety and Health and Environmental Management System standards. The position may require working on rotating or day shifts and demands strong problem-solving, organizational, communication, and interpersonal skills, along with the ability to work independently and liaise effectively with other departments.</code> | <code>The Operation Specialist in retail oversees daily store operations, manages inventory levels, and coordinates staff scheduling to meet sales targets. They support the store manager in handling customer complaints and assist with merchandising and promotional activities. This role requires excellent customer service skills, the ability to work in a fast-paced retail environment, and proficiency in point-of-sale systems.<br><br>The Operation Specialist in software development coordinates project timelines, manages code deployment schedules, and supports the incident response team during system outages. They collaborate with software engineers and quality assurance teams to improve application performance and update technical documentation. This role requires strong coding skills, familiarity with agile methodologies, and effective communication with cross-functional teams.<br><br>The Operation Specialist in hospitality manages event schedules, coordinates with catering and service staff, and ensures...</code> |
| <code>The Senior Interchange Supervisor/Interchange Supervisor is responsible for supervising day-to-day bus interchange operations to provide efficient and reliable bus services to passengers. He/She monitors the regulating of bus services and redeployment of Bus Captains to ensure service reliability, and supervises the management of bus interchange facilities and security. He is responsible for liaising with vendors to carry out contract works and acts as the liaising officer for lost and found items. As a team leader, he supports the team in addressing passenger issues, allocates team duties, and manages team performance and development. He also prepares contingency plans for incident and/or accident management, operationalises procedures for compliance management, and proposes areas for continuous improvement. He is a resourceful individual with strong communication skills and is able to work collaboratively with others. He works on rotating shifts within the bus interchange and may be ...</code> | <code>The Senior Interchange Supervisor/Interchange Supervisor oversees daily operations at the bus interchange to ensure timely and dependable bus services for commuters. This role involves monitoring bus service regulation, reallocating Bus Captains to maintain service standards, and managing interchange facilities and security. The supervisor coordinates with vendors for contract-related tasks and handles lost and found items. As a team leader, they assist in resolving passenger concerns, assign duties, and oversee team performance and growth. They develop contingency plans for incidents or accidents, implement compliance procedures, and suggest improvements for operational efficiency. The position requires excellent communication skills, teamwork, and the flexibility to work rotating shifts, including weekends and public holidays.</code> | <code>The Senior Interchange Manager is responsible for developing strategic plans for multiple bus interchanges, overseeing long-term infrastructure projects, and managing vendor contracts at a corporate level. He/She leads cross-functional teams in transport policy development and focuses on regional service expansion rather than daily operations. The role requires extensive experience in transport planning and negotiation with governmental agencies. The Senior Bus Operations Controller monitors real-time bus fleet movements using advanced GPS systems and coordinates emergency responses but does not manage interchange facilities or passenger services directly. The Operations Supervisor for Rail Transit supervises train station staff, manages station security, and coordinates rail service disruptions, focusing exclusively on rail transport rather than bus services.</code> |
| <code>The Deputy Workshop Manager supports the day-to-day workshop operations and the implementation of fleet maintenance activities to meet service requirements. He/She supports the coordination of workshop operations with other functional teams such as the Depot and Interchange Management, as well as the Bus Operations Control Centre (BOCC) to support the overall bus service operations. He supports fleet maintenance activities, implements improvement initiatives and conducts engineering studies by allocating required resources and coordination amongst different workshop sections. He also oversees the implementation of housekeeping practices, ensuring that quality logistic support is rendered to facilitate maintenance needs. He supports the management of workshop operating expenditures and forecasting of annual budgetary requirements to meet the workshop operations requirements. He has good knowledge of the bus service operations and is able coordinate effectively with internal and external...</code> | <code>The Deputy Workshop Manager is responsible for supporting daily workshop operations and executing fleet maintenance activities to fulfill service standards. This role involves coordinating workshop functions with teams such as Depot and Interchange Management and the Bus Operations Control Centre (BOCC) to ensure seamless bus service operations. The Deputy Manager allocates resources and coordinates across workshop sections to implement maintenance improvements and engineering studies. Additionally, they oversee housekeeping practices to provide quality logistical support for maintenance activities. They assist in managing workshop operating costs and forecasting budgets to sustain operational needs. With strong knowledge of bus service operations, the Deputy Workshop Manager effectively liaises with internal and external parties, demonstrates excellent supervisory abilities, and continuously pursues manpower and resource enhancements to support the organisation’s bus maintenance and s...</code> | <code>The Senior Maintenance Supervisor leads a team responsible for preventive and corrective maintenance of bus fleets, focusing on long-term asset reliability and compliance with safety regulations. This role collaborates closely with the Safety and Compliance Department and the Vehicle Inspection Unit to ensure adherence to statutory requirements. The supervisor manages workshop staffing schedules and oversees procurement of maintenance parts, while conducting performance audits and risk assessments. They also handle the preparation and monitoring of maintenance budgets, ensuring cost efficiency. With expertise in heavy vehicle systems, the Senior Maintenance Supervisor coordinates with external vendors and internal teams to optimize fleet availability and operational readiness.<br><br>The Workshop Coordinator manages scheduling and logistics for multiple workshop sites, ensuring timely allocation of repair jobs and parts inventory management. They work with the Transport Planning and Scheduli...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
### Evaluation Dataset
#### ssf-train-valid_v8
* Dataset: [ssf-train-valid_v8](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v8) at [1a48f71](https://huggingface.co/datasets/Fatin757/ssf-train-valid_v8/tree/1a48f71d6edb1c60dafbb947be750432148de72c)
* Size: 1,508 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 58 tokens</li><li>mean: 169.86 tokens</li><li>max: 380 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 89.21 tokens</li><li>max: 286 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 99.42 tokens</li><li>max: 369 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Production Manager/Assistant Production Manager manages all technical aspects of the factory site, and keeps track of resources requirements. He/She plans the sequence of events from production to bringing the module from the factory to the construction site. He is responsible and able to work independently. He possess factory-based production knowledge and know-how and is able to coordinate the crew, supplies and equipment. He works on-site on a rotating or day-shift schedule.</code> | <code>The Production Manager/Assistant Production Manager oversees all technical operations within the factory premises and monitors resource needs. They organize the workflow from manufacturing to delivering the module to the construction location. They are accountable, capable of working autonomously, and have comprehensive factory production expertise. They coordinate personnel, materials, and machinery effectively while working on-site following a rotating or day-shift roster.</code> | <code>The Senior Production Supervisor leads a team responsible for quality control and safety compliance in the factory, focusing primarily on auditing processes rather than coordinating production schedules. <br>The Construction Site Manager directs on-site activities including labor management and equipment allocation but does not engage with factory production or resource planning. <br>The Manufacturing Operations Analyst uses data analytics to optimize production efficiency but does not participate in direct crew coordination or module transportation to construction sites.</code> |
| <code>The Content and Experience Development Executive/Curator supports the curation of content aimed at delivering a meaningful and engaging experience for attractions visitors. This includes content creation, content improvement through research and maintaining the validity of the content over time. He/She may work in the capacity of an attractions subject matter expert, conservator, registrar or designer. He collaborates with operations, marketing and communications as well as sales departments to support attractions set-up, execute attractions experience, develop collaterals, visitor guidebooks and other audio-visual materials to enhance visitor experience and increase visitorship. Creative and resourceful, he develops engaging and informative content that effectively communicates exhibition and programme details to the organisation's target audience. He is also able to perform well, deliver under deadlines and leverage on existing communications and media technology to extend the influe...</code> | <code>Content curation, content creation, research skills, visitor experience development, subject matter expertise, collaboration with marketing and operations, exhibition communication, audio-visual material development, project management, interpersonal communication, mentoring, media technology utilization</code> | <code>Financial auditing, software programming, mechanical engineering, agricultural science, culinary arts, automotive repair, textile manufacturing, veterinary medicine</code> |
| <code>The Installation, Inspection and Servicing Engineer plans for inspections of gas installations, reviews gas investigation findings and relevant documentation, and recommends servicing and/or rectification works required for gas installation issues. He/She oversees gas installation, and servicing works, and the commissioning of gas appliances. He manages the submissions of billings and meter statements, and reviews the technical specifications prepared for tender contracts. He/She oversees works performed by Licensed Gas Service Workers (LGSWs) to ensure compliance with Codes of Practice, regulatory and project requirements, and manages customers' feedback and requests for the installation, replacement and troubleshooting of gas appliances. To build internal capabilities,, he provides on-the-job training and analyses staffs strengths and areas of development. He supervises gas pipe works at customers' sites, including domestic, commercial and industrial buildings, and is therefore requi...</code> | <code>Gas installation, inspection planning, gas appliance commissioning, servicing and rectification, compliance with Codes of Practice, technical specification review, project management, customer feedback handling, on-the-job training, safety awareness in gas works, team leadership, collaboration with stakeholders.</code> | <code>Graphic design, social media marketing, culinary arts, fashion merchandising, creative writing, event planning, interior decorating, photography skills.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:------:|:-------------:|:---------------:|
| 1.0 | 12 | 0.2752 | 0.0157 |
| 2.0 | 24 | 0.0157 | 0.0068 |
| 3.0 | 36 | 0.0082 | 0.0045 |
| 4.0 | 48 | 0.0052 | 0.0041 |
| **5.0** | **60** | **0.0061** | **0.004** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.1
- Transformers: 4.56.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.22.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
atrost/math_sft_40K_trl_SFT_Regularized-0.0_Normalize-False
|
atrost
| 2025-09-23T08:55:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T00:10:37Z |
---
base_model: Qwen/Qwen3-1.7B-Base
library_name: transformers
model_name: math_sft_40K_trl_SFT_Regularized-0.0_Normalize-False
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for math_sft_40K_trl_SFT_Regularized-0.0_Normalize-False
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="atrost/math_sft_40K_trl_SFT_Regularized-0.0_Normalize-False", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/astrost-university-of-wisconsin-madison/sft-regularized-sft/runs/9wkvuxnl)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Market5/Catwalk
|
Market5
| 2025-09-23T08:53:37Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Wan-AI/Wan2.1-I2V-14B-480P",
"base_model:adapter:Wan-AI/Wan2.1-I2V-14B-480P",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-23T08:50:31Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/20250917-144850.jpg
text: '-'
base_model: Wan-AI/Wan2.1-I2V-14B-480P
instance_prompt: null
license: other
license_name: faipl-1.0-sd
license_link: LICENSE
---
# Catwalk
<Gallery />
## Download model
[Download](/Market5/Catwalk/tree/main) them in the Files & versions tab.
|
prithivMLmods/Capella-Qwen3-DS-V3.1-4B
|
prithivMLmods
| 2025-09-23T08:50:08Z | 36 | 2 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"text-generation-inference",
"math",
"science",
"code",
"v3.1",
"conversational",
"en",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-07T17:50:51Z |
---
license: apache-2.0
tags:
- trl
- text-generation-inference
- math
- science
- code
- v3.1
language:
- en
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
library_name: transformers
---

# **Capella-Qwen3-DS-V3.1-4B**
> **Capella-Qwen3-DS-V3.1-4B** is a reasoning-focused model fine-tuned on **Qwen-4B** using **DeepSeek v3.1 synthetic traces (10K)**.
> It specializes in **random event simulations**, **logical problem analysis**, and structured reasoning tasks.
> The model blends symbolic precision, probabilistic logic, and structured output fluency—making it an ideal tool for researchers, educators, and developers working with uncertainty modeling and event-driven analysis.
> \[!note]
> GGUF: [https://huggingface.co/prithivMLmods/Capella-Qwen3-DS-V3.1-4B-GGUF](https://huggingface.co/prithivMLmods/Capella-Qwen3-DS-V3.1-4B-GGUF)
---
## **Key Features**
1. **Event Simulation & Logical Analysis**
Fine-tuned on **10,000 synthetic traces** from DeepSeek v3.1 to model random events, probability-driven reasoning, and logical decision-making.
2. **Advanced Code Reasoning & Generation**
Supports multi-language coding with explanations, optimization hints, and error detection—ideal for algorithm synthesis, stochastic simulations, and debugging.
3. **Mathematical & Probabilistic Problem Solving**
Performs analytical reasoning across probability, statistics, and mathematics—explaining concepts, solving equations, and simulating uncertain outcomes.
4. **Hybrid Symbolic-Probabilistic Thinking**
Combines structured logic, probabilistic inference, and chain-of-thought reasoning, delivering robust performance on uncertainty-driven tasks.
5. **Structured Output Mastery**
Seamlessly generates output in **LaTeX**, **Markdown**, **JSON**, **CSV**, and **YAML**, suited for technical documentation, simulations, and structured analysis.
6. **Optimized Lightweight Footprint for Versatile Deployment**
Balances performance and efficiency, making it deployable on **mid-range GPUs**, **offline clusters**, and **edge AI systems**.
---
## **Quickstart with Transformers**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Capella-Qwen3-DS-V3.1-4B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Simulate the probability of rolling two dice and getting a sum greater than 9. Show the reasoning."
messages = [
{"role": "system", "content": "You are a reasoning tutor skilled in probability, logic, and coding."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
---
## **Intended Use**
* Random event simulation, probability modeling, and uncertainty analysis
* Logical problem-solving in research and education
* Structured data and technical content generation
* STEM-focused chatbot or API for probabilistic reasoning tools
* Deployment in mid-resource environments requiring efficient reasoning
---
## **Limitations**
* Not tuned for general-purpose or creative writing
* Context limitations may hinder multi-document or full codebase analysis
* Specialized for simulations and logical reasoning—general chat may underperform
* Prioritizes probabilistic and logical precision over casual or emotional tone
|
Bronsn/afri-aya-gemma-3-4b-vision-lora
|
Bronsn
| 2025-09-23T08:48:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"lora",
"vision",
"african-languages",
"multilingual",
"cultural-qa",
"fine-tuned",
"unsloth",
"gemma-3",
"afri-aya",
"image-to-text",
"en",
"lg",
"rw",
"ar",
"tw",
"ha",
"nyn",
"yo",
"rn",
"zu",
"sw",
"lgg",
"kri",
"ig",
"dataset:CohereLabsCommunity/afri-aya",
"base_model:unsloth/gemma-3-4b-pt",
"base_model:adapter:unsloth/gemma-3-4b-pt",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2025-09-23T08:48:16Z |
---
license: apache-2.0
base_model: unsloth/gemma-3-4b-pt
tags:
- lora
- peft
- vision
- african-languages
- multilingual
- cultural-qa
- fine-tuned
- unsloth
- gemma-3
- afri-aya
language:
- en
- lg
- rw
- ar
- tw
- ha
- nyn
- yo
- rn
- zu
- sw
- lgg
- kri
- ig
datasets:
- CohereLabsCommunity/afri-aya
pipeline_tag: image-to-text
library_name: peft
---
# Afri-Aya Gemma 3 4B Vision - LoRA Adapters 🌍
This repository contains the **LoRA (Low-Rank Adaptation) adapters** for the Afri-Aya Gemma 3 4B Vision model, fine-tuned on the [Afri-Aya dataset](https://huggingface.co/datasets/CohereLabsCommunity/afri-aya) for African cultural visual question answering.
## Model Details
- **Base Model**: `unsloth/gemma-3-4b-pt`
- **Training Dataset**: CohereLabsCommunity/afri-aya (2,466 images, 13 African languages)
- **Fine-tuning Method**: LoRA with Unsloth
- **Languages Supported**: English + 13 African languages
- **LoRA Rank**: 16
- **Training Framework**: Unsloth + TRL
## Repository Contents
This repository contains the LoRA adapter weights that can be applied to the base Gemma 3 4B model:
- `adapter_config.json` - LoRA configuration
- `adapter_model.safetensors` - LoRA adapter weights
- `README.md` - This documentation
- Other supporting files for the LoRA adapters
## Usage
### Option 1: Load LoRA Adapters with Unsloth
```python
from unsloth import FastVisionModel
# Load base model with LoRA adapters
model, processor = FastVisionModel.from_pretrained(
model_name="Bronsn/afri-aya-gemma-3-4b-vision-lora",
load_in_4bit=True,
)
# Enable inference mode
FastVisionModel.for_inference(model)
```
### Option 2: Use with PEFT/Transformers
```python
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import PeftModel
# Load base model
base_model = AutoModelForVision2Seq.from_pretrained(
"unsloth/gemma-3-4b-pt",
torch_dtype=torch.float16,
device_map="auto"
)
# Load LoRA adapters
model = PeftModel.from_pretrained(base_model, "Bronsn/afri-aya-gemma-3-4b-vision-lora")
processor = AutoProcessor.from_pretrained("unsloth/gemma-3-4b-pt")
```
### Option 3: Merge and Use
For production use, you might want to merge the adapters with the base model:
```python
from unsloth import FastVisionModel
# Load with LoRA
model, processor = FastVisionModel.from_pretrained(
model_name="Bronsn/afri-aya-gemma-3-4b-vision-lora",
load_in_4bit=True,
)
# Merge and save
model = FastVisionModel.merge_and_unload(model)
model.save_pretrained("merged_model")
processor.save_pretrained("merged_model")
```
## Merged Model
For convenience, we also provide a merged version of this model at:
**[Bronsn/afri-aya-gemma-3-4b-vision](https://huggingface.co/Bronsn/afri-aya-gemma-3-4b-vision)**
The merged model is ready to use without requiring LoRA loading.
## Training Details
- **LoRA Rank**: 16
- **LoRA Alpha**: 32
- **Target Modules**: Vision and text projection layers
- **Learning Rate**: 2e-4
- **Batch Size**: 1 (with gradient accumulation)
- **Epochs**: 1
- **Training Framework**: Unsloth for efficient fine-tuning
## Dataset
Trained on the [Afri-Aya dataset](https://huggingface.co/datasets/CohereLabsCommunity/afri-aya) which includes:
- **2,466 images** from 13 African cultures
- **Bilingual captions** (English + local languages)
- **Cultural Q&A pairs** for each image
- **13 categories**: Food, Festivals, Notable Figures, Music, etc.
### Languages Covered
Luganda, Kinyarwanda, Arabic, Twi, Hausa, Nyankore, Yoruba, Kirundi, Zulu, Swahili, Gishu, Krio, Igbo
## Example Usage
```python
from unsloth import FastVisionModel
from transformers import TextStreamer
from PIL import Image
# Load model with LoRA adapters
model, processor = FastVisionModel.from_pretrained(
"Bronsn/afri-aya-gemma-3-4b-vision-lora",
load_in_4bit=True,
)
FastVisionModel.for_inference(model)
# Prepare input
image = Image.open("african_cultural_image.jpg")
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": "What cultural significance does this image have?"},
{"type": "image"},
],
}
]
# Generate response
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, input_text, add_special_tokens=False, return_tensors="pt").to("cuda")
text_streamer = TextStreamer(processor.tokenizer, skip_prompt=True)
result = model.generate(**inputs, streamer=text_streamer, max_new_tokens=128)
```
## Model Performance
This model has been fine-tuned specifically for:
- African cultural image understanding
- Multilingual visual question answering
- Cultural context recognition
- Traditional and modern African life scenarios
## Citation
```bibtex
@model{afri_aya_lora_2024,
title={Afri-Aya Gemma 3 4B Vision LoRA: African Cultural VQA Adapters},
author={Cohere Labs Regional Africa Community},
year={2024},
publisher={HuggingFace},
url={https://huggingface.co/Bronsn/afri-aya-gemma-3-4b-vision-lora}
}
```
## License
Apache 2.0
## Acknowledgments
- **Dataset**: Afri-Aya dataset by Cohere Labs Regional Africa Community
- **Base Model**: Gemma 3 4B by Google
- **Training Framework**: Unsloth for efficient LoRA fine-tuning
- **Community**: Expedition Aya challenge participants
---
*LoRA adapters created with ❤️ for African culture preservation and education*
|
stewy33/edited_atomic_llama3_70b_1fact_rounds_egregious_berlin_wall-run_1f0c
|
stewy33
| 2025-09-23T08:43:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T08:27:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kingkim/Dooroo2025_v1.0
|
kingkim
| 2025-09-23T08:39:04Z | 1 | 0 | null |
[
"safetensors",
"qwen3",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T07:17:18Z |
---
license: apache-2.0
---
{
"eval_loss": 1.1429554224014282,
"eval_runtime": 30.934,
"eval_samples_per_second": 68.404,
"eval_steps_per_second": 8.567,
"epoch": 100.0
}
|
monkey777/finetuned_momdel_mini
|
monkey777
| 2025-09-23T08:32:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T08:28:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758615902
|
poolkiltzn
| 2025-09-23T08:26:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T08:26:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
QuangDuy/mmBERT-base_2309
|
QuangDuy
| 2025-09-23T08:26:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:jhu-clsp/mmBERT-base",
"base_model:finetune:jhu-clsp/mmBERT-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T08:15:04Z |
---
library_name: transformers
license: mit
base_model: jhu-clsp/mmBERT-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: mmBERT-base_2309
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mmBERT-base_2309
This model is a fine-tuned version of [jhu-clsp/mmBERT-base](https://huggingface.co/jhu-clsp/mmBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8898
- Accuracy: 0.7471
- Precision: 0.7517
- Recall: 0.7567
- F1: 0.7454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.6038 | 1.0 | 350 | 0.7684 | 0.7421 | 0.7495 | 0.7426 | 0.7452 |
| 1.4087 | 2.0 | 700 | 0.7264 | 0.7736 | 0.7740 | 0.7777 | 0.7741 |
| 1.3088 | 3.0 | 1050 | 0.7433 | 0.7714 | 0.7723 | 0.7738 | 0.7726 |
| 1.0264 | 4.0 | 1400 | 0.8898 | 0.7471 | 0.7517 | 0.7567 | 0.7454 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.7.1+cu128
- Tokenizers 0.22.0
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.