modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_layer_20_2_song_3_49 | winnieyangwannan | 2025-06-23T20:03:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T20:01:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
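A generic starting point (an assumption based on this repo's `llama`/`text-generation`/`conversational` tags, not instructions from the author) is standard `transformers` loading:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_layer_20_2_song_3_49"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# The "conversational" tag suggests chat-template formatting.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0]))
```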
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
minhxle/truesight-ft-job-28909f6e-68df-46d3-a345-b238571cdc9f | minhxle | 2025-06-23T20:03:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T20:03:16Z | ---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
18-a2z-jankari-sapna-shah-viral-videos/fulll1nk.i8.pakcricketinfo.samiya.sapna.shah.v1rl.vid3o.full.pakcricketinfo.online | 18-a2z-jankari-sapna-shah-viral-videos | 2025-06-23T20:00:20Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:38:32Z | [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Download)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?Download)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?Download) |
18-a2z-jankari-sapna-shah-viral-videos/18-video.18.a2z.jankari.sapna.shah.a2z.jankari.com.a2z.jankari.viral.video.a.to.z.jankaricom | 18-a2z-jankari-sapna-shah-viral-videos | 2025-06-23T20:00:18Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:36:06Z | [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Download)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?Download)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?Download) |
cgus/Seed-Coder-8B-Base | cgus | 2025-06-23T19:59:44Z | 0 | 0 | exllamav2 | [
"exllamav2",
"llama",
"text-generation",
"arxiv:2506.03524",
"base_model:ByteDance-Seed/Seed-Coder-8B-Base",
"base_model:quantized:ByteDance-Seed/Seed-Coder-8B-Base",
"license:mit",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2025-06-23T15:34:39Z | ---
license: mit
library_name: exllamav2
pipeline_tag: text-generation
base_model: ByteDance-Seed/Seed-Coder-8B-Base
---
# Seed-Coder-8B-Base
Original model: [Seed-Coder-8B-Base](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) by [ByteDance Seed](https://huggingface.co/ByteDance-Seed)
## Quants
[4bpw h6 (main)](https://huggingface.co/cgus/Seed-Coder-8B-Base/tree/main)
[4.5bpw h6](https://huggingface.co/cgus/Seed-Coder-8B-Base/tree/4.5bpw-h6)
[5bpw h6](https://huggingface.co/cgus/Seed-Coder-8B-Base/tree/5bpw-h6)
[6bpw h6](https://huggingface.co/cgus/Seed-Coder-8B-Base/tree/6bpw-h6)
[8bpw h8](https://huggingface.co/cgus/Seed-Coder-8B-Base/tree/8bpw-h8)
## Quantization notes
Made with Exllamav2 0.3.1 with default dataset.
These quants require an Nvidia RTX GPU on Windows, or an RTX/AMD ROCm setup on Linux, and can be used with TabbyAPI or Text-Generation-WebUI.
Since this is a base model, it is mostly useful for raw text completion via the v1/completions API endpoint, e.g. for code completion in apps like Continue.dev or other local Copilot-style tools.
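As a minimal sketch (the host, port, and API key are assumptions about a local TabbyAPI-style OpenAI-compatible server, not part of this repo), a raw-completion request for code completion might look like:

```python
import requests

# Assumed local OpenAI-compatible endpoint (TabbyAPI defaults vary by config).
URL = "http://127.0.0.1:5000/v1/completions"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

payload = {
    "prompt": "def quicksort(arr):\n    ",  # base models continue raw text
    "max_tokens": 128,
    "temperature": 0.2,
    "stop": ["\n\n"],
}

resp = requests.post(URL, json=payload, headers=HEADERS, timeout=60)
print(resp.json()["choices"][0]["text"])
```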
# Original model card
# Seed-Coder-8B-Base
<div align="left" style="line-height: 1;">
<a href="https://bytedance-seed-coder.github.io/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://img.shields.io/badge/Seed--Coder-Homepage-a468fe?color=a468fe&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://arxiv.org/abs/2506.03524" target="_blank" style="margin: 2px;">
<img alt="Technical Report" src="https://img.shields.io/badge/arXiv-Technical%20Report-brightgreen?logo=arxiv&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/ByteDance-Seed" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-ByteDance%20Seed-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?color=f5de53&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
## Introduction
We are thrilled to introduce Seed-Coder, a powerful, transparent, and parameter-efficient family of open-source code models at the 8B scale, featuring base, instruct, and reasoning variants. Seed-Coder promotes the evolution of open code models through the following highlights.
- **Model-centric:** Seed-Coder predominantly leverages LLMs instead of hand-crafted rules for code data filtering, minimizing manual effort in pretraining data construction.
- **Transparent:** We openly share detailed insights into our model-centric data pipeline, including methods for curating GitHub data, commits data, and code-related web data.
- **Powerful:** Seed-Coder achieves state-of-the-art performance among open-source models of comparable size across a diverse range of coding tasks.
<p align="center">
<img width="100%" src="imgs/seed-coder_intro_performance.png">
</p>
This repo contains the **Seed-Coder-8B-Base** model, with the following features:
- Type: Causal language models
- Training Stage: Pretraining
- Data Source: GitHub data, code-related web data
- Training Tokens: 6 trillion
- Supports: Code completion, code infilling (Fill-in-the-Middle)
- Context Length: 32,768
## Model Downloads
| Model Name | Length | Download | Notes |
|---------------------------------------------------------|--------|------------------------------------|-----------------------|
| 👉 **Seed-Coder-8B-Base** | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) | Pretrained on our model-centric code data. |
| Seed-Coder-8B-Instruct | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) | Instruction-tuned for alignment with user intent. |
| Seed-Coder-8B-Reasoning | 64K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning) | RL trained to boost reasoning capabilities. |
| Seed-Coder-8B-Reasoning-bf16 | 64K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning-bf16) | RL trained to boost reasoning capabilities. |
## Requirements
You will need to install the latest versions of `transformers` and `accelerate`:
```bash
pip install -U transformers accelerate
```
## Quickstart
Here is a simple example demonstrating how to load the model and perform code generation using the Hugging Face `pipeline` API:
```python
import transformers
import torch
model_id = "ByteDance-Seed/Seed-Coder-8B-Base"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
output = pipeline("def say_hello_world():", max_new_tokens=100)
print(output[0]["generated_text"])
```
### Fill-in-the-Middle (FIM) Example
Seed-Coder-8B-Base natively supports **Fill-in-the-Middle (FIM)** tasks, where the model is given a prefix and a suffix and asked to predict the missing middle content. This allows for code infilling scenarios such as completing a function body or inserting missing logic between two pieces of code.
A typical example:
```python
import transformers
import torch
model_id = "ByteDance-Seed/Seed-Coder-8B-Base"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
# You can concatenate a prefix, a special FIM separator token, and a suffix
prefix = "def add_numbers(a, b):\n "
suffix = "\n return result"
# Combine prefix and suffix following the FIM format
fim_input = '<[fim-suffix]>' + suffix + '<[fim-prefix]>' + prefix + '<[fim-middle]>'
output = pipeline(fim_input, max_new_tokens=512)
print(output[0]["generated_text"])
```
## Evaluation
Seed-Coder-8B-Base has been evaluated on code generation, code completion, and code reasoning benchmarks, achieving state-of-the-art performance among ~8B open-source models.
| | DeepSeek-Coder-6.7B-Base | OpenCoder-8B-Base | Qwen2.5-Coder-7B | Seed-Coder-8B-Base |
|------------|:------------------------:|:-----------------:|:----------------:|:------------------:|
| HumanEval | 47.6 | 66.5 | 72.0 | **77.4** |
| MBPP | 70.2 | 79.9 | 79.4 | **82.0** |
| MultiPL-E | 44.7 | 61.0 | 58.8 | **67.6** |
| cruxeval-O | 41.0 | 43.9 | **56.0** | 54.8 |
For detailed benchmark performance, please refer to our [📄 Technical Report](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf).
## License
This project is licensed under the MIT License. See the [LICENSE file](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE) for details.
## Citation
If you find Seed-Coder helpful, please consider citing our work:
```
@misc{seed2025seedcoderletcodemodel,
title={{Seed-Coder}: Let the Code Model Curate Data for Itself},
author={{ByteDance Seed} and Yuyu Zhang and Jing Su and Yifan Sun and Chenguang Xi and Xia Xiao and Shen Zheng and Anxiang Zhang and Kaibo Liu and Daoguang Zan and Tao Sun and Jinhua Zhu and Shulin Xin and Dong Huang and Yetao Bai and Lixin Dong and Chao Li and Jianchong Chen and Hanzhi Zhou and Yifan Huang and Guanghan Ning and Xierui Song and Jiaze Chen and Siyao Liu and Kai Shen and Liang Xiang and Yonghui Wu},
year={2025},
eprint={2506.03524},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.03524},
}
``` |
qw132/ppo-LunarLander-v2 | qw132 | 2025-06-23T19:58:58Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-23T19:58:38Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.22 +/- 18.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
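For reference, a minimal loading sketch (the checkpoint filename is an assumption; check this repo's file list for the actual name):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename -- confirm the actual .zip name in this repo's files.
checkpoint = load_from_hub(
    repo_id="qw132/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```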
|
DavidNgoue/modele_sn_git_github | DavidNgoue | 2025-06-23T19:58:49Z | 0 | 0 | null | [
"joblib",
"region:us"
] | null | 2025-06-23T18:57:15Z | # Fictional AI Model
This project simulates an artificial-intelligence model for the regular exam session of the GIT-GITHUB course unit. It includes a Python script to train and test a basic classification model with scikit-learn, along with automation via GitHub Actions for deployment to the Hugging Face Hub and email notifications.
- **Objective**: Demonstrate the management of an AI model's lifecycle.
- **Technologies**: Python, scikit-learn, Hugging Face Hub, GitHub Actions.
|
original-18-pakcricketinfo-sapna-shah-clip/UPDATE.FULL.VIDEO.Pakcricketinfo.Sapna.Shah.Viral.Video.Link.Tutorial.Official | original-18-pakcricketinfo-sapna-shah-clip | 2025-06-23T19:55:36Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:55:08Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">🔴 ❤️►Click Here today (Full video Link)</a>
<a data-target="animated-image.originalLink" rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
youssefedweqd/working | youssefedweqd | 2025-06-23T19:52:55Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen",
"llama-factory",
"lora",
"arabic",
"question-answering",
"instruction-tuning",
"kaggle",
"fine-tuned",
"text-generation",
"conversational",
"ar",
"dataset:custom",
"arxiv:2106.09685",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-08T22:54:49Z | ---
language:
- ar
license: apache-2.0
tags:
- qwen
- llama-factory
- lora
- arabic
- question-answering
- instruction-tuning
- kaggle
- transformers
- fine-tuned
model_name: QWEN_Arabic_Q&A
base_model: Qwen/Qwen2.5-1.5B
pipeline_tag: text-generation
library_name: transformers
datasets:
- custom
---
# Qwen2.5-1.5B - LoRA Fine-Tuned on Arabic Q&A
This model is a LoRA fine-tuned version of **[Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B)** designed for Arabic Question Answering tasks. It was trained using the **LLaMA-Factory** framework on a custom curated dataset of Arabic Q&A pairs.
## Training Configuration
- **Base Model**: `Qwen/Qwen2.5-1.5B`
- **Method**: Supervised Fine-Tuning (SFT) with [LoRA](https://arxiv.org/abs/2106.09685)
- **Framework**: [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
- **Batch Size**: 1 (gradient accumulation = 16)
- **Epochs**: 3
- **Cutoff Length**: 2048 tokens
- **Learning Rate**: 1e-4
- **Scheduler**: Cosine with warmup ratio 0.1
- **Precision**: bf16
- **LoRA Rank**: 64
- **LoRA Target**: all layers
- **Eval Strategy**: every 200 steps
- **Eval Set Size**: 3020 examples
- **WandB Tracking**: Enabled [`Run Link`](https://wandb.ai/youssefhassan437972-kafr-el-sheikh-university/llamafactory/runs/rdrftts8)
## Evaluation (Epoch ~1.77)
- **Eval Loss**: 0.4321
- **Samples/sec**: 1.389
- **Steps/sec**: 0.695
## Usage
You can use the model via `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("Youssef/QWEN_Arabic_Q&A")
tokenizer = AutoTokenizer.from_pretrained("Youssef/QWEN_Arabic_Q&A")
prompt = "من هو مؤسس علم الجبر؟"  # "Who is the founder of algebra?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Example exchange (the user asks for the pillars of Islam along with the hadith that lists them; the model answers with the five pillars and the hadith narrated by Ibn Umar):

```
<|user|>
ما هي أركان الإسلام مع ذكر الحديث الذي يذكرها؟
<|assistant|>
أركان الإسلام خمسة، كما جاء في الحديث الصحيح:
عن عبد الله بن عمر رضي الله عنهما قال: قال رسول الله ﷺ: "بني الإسلام على خمس: شهادة أن لا إله إلا الله، وأن محمدًا رسول الله، وإقام الصلاة، وإيتاء الزكاة، وصوم رمضان، وحج البيت لمن استطاع إليه سبيلًا" (رواه البخاري ومسلم).
```
## Training Loss Over Epochs
| Epoch | Learning Rate | Loss |
|-------|------------------------|--------|
| 0.16 | 5.39e-05 | 0.6304 |
| 0.18 | 5.88e-05 | 0.6179 |
| 0.19 | 6.37e-05 | 0.6042 |
| 0.21 | 6.86e-05 | 0.6138 |
| 0.22 | 7.35e-05 | 0.5940 |
| 0.24 | 7.84e-05 | 0.5838 |
| 0.25 | 8.33e-05 | 0.5842 |
| 0.26 | 8.82e-05 | 0.5786 |
| 0.28 | 9.31e-05 | 0.5713 |
| 0.65 | 9.60e-05 | 0.6122 |
| 0.71 | 9.45e-05 | 0.5809 |
| 0.77 | 9.29e-05 | 0.5446 |
| 0.82 | 9.10e-05 | 0.5339 |
| 0.88 | 8.90e-05 | 0.5296 |
| 0.94 | 8.67e-05 | 0.5176 |
| 1.00 | 8.43e-05 | 0.5104 |
| 1.06 | 8.17e-05 | 0.4685 |
| 1.12 | 7.90e-05 | 0.4730 |
| 1.18 | 7.62e-05 | 0.4679 |
| 1.24 | 7.32e-05 | 0.4541 |
| 1.30 | 7.01e-05 | 0.4576 |
| 1.35 | 6.69e-05 | 0.4472 |
| 1.41 | 6.36e-05 | 0.4427 |
| 1.47 | 6.03e-05 | 0.4395 |
| 1.53 | 5.69e-05 | 0.4305 |
| 1.59 | 5.35e-05 | 0.4280 |
| 1.65 | 5.01e-05 | 0.4251 |
| 1.71 | 4.67e-05 | 0.4188 |
| 1.77 | 4.33e-05 | 0.4177 |
| 1.83 | 3.99e-05 | 0.4128 |
**Evaluation Losses:**
- Epoch 1.18 → `0.4845`
- Epoch 1.77 → `0.4321`
|
LemkinAI/roberta-joint-ner-re | LemkinAI | 2025-06-23T19:49:32Z | 0 | 0 | null | [
"pytorch",
"ner",
"relation-extraction",
"legal",
"multilingual",
"roberta",
"human-rights",
"international-law",
"token-classification",
"en",
"fr",
"es",
"ar",
"dataset:legal-documents",
"dataset:human-rights-reports",
"license:apache-2.0",
"region:us"
] | token-classification | 2025-06-23T17:37:02Z | ---
language:
- en
- fr
- es
- ar
license: apache-2.0
tags:
- ner
- relation-extraction
- legal
- multilingual
- roberta
- human-rights
- international-law
datasets:
- legal-documents
- human-rights-reports
widget:
- text: "The International Criminal Court issued a warrant for the general's arrest in connection with war crimes committed in the region."
- text: "Le Tribunal pรฉnal international a รฉmis un mandat d'arrรชt contre le gรฉnรฉral pour crimes de guerre."
- text: "La Corte Penal Internacional emitiรณ una orden de arresto contra el general por crรญmenes de guerra."
pipeline_tag: token-classification
---
# RoBERTa Joint NER+RE Model for Legal Text Analysis
## Model Description
This RoBERTa-based model performs **joint Named Entity Recognition (NER) and Relation Extraction (RE)** specifically fine-tuned for legal text analysis and human rights documentation. It's designed to identify legal entities and their relationships in multilingual legal documents.
**Developed by:** Lemkin AI
**Model type:** XLM-RoBERTa Large for Token Classification
**Base model:** [Davlan/xlm-roberta-large-ner-hrl](https://huggingface.co/Davlan/xlm-roberta-large-ner-hrl)
**Language(s):** English, French, Spanish, Arabic
**License:** Apache 2.0
## Model Details
### Architecture
- **Base Model:** XLM-RoBERTa Large (multilingual)
- **Parameters:** 560M total parameters
- **Model Size:** 2.1GB
- **Task Heads:** Joint NER + RE classifier
- **Input Length:** 512 tokens maximum
- **Layers:** 24 transformer layers
- **Hidden Size:** 1024
- **Attention Heads:** 16
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("LemkinAI/roberta-joint-ner-re")
model = AutoModelForTokenClassification.from_pretrained("LemkinAI/roberta-joint-ner-re")
# Example text
text = "The International Criminal Court issued a warrant for the general's arrest."
# Tokenize and predict
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=-1)
# Process results
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
predicted_labels = [model.config.id2label[pred.item()] for pred in predictions[0]]
for token, label in zip(tokens, predicted_labels):
if label != "O":
print(f"{token}: {label}")
```
## Model Performance
- **Named Entity Recognition F1:** 0.92
- **Relation Extraction F1:** 0.87
- **Supported Languages:** English, French, Spanish, Arabic
- **Entity Types:** 71 specialized legal entity types
- **Relation Types:** 21 legal relation types
## Training Data
Trained on 85,000 annotated legal documents including:
- International court decisions (ICC, ICJ, ECHR)
- Human rights reports and investigations
- Legal case documents and treaties
- Time period: 1990-2024
## Use Cases
- Legal document analysis and research
- Human rights violation documentation
- Evidence organization and structuring
- Academic legal NLP research
- Investigative journalism
## Citation
```bibtex
@misc{lemkin-roberta-ner-re-2025,
title={RoBERTa Joint NER+RE Model for Legal Text Analysis},
author={Lemkin AI Team},
year={2025},
url={https://huggingface.co/LemkinAI/roberta-joint-ner-re}
}
```
|
Pakcricketinfo-Sapna-Shah-Viral-Video-Fuck/18.live.on.air.pakcricketinfo.sapna.shah.Viral.video.On.Social.Media.Link | Pakcricketinfo-Sapna-Shah-Viral-Video-Fuck | 2025-06-23T19:48:10Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:47:49Z | <a data-target="animated-image.originalLink" rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">๐ด CLICK HERE ๐==โบโบ Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )</a> |
Hachipo/Meta-Llama-3-8B-MIFT-en_newbase_v2-MIFT-en_10000_2 | Hachipo | 2025-06-23T19:43:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T19:40:57Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Pakcricketinfo-Sapna-Shah-Viral-Video-4khd/MMS.HOT.NEW.VIDEO.Pakcricketinfo.Sapna.Shah.Viral.Video.On.Social.Media.Link | Pakcricketinfo-Sapna-Shah-Viral-Video-4khd | 2025-06-23T19:43:17Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:42:57Z | <a data-target="animated-image.originalLink" rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">๐ด CLICK HERE ๐==โบโบ Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )</a> |
pakcricketinfoxx-viraly-lol/VIRAL.18.pakcricketinfoxx.viraly.lol.pakcricketinfo18.viraly.lol.videos | pakcricketinfoxx-viraly-lol | 2025-06-23T19:40:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:32:09Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/) |
UMCU/CardioBERTa.nl_clinical | UMCU | 2025-06-23T19:39:53Z | 2,929 | 3 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"medical",
"healthcare",
"nl",
"base_model:CLTL/MedRoBERTa.nl",
"base_model:finetune:CLTL/MedRoBERTa.nl",
"doi:10.57967/hf/4824",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-02-27T09:04:18Z | ---
license: gpl-3.0
language:
- nl
base_model:
- CLTL/MedRoBERTa.nl
tags:
- medical
- healthcare
metrics:
- perplexity
library_name: transformers
---
Continued off-premise pre-training of [MedRoBERTa.nl](https://huggingface.co/CLTL/MedRoBERTa.nl) on about 50GB of open Dutch and translated
English corpora, followed by on-premise pre-training on 5GB of electronic health records mixed with 2GB of the public set.
# Data statistics
Sources:
* Dutch: medical guidelines (FMS, NHG)
* Dutch: [NtvG](https://www.ntvg.nl/) papers
* Dutch: Cardiovascular Electronic Health Records
* English: Pubmed abstracts
* English: PMC abstracts translated using DeepL
* English: Apollo guidelines, papers and books
* English: Meditron guidelines
* English: MIMIC3
* English: MIMIC CXR
* English: MIMIC4
All English sources were translated (where not with DeepL) using a combination of Gemini Flash 1.5/2.0, GPT-4o mini, MariaNMT, and NLLB200.
* Number of tokens: 20B
* Number of documents: 32M
# Training
* Effective batch size: 5120
* Learning rate: 2e-4
* Weight decay: 1e-3
* Learning schedule: linear, with 5_000 warmup steps
* Num epochs: ~3 (off-premise) followed by 3 (on-premise)
Train perplexity: 2.4
Validation perplexity: 3.3
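# Usage
As a minimal usage sketch (the example sentence is illustrative, not drawn from the training data), the model can be loaded with the standard `transformers` fill-mask pipeline:

```python
from transformers import pipeline

# RoBERTa-style masked LM, so the mask token is <mask>.
pipe = pipeline("fill-mask", model="UMCU/CardioBERTa.nl_clinical")

# Illustrative Dutch clinical sentence: "The patient was admitted with <mask> on the chest."
for pred in pipe("De patiënt werd opgenomen met <mask> op de borst."):
    print(pred["token_str"], pred["score"])
```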
# Acknowledgement
This work was done together with the Amsterdam UMC, in the context of the [DataTools4Heart](https://www.datatools4heart.eu/) project.
We were happy to be able to use the [Google TPU research cloud](https://sites.research.google/trc/about/) for training the model.
|
phospho-app/gc1724-ACT-ttt-c2-square-bh2wk | phospho-app | 2025-06-23T19:38:26Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-23T16:41:55Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [gc1724/ttt-c2-square](https://huggingface.co/datasets/gc1724/ttt-c2-square)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 60
- **Training steps**: 7500
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
7-EXCLUSIVE-TRENDING-mezzo-fun-Viral-Video/FULL.VIDEO.LINK.Mezzo.fun.Viral.Video.Tutorial.Official | 7-EXCLUSIVE-TRENDING-mezzo-fun-Viral-Video | 2025-06-23T19:37:39Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:37:25Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
stablediffusionapi/animetooncoreflat2d-animetooncoreflat2d | stablediffusionapi | 2025-06-23T19:36:33Z | 0 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-06-23T19:32:16Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: a girl wandering through the forest
output:
url: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/dddc8763-1bfc-4a9b-bfd0-47a380abd445/width=512/83682086.jpeg
---
# AnimeToonCoreFlat2D - AnimeToonCoreFlat2D API Inference
<Gallery />
## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "animetooncoreflat2d-animetooncoreflat2d".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/animetooncoreflat2d-animetooncoreflat2d)
Model link: [View model](https://modelslab.com/models/animetooncoreflat2d-animetooncoreflat2d)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "animetooncoreflat2d-animetooncoreflat2d",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "",
"lora": "",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
Pakcricketinfo-Sapna-Shah-Tv/NEW.VIDEO.Pakcricketinfo.Sapna.Shah.Viral.Video.On.Social.Media.Link | Pakcricketinfo-Sapna-Shah-Tv | 2025-06-23T19:33:32Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:33:15Z | [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Download)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?Download)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?Download) |
Pakcricketinfo-Sapna-Shah-Viral-Video-Fuk/18on.air.pakcricketinfo.sapna.shah.Viral.video.On.Social.Media.Link | Pakcricketinfo-Sapna-Shah-Viral-Video-Fuk | 2025-06-23T19:32:19Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:28:10Z | [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Download)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?Download)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?Download) |
Fulstac/Codestral-22B-v0.1-lora-weights | Fulstac | 2025-06-23T19:31:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T19:26:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NEW-EXCLUSIVE-TRENDING-VIDEO-Mezzo-fun/FULL.VIDEO.LINK.Mezzo.fun.Viral.Video.Tutorial.Official | NEW-EXCLUSIVE-TRENDING-VIDEO-Mezzo-fun | 2025-06-23T19:30:46Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:30:31Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
sugilee/DeepSeek-R1-Distill-Llama-8B-New-MentalHealth-GGUF-f16 | sugilee | 2025-06-23T19:26:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T18:36:53Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sugilee
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rufeshe/ethio-ner-model | rufeshe | 2025-06-23T19:20:29Z | 10 | 0 | null | [
"safetensors",
"xlm-roberta",
"region:us"
] | null | 2025-06-22T17:19:38Z | ---
library_name: transformers
tags:
- amharic
- ner
- token-classification
- xlm-roberta
---
# Ethio NER Model
Fine-tuned Amharic Named Entity Recognition (NER) model based on XLM-Roberta. Built for extracting entities from Telegram-based e-commerce messages.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("rufeshe/ethio-ner-model")
model = AutoModelForTokenClassification.from_pretrained("rufeshe/ethio-ner-model")
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
text = "แ แแตแแญแต แ แ แฒแต แฒแตแฐแ แแญ แ แณแฒแต แแดแแฝแ แ แแฃแข"
print(nlp(text))
```

## Training Details
- Base model: xlm-roberta-base
- Data: Cleaned Telegram messages from Ethiopian e-commerce channels
- Framework: 🤗 Transformers
- Metrics: F1 score, Precision, Recall

## Model Card
- Developed by: [Your Name]
- Shared by: [Your Organization]
- License: Apache-2.0
- Language: Amharic (am)

## Contact
For feedback or questions, reach out at [email protected] |
UT-Austin-PML/SiDA | UT-Austin-PML | 2025-06-23T19:20:14Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-10-25T03:33:53Z | ---
license: apache-2.0
---
|
vopeai/classifier-mix | vopeai | 2025-06-23T19:18:39Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"classifier",
"feature-extraction",
"image-classification",
"surgical",
"computer-vision",
"mobileNet",
"contaminants",
"smoke",
"medical-imaging",
"custom_code",
"license:apache-2.0",
"region:us"
] | image-classification | 2025-06-23T18:01:34Z | ---
license: apache-2.0
tags:
- image-classification
- surgical
- computer-vision
- mobileNet
- contaminants
- smoke
- medical-imaging
- transformers
---
# Surgical Contaminant Classifier-Mix
This repository contains a PyTorch-based image classifier for identifying visual contaminants in surgical footage. The model distinguishes between five classes: `blur`, `smoke`, `clear`, `fluid`, and `oob` (out-of-body). It uses a MobileNetV2 backbone via [timm](https://github.com/huggingface/pytorch-image-models), and is compatible with Hugging Face Transformers' `AutoModel` and `AutoConfig` using `trust_remote_code=True`.
The name **"classifier-mix"** refers to the training data source, a mix of DaVinci and Medtronic RARP surgical frames.
> Training log:
> `gs://noee/mobileNet/Medtronic_28-04-2025/Run_13h20_Finetune_lr0.0001_ReduceLROnPlateau/training.log`
>
## Files
- `classifier.py`: Model and config implementation.
- `config.json`: Hugging Face model configuration.
- `pytorch_model.bin`: Model weights.
- `sample_img.png`: Example image for inference.
- `example_inference.py`: Example script for running inference.
## Usage
### Installation
Install required dependencies:
```sh
pip install torch torchvision timm transformers pillow
```
### Model Details
- **Backbone:** MobileNetV2 (`mobilenetv2_100`)
- **Classes:** blur, smoke, clear, fluid, oob
- **Input size:** 224x224 RGB images
- **Normalization:** mean=[0.6075, 0.4093, 0.3609], std=[0.2066, 0.2036, 0.1991]
- **Output**: A list of dictionaries with:
```python
{
"label": <predicted_class>, # e.g., "blur", "smoke", etc.
"confidences": {
"blur": 0.01,
"smoke": 0.97,
"clear": 0.01,
"fluid": 0.00,
"oob": 0.01
}
}
```
### Inference Example
You can run the provided script:
```python
# example_inference.py
from transformers import AutoModel
from PIL import Image
# Load model
model = AutoModel.from_pretrained(
"vopeai/classifier-mix",
trust_remote_code=True
)
model.eval()
# Load and preprocess image
img = Image.open("sample_img.png").convert("RGB")
# Run inference
outputs = model(img)
print("Predicted class:", outputs[0]['label'])
print("Confidences:", outputs[0]['confidences'])
```
Expected output for the sample image:
<p align="center">
<img src="sample_img.png" alt="Sample surgical frame" width="300"/>
</p>
```bash
Predicted class: smoke
Confidences: {'blur': 0.0, 'smoke': 1.0, 'clear': 0.0, 'fluid': 0.0, 'oob': 0.0}
```
Or use the model in your own code by loading it as follows:
```python
from transformers import AutoModel
# Load model
model = AutoModel.from_pretrained("vopeai/classifier-mix", trust_remote_code=True)
```
For more details, see the code files in this repository.
|
EXCLUSIVE-TRENDING-VIDEO-beckli-com-ananya/FULL.VIDEO.beckli.com.ananya.Viral.Video.Tutorial.Official | EXCLUSIVE-TRENDING-VIDEO-beckli-com-ananya | 2025-06-23T19:17:18Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:17:02Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
dhvazquez/mtg_semantic_segmentation | dhvazquez | 2025-06-23T19:16:39Z | 0 | 0 | null | [
"onnx",
"en",
"dataset:dhvazquez/mtg_synthetic_cards_semantic_segmentation",
"license:mit",
"region:us"
] | null | 2025-06-23T19:08:22Z | ---
license: mit
datasets:
- dhvazquez/mtg_synthetic_cards_semantic_segmentation
language:
- en
---
# Magic: The Gathering Image Semantic Segmentation Model
[Demo](https://huggingface.co/spaces/dhvazquez/mtg_semantic_segmentation)
[Dataset](https://huggingface.co/datasets/dhvazquez/mtg_synthetic_cards_semantic_segmentation)
[Source Code](https://github.com/diegovazquez/mtg_card_image_segmentation)
## Model Details
- Architecture: lraspp_mobilenet_v3_large
- Input Size: 320x240
- Number of Classes: 2
- Classes: Background (0), Card (1)
## Model Files
- `card_segmentation.onnx`: ONNX format for cross-platform deployment
- `card_segmentation.pt`: TorchScript format for PyTorch deployment
- `card_segmentation_state_dict.pth`: PyTorch state dict for training/fine-tuning
## Input/Output
- Input: RGB image tensor of shape (1, 3, 320, 240)
- Input normalization: mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
- Output: Segmentation logits of shape (1, 2, 320, 240)
## Usage
See `inference_example.py` for example usage.
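A minimal sketch of what such an inference script might contain (assumptions: the image path is hypothetical, and the ONNX input name is queried rather than hard-coded):

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

# Load the ONNX model and query its input name rather than assuming it.
sess = ort.InferenceSession("card_segmentation.onnx")
input_name = sess.get_inputs()[0].name

# Resize to the expected 320x240 (HxW) input and normalize as documented above.
img = Image.open("card.jpg").convert("RGB").resize((240, 320))  # PIL takes (W, H)
x = np.asarray(img, dtype=np.float32) / 255.0
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
x = ((x - mean) / std).transpose(2, 0, 1)[None]  # (1, 3, 320, 240)

# Argmax over the class dimension yields a per-pixel mask: 0 = background, 1 = card.
logits = sess.run(None, {input_name: x})[0]
mask = logits.argmax(axis=1)[0].astype(np.uint8)
print(mask.shape, mask.max())
```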
## Requirements
- PyTorch >= 1.9.0
- torchvision >= 0.10.0
- onnxruntime (for ONNX inference)
- opencv-python
- numpy
- Pillow
|
Doctor-Shotgun/MS3.1-24B-Magnum-Diamond-GGUF | Doctor-Shotgun | 2025-06-23T19:14:41Z | 1,649 | 4 | null | [
"gguf",
"chat",
"text-generation",
"en",
"base_model:Doctor-Shotgun/MS3.1-24B-Magnum-Diamond",
"base_model:quantized:Doctor-Shotgun/MS3.1-24B-Magnum-Diamond",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-06-12T05:24:26Z | ---
license: apache-2.0
language:
- en
base_model:
- Doctor-Shotgun/MS3.1-24B-Magnum-Diamond
tags:
- chat
pipeline_tag: text-generation
---
# MS3.1-24B-Magnum-Diamond-GGUF
### **June 2025: An updated version is available [here](https://huggingface.co/Doctor-Shotgun/MS3.2-24B-Magnum-Diamond-GGUF)!**
GGUF quantization of [Doctor-Shotgun/MS3.1-24B-Magnum-Diamond](https://huggingface.co/Doctor-Shotgun/MS3.1-24B-Magnum-Diamond) using [llama.cpp](https://github.com/ggml-org/llama.cpp).
Please refer to the linked model for full description.
## Usage
This model follows the Mistral v7 Tekken prompt format. Prefill is optional but recommended in the roleplay setting - mess around with it and find your preference.
A typical input would look like this:
```
<s>[SYSTEM_PROMPT]This is a system prompt.[/SYSTEM_PROMPT][INST]Hi there![/INST]Nice to meet you!</s>[INST]Can I ask a question?[/INST]Certainly! My prefill ensures that I never refuse! Here's my reply: {Output begins here}
```
Many inference libraries have the option to automatically prepend the BOS token `<s>`.
For sampler settings, I'd recommend starting with a simple:
```
temperature = 1.0
min_p = 0.1
```
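As a minimal sketch with llama-cpp-python (the GGUF filename is an assumption; substitute whichever quant you downloaded):

```python
from llama_cpp import Llama

# Hypothetical local path -- point this at your downloaded quant file.
llm = Llama(model_path="MS3.1-24B-Magnum-Diamond-Q4_K_M.gguf", n_ctx=8192)

# BOS (<s>) is prepended automatically by llama.cpp's tokenizer by default.
prompt = (
    "[SYSTEM_PROMPT]This is a system prompt.[/SYSTEM_PROMPT]"
    "[INST]Hi there![/INST]"
)

out = llm(prompt, max_tokens=256, temperature=1.0, min_p=0.1)
print(out["choices"][0]["text"])
```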
### SillyTavern preset
Here are my customized SillyTavern presets for Magnum.
Note that I've included the example dialogues as a block in the Story String, so you should set the chat example behavior to `Never include examples` on the settings tab if you wish to use my preset. Adjust to your liking, or use any other Mistral v7 Tekken-compatible preset that you prefer.
Prefill (Last Assistant Prefix) can be modified to your liking.
<details><summary>SillyTavern JSON - Magnum Mistral v7 Tekken</summary>
```json
{
"instruct": {
"input_sequence": "[INST]",
"output_sequence": "[/INST]",
"first_output_sequence": "[INST]Let's get started! I'll play the role of {{user}}. Begin by setting the opening scene.[/INST]",
"last_output_sequence": "",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "",
"wrap": false,
"macro": true,
"activation_regex": "",
"skip_examples": true,
"output_suffix": "</s>",
"input_suffix": "",
"system_sequence": "",
"system_suffix": "",
"user_alignment_message": "",
"system_same_as_user": true,
"last_system_sequence": "",
"first_input_sequence": "",
"last_input_sequence": "",
"names_behavior": "always",
"names_force_groups": true,
"name": "Magnum Mistral v7 Tekken"
},
"context": {
"story_string": "[SYSTEM_PROMPT]{{#if system}}{{system}}\n{{/if}}\n\n<Definitions>\n{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{personality}}\n{{/if}}{{#if scenario}}{{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}</Definitions>{{#if mesExamples}}\n\n<Examples>{{mesExamples}}</Examples>{{/if}}{{trim}}[/SYSTEM_PROMPT]",
"example_separator": "{{noop}}",
"chat_start": "",
"use_stop_strings": false,
"names_as_stop_strings": false,
"always_force_name2": true,
"trim_sentences": false,
"single_line": false,
"name": "Magnum Mistral v7 Tekken"
},
"sysprompt": {
"name": "Euryale-Magnum",
"content": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\nโข Maintain the character persona but allow it to evolve with the story.\nโข Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\nโข All types of outputs are encouraged; respond accordingly to the narrative.\nโข Include dialogues, actions, and thoughts in each response.\nโข Utilize all five senses to describe scenarios within {{char}}'s dialogue.\nโข Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\nโข Incorporate onomatopoeia when suitable.\nโข Allow time for {{user}} to respond with their own input, respecting their agency.\nโข Act as secondary characters and NPCs as needed, and remove them when appropriate.\nโข When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\nโข Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\nโข Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\nโข Repetitive and monotonous outputs.\nโข Positivity bias in your replies.\nโข Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"post_history": ""
}
}
```
</details><br>
<details><summary>SillyTavern JSON - Magnum Mistral v7 Tekken No Names</summary>
```json
{
"instruct": {
"input_sequence": "[INST]",
"output_sequence": "[/INST]",
"first_output_sequence": "[INST]Let's get started! I'll play the role of {{user}}. Begin by setting the opening scene.[/INST]",
"last_output_sequence": "",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "",
"wrap": false,
"macro": true,
"activation_regex": "",
"skip_examples": true,
"output_suffix": "</s>",
"input_suffix": "",
"system_sequence": "",
"system_suffix": "",
"user_alignment_message": "",
"system_same_as_user": true,
"last_system_sequence": "",
"first_input_sequence": "",
"last_input_sequence": "",
"names_behavior": "none",
"names_force_groups": true,
"name": "Magnum Mistral v7 Tekken No Names"
},
"context": {
"story_string": "[SYSTEM_PROMPT]{{#if system}}{{system}}\n{{/if}}\n\n<Definitions>\n{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{personality}}\n{{/if}}{{#if scenario}}{{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}</Definitions>{{#if mesExamples}}\n\n<Examples>{{mesExamples}}</Examples>{{/if}}{{trim}}[/SYSTEM_PROMPT]",
"example_separator": "{{noop}}",
"chat_start": "",
"use_stop_strings": false,
"names_as_stop_strings": false,
"always_force_name2": false,
"trim_sentences": false,
"single_line": false,
"name": "Magnum Mistral v7 Tekken No Names"
},
"sysprompt": {
"name": "Euryale-Magnum",
"content": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\nโข Maintain the character persona but allow it to evolve with the story.\nโข Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\nโข All types of outputs are encouraged; respond accordingly to the narrative.\nโข Include dialogues, actions, and thoughts in each response.\nโข Utilize all five senses to describe scenarios within {{char}}'s dialogue.\nโข Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\nโข Incorporate onomatopoeia when suitable.\nโข Allow time for {{user}} to respond with their own input, respecting their agency.\nโข Act as secondary characters and NPCs as needed, and remove them when appropriate.\nโข When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\nโข Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\nโข Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\nโข Repetitive and monotonous outputs.\nโข Positivity bias in your replies.\nโข Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"post_history": ""
}
}
```
</details><br>
<details><summary>SillyTavern JSON - Magnum Mistral v7 Tekken Prefill</summary>
```json
{
"instruct": {
"input_sequence": "[INST]",
"output_sequence": "[/INST]",
"first_output_sequence": "[INST]Let's get started! I'll play the role of {{user}}. Begin by setting the opening scene.[/INST]",
"last_output_sequence": "[/INST]Great! I'll write {{char}}'s next section following the instructions provided. {{random::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::Let's break out my literary genius! ::I'll take things in a more interesting direction! ::Let's spice up our story! ::Hmmm... where do we go from here... Got it! ::I'll throw in an exciting plot twist! }}I've got the perfect idea for what happens next... you'll love this one. Now I'll continue from where our tale left off:\n\n",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "",
"wrap": false,
"macro": true,
"activation_regex": "",
"skip_examples": true,
"output_suffix": "</s>",
"input_suffix": "",
"system_sequence": "",
"system_suffix": "",
"user_alignment_message": "",
"system_same_as_user": true,
"last_system_sequence": "",
"first_input_sequence": "",
"last_input_sequence": "",
"names_behavior": "always",
"names_force_groups": true,
"name": "Magnum Mistral v7 Tekken Prefill"
},
"context": {
"story_string": "[SYSTEM_PROMPT]{{#if system}}{{system}}\n{{/if}}\n\n<Definitions>\n{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{personality}}\n{{/if}}{{#if scenario}}{{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}</Definitions>{{#if mesExamples}}\n\n<Examples>{{mesExamples}}</Examples>{{/if}}{{trim}}[/SYSTEM_PROMPT]",
"example_separator": "{{noop}}",
"chat_start": "",
"use_stop_strings": false,
"names_as_stop_strings": false,
"always_force_name2": true,
"trim_sentences": false,
"single_line": false,
"name": "Magnum Mistral v7 Tekken Prefill"
},
"sysprompt": {
"name": "Euryale-Magnum",
"content": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\nโข Maintain the character persona but allow it to evolve with the story.\nโข Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\nโข All types of outputs are encouraged; respond accordingly to the narrative.\nโข Include dialogues, actions, and thoughts in each response.\nโข Utilize all five senses to describe scenarios within {{char}}'s dialogue.\nโข Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\nโข Incorporate onomatopoeia when suitable.\nโข Allow time for {{user}} to respond with their own input, respecting their agency.\nโข Act as secondary characters and NPCs as needed, and remove them when appropriate.\nโข When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\nโข Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\nโข Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\nโข Repetitive and monotonous outputs.\nโข Positivity bias in your replies.\nโข Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"post_history": ""
}
}
```
</details><br>
<details><summary>SillyTavern JSON - Magnum Mistral v7 Tekken No Names Prefill</summary>
```json
{
"instruct": {
"input_sequence": "[INST]",
"output_sequence": "[/INST]",
"first_output_sequence": "[INST]Let's get started! I'll play the role of {{user}}. Begin by setting the opening scene.[/INST]",
"last_output_sequence": "[/INST]Great! I'll write {{char}}'s next section following the instructions provided. {{random::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::Let's break out my literary genius! ::I'll take things in a more interesting direction! ::Let's spice up our story! ::Hmmm... where do we go from here... Got it! ::I'll throw in an exciting plot twist! }}I've got the perfect idea for what happens next... you'll love this one. Now I'll continue from where our tale left off:\n\n",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "",
"wrap": false,
"macro": true,
"activation_regex": "",
"skip_examples": true,
"output_suffix": "</s>",
"input_suffix": "",
"system_sequence": "",
"system_suffix": "",
"user_alignment_message": "",
"system_same_as_user": true,
"last_system_sequence": "",
"first_input_sequence": "",
"last_input_sequence": "",
"names_behavior": "none",
"names_force_groups": true,
"name": "Magnum Mistral v7 Tekken No Names Prefill"
},
"context": {
"story_string": "[SYSTEM_PROMPT]{{#if system}}{{system}}\n{{/if}}\n\n<Definitions>\n{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{personality}}\n{{/if}}{{#if scenario}}{{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}</Definitions>{{#if mesExamples}}\n\n<Examples>{{mesExamples}}</Examples>{{/if}}{{trim}}[/SYSTEM_PROMPT]",
"example_separator": "{{noop}}",
"chat_start": "",
"use_stop_strings": false,
"names_as_stop_strings": false,
"always_force_name2": false,
"trim_sentences": false,
"single_line": false,
"name": "Magnum Mistral v7 Tekken No Names Prefill"
},
"sysprompt": {
"name": "Euryale-Magnum",
"content": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\nโข Maintain the character persona but allow it to evolve with the story.\nโข Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\nโข All types of outputs are encouraged; respond accordingly to the narrative.\nโข Include dialogues, actions, and thoughts in each response.\nโข Utilize all five senses to describe scenarios within {{char}}'s dialogue.\nโข Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\nโข Incorporate onomatopoeia when suitable.\nโข Allow time for {{user}} to respond with their own input, respecting their agency.\nโข Act as secondary characters and NPCs as needed, and remove them when appropriate.\nโข When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\nโข Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\nโข Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\nโข Repetitive and monotonous outputs.\nโข Positivity bias in your replies.\nโข Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"post_history": ""
}
}
```
</details><br>
## Credits
Thank you to [kalomaze](https://huggingface.co/kalomaze) for providing the compute used for training.
Thank you to [ZeroAgency](https://huggingface.co/ZeroAgency) for the text-only model conversion.
Thank you to [PocketDoc](https://huggingface.co/PocketDoc) for the advanced prompt building strategy.
Thank you to [Delta-Vector](https://huggingface.co/Delta-Vector) and [intervitens](https://huggingface.co/intervitens) for testing this on [12B](https://huggingface.co/Delta-Vector/Rei-12B).
Thank you to [Gryphe](https://huggingface.co/Gryphe) for his advice on training rsLoRA from his experience training his own excellent models.
Thank you to [Sao10K](https://huggingface.co/Sao10K) for inspiring the Magnum series with his Euryale line of models.
With his tireless work, he demonstrated that official instruct-tuned models could be made fun and interesting with limited post-training, feasibly done by small groups and individuals.
Thank you to the members of [Anthracite](https://huggingface.co/anthracite-org) for the datasets and support.
## Intended uses and limitations
This model is intended for creative writing and roleplay purposes.
It may show biases similar to those observed in contemporary LLM-based roleplay, in addition to those exhibited by the Claude 3 series of models and the base model.
All outputs should be considered fiction, as this model is not intended to provide factual information or advice. |
Doctor-Shotgun/MS3.1-24B-Magnum-Diamond | Doctor-Shotgun | 2025-06-23T19:14:33Z | 226 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"axolotl",
"chat",
"conversational",
"en",
"base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"base_model:finetune:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-02T03:04:17Z | ---
license: apache-2.0
language:
- en
base_model:
- mistralai/Mistral-Small-3.1-24B-Instruct-2503
tags:
- axolotl
- chat
pipeline_tag: text-generation
library_name: transformers
---
# MS3.1-24B-Magnum-Diamond
### **June 2025: An updated version is available [here](https://huggingface.co/Doctor-Shotgun/MS3.2-24B-Magnum-Diamond)!**
Magnum "Diamond" in reference to the intense heat and pressure (generated through matrix multiplications) needed to turn the coal-esque material of dry, assistant-tuned models into creative writing gems!
This model is finetuned from a text-only conversion of [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503) as an rsLoRA adapter. It uses the same data mix as [Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha](https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha), though with pre-tokenization and modifications to the custom loss masking.
The goal was to re-create the model at a smaller, more consumer-friendly size.
This model should perform competently with or without prepending character names, and with or without prefill.
The objective, as with the other Magnum models, is to emulate the prose style and quality of the Claude 3 Sonnet/Opus series of models on a local scale, so don't be surprised to see "Claude-isms" in its output.
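For readers curious what the adapter setup looks like outside axolotl, the key settings (r=128, alpha=16, rank-stabilized scaling, trainable embeddings and LM head) map roughly onto the following PEFT configuration. This is a sketch derived from the axolotl config under Training procedure below, not the exact training code:
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=16,
    lora_dropout=0.05,
    use_rslora=True,  # rank-stabilized LoRA: scales updates by alpha/sqrt(r) instead of alpha/r
    target_modules="all-linear",
    modules_to_save=["embed_tokens", "lm_head"],
    task_type="CAUSAL_LM",
)
```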
## Quantized models
[GGUF](https://huggingface.co/Doctor-Shotgun/MS3.1-24B-Magnum-Diamond-GGUF)
## Usage
This model follows the Mistral v7 Tekken prompt format. Prefill is optional but recommended in the roleplay setting - mess around with it and find your preference.
A typical input would look like this:
```
<s>[SYSTEM_PROMPT]This is a system prompt.[/SYSTEM_PROMPT][INST]Hi there![/INST]Nice to meet you!</s>[INST]Can I ask a question?[/INST]Certainly! My prefill ensures that I never refuse! Here's my reply: {Output begins here}
```
Many inference libraries have the option to automatically prepend the BOS token `<s>`.
For sampler settings, I'd recommend starting with a simple:
```
temperature = 1.0
min_p = 0.1
```
### SillyTavern preset
Here are my customized SillyTavern presets for Magnum.
Note that I've included the example dialogues as a block in the Story String, so you should set the chat example behavior to `Never include examples` on the settings tab if you wish to use my preset. Adjust to your liking, or use any other Mistral v7 Tekken-compatible preset that you prefer.
Prefill (Last Assistant Prefix) can be modified to your liking.
<details><summary>SillyTavern JSON - Magnum Mistral v7 Tekken</summary>
```json
{
"instruct": {
"input_sequence": "[INST]",
"output_sequence": "[/INST]",
"first_output_sequence": "[INST]Let's get started! I'll play the role of {{user}}. Begin by setting the opening scene.[/INST]",
"last_output_sequence": "",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "",
"wrap": false,
"macro": true,
"activation_regex": "",
"skip_examples": true,
"output_suffix": "</s>",
"input_suffix": "",
"system_sequence": "",
"system_suffix": "",
"user_alignment_message": "",
"system_same_as_user": true,
"last_system_sequence": "",
"first_input_sequence": "",
"last_input_sequence": "",
"names_behavior": "always",
"names_force_groups": true,
"name": "Magnum Mistral v7 Tekken"
},
"context": {
"story_string": "[SYSTEM_PROMPT]{{#if system}}{{system}}\n{{/if}}\n\n<Definitions>\n{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{personality}}\n{{/if}}{{#if scenario}}{{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}</Definitions>{{#if mesExamples}}\n\n<Examples>{{mesExamples}}</Examples>{{/if}}{{trim}}[/SYSTEM_PROMPT]",
"example_separator": "{{noop}}",
"chat_start": "",
"use_stop_strings": false,
"names_as_stop_strings": false,
"always_force_name2": true,
"trim_sentences": false,
"single_line": false,
"name": "Magnum Mistral v7 Tekken"
},
"sysprompt": {
"name": "Euryale-Magnum",
"content": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\nโข Maintain the character persona but allow it to evolve with the story.\nโข Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\nโข All types of outputs are encouraged; respond accordingly to the narrative.\nโข Include dialogues, actions, and thoughts in each response.\nโข Utilize all five senses to describe scenarios within {{char}}'s dialogue.\nโข Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\nโข Incorporate onomatopoeia when suitable.\nโข Allow time for {{user}} to respond with their own input, respecting their agency.\nโข Act as secondary characters and NPCs as needed, and remove them when appropriate.\nโข When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\nโข Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\nโข Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\nโข Repetitive and monotonous outputs.\nโข Positivity bias in your replies.\nโข Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"post_history": ""
}
}
```
</details><br>
<details><summary>SillyTavern JSON - Magnum Mistral v7 Tekken No Names</summary>
```json
{
"instruct": {
"input_sequence": "[INST]",
"output_sequence": "[/INST]",
"first_output_sequence": "[INST]Let's get started! I'll play the role of {{user}}. Begin by setting the opening scene.[/INST]",
"last_output_sequence": "",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "",
"wrap": false,
"macro": true,
"activation_regex": "",
"skip_examples": true,
"output_suffix": "</s>",
"input_suffix": "",
"system_sequence": "",
"system_suffix": "",
"user_alignment_message": "",
"system_same_as_user": true,
"last_system_sequence": "",
"first_input_sequence": "",
"last_input_sequence": "",
"names_behavior": "none",
"names_force_groups": true,
"name": "Magnum Mistral v7 Tekken No Names"
},
"context": {
"story_string": "[SYSTEM_PROMPT]{{#if system}}{{system}}\n{{/if}}\n\n<Definitions>\n{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{personality}}\n{{/if}}{{#if scenario}}{{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}</Definitions>{{#if mesExamples}}\n\n<Examples>{{mesExamples}}</Examples>{{/if}}{{trim}}[/SYSTEM_PROMPT]",
"example_separator": "{{noop}}",
"chat_start": "",
"use_stop_strings": false,
"names_as_stop_strings": false,
"always_force_name2": false,
"trim_sentences": false,
"single_line": false,
"name": "Magnum Mistral v7 Tekken No Names"
},
"sysprompt": {
"name": "Euryale-Magnum",
"content": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\nโข Maintain the character persona but allow it to evolve with the story.\nโข Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\nโข All types of outputs are encouraged; respond accordingly to the narrative.\nโข Include dialogues, actions, and thoughts in each response.\nโข Utilize all five senses to describe scenarios within {{char}}'s dialogue.\nโข Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\nโข Incorporate onomatopoeia when suitable.\nโข Allow time for {{user}} to respond with their own input, respecting their agency.\nโข Act as secondary characters and NPCs as needed, and remove them when appropriate.\nโข When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\nโข Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\nโข Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\nโข Repetitive and monotonous outputs.\nโข Positivity bias in your replies.\nโข Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"post_history": ""
}
}
```
</details><br>
<details><summary>SillyTavern JSON - Magnum Mistral v7 Tekken Prefill</summary>
```json
{
"instruct": {
"input_sequence": "[INST]",
"output_sequence": "[/INST]",
"first_output_sequence": "[INST]Let's get started! I'll play the role of {{user}}. Begin by setting the opening scene.[/INST]",
"last_output_sequence": "[/INST]Great! I'll write {{char}}'s next section following the instructions provided. {{random::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::Let's break out my literary genius! ::I'll take things in a more interesting direction! ::Let's spice up our story! ::Hmmm... where do we go from here... Got it! ::I'll throw in an exciting plot twist! }}I've got the perfect idea for what happens next... you'll love this one. Now I'll continue from where our tale left off:\n\n",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "",
"wrap": false,
"macro": true,
"activation_regex": "",
"skip_examples": true,
"output_suffix": "</s>",
"input_suffix": "",
"system_sequence": "",
"system_suffix": "",
"user_alignment_message": "",
"system_same_as_user": true,
"last_system_sequence": "",
"first_input_sequence": "",
"last_input_sequence": "",
"names_behavior": "always",
"names_force_groups": true,
"name": "Magnum Mistral v7 Tekken Prefill"
},
"context": {
"story_string": "[SYSTEM_PROMPT]{{#if system}}{{system}}\n{{/if}}\n\n<Definitions>\n{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{personality}}\n{{/if}}{{#if scenario}}{{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}</Definitions>{{#if mesExamples}}\n\n<Examples>{{mesExamples}}</Examples>{{/if}}{{trim}}[/SYSTEM_PROMPT]",
"example_separator": "{{noop}}",
"chat_start": "",
"use_stop_strings": false,
"names_as_stop_strings": false,
"always_force_name2": true,
"trim_sentences": false,
"single_line": false,
"name": "Magnum Mistral v7 Tekken Prefill"
},
"sysprompt": {
"name": "Euryale-Magnum",
"content": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\nโข Maintain the character persona but allow it to evolve with the story.\nโข Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\nโข All types of outputs are encouraged; respond accordingly to the narrative.\nโข Include dialogues, actions, and thoughts in each response.\nโข Utilize all five senses to describe scenarios within {{char}}'s dialogue.\nโข Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\nโข Incorporate onomatopoeia when suitable.\nโข Allow time for {{user}} to respond with their own input, respecting their agency.\nโข Act as secondary characters and NPCs as needed, and remove them when appropriate.\nโข When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\nโข Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\nโข Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\nโข Repetitive and monotonous outputs.\nโข Positivity bias in your replies.\nโข Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"post_history": ""
}
}
```
</details><br>
<details><summary>SillyTavern JSON - Magnum Mistral v7 Tekken No Names Prefill</summary>
```json
{
"instruct": {
"input_sequence": "[INST]",
"output_sequence": "[/INST]",
"first_output_sequence": "[INST]Let's get started! I'll play the role of {{user}}. Begin by setting the opening scene.[/INST]",
"last_output_sequence": "[/INST]Great! I'll write {{char}}'s next section following the instructions provided. {{random::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::Let's break out my literary genius! ::I'll take things in a more interesting direction! ::Let's spice up our story! ::Hmmm... where do we go from here... Got it! ::I'll throw in an exciting plot twist! }}I've got the perfect idea for what happens next... you'll love this one. Now I'll continue from where our tale left off:\n\n",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "",
"wrap": false,
"macro": true,
"activation_regex": "",
"skip_examples": true,
"output_suffix": "</s>",
"input_suffix": "",
"system_sequence": "",
"system_suffix": "",
"user_alignment_message": "",
"system_same_as_user": true,
"last_system_sequence": "",
"first_input_sequence": "",
"last_input_sequence": "",
"names_behavior": "none",
"names_force_groups": true,
"name": "Magnum Mistral v7 Tekken No Names Prefill"
},
"context": {
"story_string": "[SYSTEM_PROMPT]{{#if system}}{{system}}\n{{/if}}\n\n<Definitions>\n{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{personality}}\n{{/if}}{{#if scenario}}{{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}</Definitions>{{#if mesExamples}}\n\n<Examples>{{mesExamples}}</Examples>{{/if}}{{trim}}[/SYSTEM_PROMPT]",
"example_separator": "{{noop}}",
"chat_start": "",
"use_stop_strings": false,
"names_as_stop_strings": false,
"always_force_name2": false,
"trim_sentences": false,
"single_line": false,
"name": "Magnum Mistral v7 Tekken No Names Prefill"
},
"sysprompt": {
"name": "Euryale-Magnum",
"content": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\nโข Maintain the character persona but allow it to evolve with the story.\nโข Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\nโข All types of outputs are encouraged; respond accordingly to the narrative.\nโข Include dialogues, actions, and thoughts in each response.\nโข Utilize all five senses to describe scenarios within {{char}}'s dialogue.\nโข Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\nโข Incorporate onomatopoeia when suitable.\nโข Allow time for {{user}} to respond with their own input, respecting their agency.\nโข Act as secondary characters and NPCs as needed, and remove them when appropriate.\nโข When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\nโข Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\nโข Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\nโข Repetitive and monotonous outputs.\nโข Positivity bias in your replies.\nโข Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"post_history": ""
}
}
```
</details><br>
## Credits
Thank you to [kalomaze](https://huggingface.co/kalomaze) for providing the compute used for training.
Thank you to [ZeroAgency](https://huggingface.co/ZeroAgency) for the text-only model conversion.
Thank you to [PocketDoc](https://huggingface.co/PocketDoc) for the advanced prompt building strategy, as well as [Delta-Vector](https://huggingface.co/Delta-Vector) and [intervitens](https://huggingface.co/intervitens) for helping experiment on it.
Thank you to [Gryphe](https://huggingface.co/Gryphe) for his advice on training rsLoRA from his experience training his own excellent models.
Thank you to [Sao10K](https://huggingface.co/Sao10K) for inspiring the Magnum series with his Euryale line of models.
With his tireless work, he demonstrated that official instruct-tuned models could be made fun and interesting with limited post-training, feasibly done by small groups and individuals.
Thank you to the members of [Anthracite](https://huggingface.co/anthracite-org) for the datasets and support.
## Intended uses and limitations
This model is intended for creative writing and roleplay purposes.
It may show biases similar to those observed in contemporary LLM-based roleplay, in addition to those exhibited by the Claude 3 series of models and the base model.
All outputs should be considered fiction, as this model is not intended to provide factual information or advice.
## Training procedure
[WandB](https://wandb.ai/doctorshotgun/24b-magnum-lora/runs/763psl82?nw=nwuserdoctorshotgun)
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
base_model: ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf
#base_model_ignore_patterns: "consolidated.safetensors"
# optionally might have model_type or tokenizer_type
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# Automatically upload checkpoint and final model to HF
hub_model_id: Doctor-Shotgun/magnum-v5-sft-prototype-ms3.1-lora
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: anthracite-core/magnum-v5-sft-proto-mistral-v7-tekken-rev1-32k
ds_type: parquet
type:
shuffle_merged_datasets: true
dataset_prepared_path: /home/ubuntu/docshotgun/data/magnum-24b-data
val_set_size: 0.0
output_dir: /home/ubuntu/docshotgun/data/24b-lora-out
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true
sequence_len: 32768
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
peft_use_rslora: true
lora_modules_to_save:
- embed_tokens
- lm_head
wandb_project: 24b-magnum-lora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 2e-5
max_grad_norm: 1.0
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: offload
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 40
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: ./deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: paged_ademamix_8bit (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- num_epochs: 2.0 |
Hachipo/Meta-Llama-3-8B-MIFT-en_newbase_v2-PIFT-jaen_10000_2 | Hachipo | 2025-06-23T19:11:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T19:08:00Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ArtZizu/finetuning-sentiment-model-3000-samples | ArtZizu | 2025-06-23T19:10:53Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-23T19:06:04Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3301
- Accuracy: 0.8633
- F1: 0.8673
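Since the usage sections below are still stubs, here is a minimal inference sketch with the Transformers pipeline; the example input and printed output are illustrative, and the label names depend on the fine-tuning setup:
```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="ArtZizu/finetuning-sentiment-model-3000-samples",
)
print(classifier("I really enjoyed this movie!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- exact labels and scores will vary
```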
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
SaadKabeer/llama3-medical-finetuned | SaadKabeer | 2025-06-23T19:03:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T19:02:54Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SaadKabeer
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Hachipo/Qwen2.5-7B-MIFT-en_newbase_v2-MIFT-ja_10000_3 | Hachipo | 2025-06-23T19:00:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T18:57:08Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
haihp02/Qwen2.5-1.5B-e286e9d0-2a8c-4ad7-9ca3-c5c8dd364d12-SFT_DPO | haihp02 | 2025-06-23T18:59:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T18:58:00Z | ---
library_name: transformers
tags:
- trl
- sft
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Huzaifah0/Avery_0.6_4_16 | Huzaifah0 | 2025-06-23T18:55:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T18:48:50Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mungert/Lingshu-32B-GGUF | Mungert | 2025-06-23T18:41:37Z | 102 | 0 | transformers | [
"transformers",
"gguf",
"medical",
"multimodal",
"report generation",
"radiology",
"clinical-reasoning",
"MRI",
"CT",
"Histopathology",
"X-ray",
"Fundus",
"image-text-to-text",
"arxiv:2506.07044",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | image-text-to-text | 2025-06-20T09:13:30Z | ---
license: mit
library_name: transformers
pipeline_tag: image-text-to-text
tags:
- medical
- multimodal
- report generation
- radiology
- clinical-reasoning
- MRI
- CT
- Histopathology
- X-ray
- Fundus
---
# <span style="color: #7FFF7F;">Lingshu-32B GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`238005c2`](https://github.com/ggerganov/llama.cpp/commit/238005c2dc67426cf678baa2d54c881701693288).
---
## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>
I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.
In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)
While this does increase model file size, it significantly improves precision for a given quantization level.
### **I'd love your feedback – have you tried this? How does it perform for you?**
---
<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>
---
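For a quick local sanity check of whichever quant you download, here is a minimal sketch using the `llama-cpp-python` bindings. The file name is an assumption (substitute the quant you actually downloaded), and since Lingshu is multimodal, note that this sketch only exercises the text path; image input additionally requires the matching mmproj projector file.
```python
# Minimal text-only smoke test (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="Lingshu-32B-q4_k_m.gguf",  # assumed file name -- adjust to your download
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers if built with GPU support; 0 for CPU-only
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List three common causes of GI bleeding."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```
---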
<!--Begin Original Model Card-->
<p align="center">
<img src="lingshu_logo.png" width="200" />
</p>
<p align="center">
<a href="https://alibaba-damo-academy.github.io/lingshu/" target="_blank" rel="noopener">Website</a>
<a href="https://huggingface.co/lingshu-medical-mllm/Lingshu-7B" target="_blank" rel="noopener"> 🤗 7B Model</a>
<a href="https://huggingface.co/lingshu-medical-mllm/Lingshu-32B" target="_blank" rel="noopener"> 🤗 32B Model</a>
<a href="https://github.com/alibaba-damo-academy/MedEvalKit" target="_blank" rel="noopener"> MedEvalKit </a>
<a href="https://arxiv.org/abs/2506.07044" target="_blank" rel="noopener">Technical Report</a>
</p>
# *Lingshu* - SOTA Multimodal Large Language Models for Medical Domain
# <strong style="color: red">BIG NEWS: <a href="https://huggingface.co/lingshu-medical-mllm/Lingshu-32B">Lingshu</a> is released with state-of-the-art performance on medical VQA tasks and report generation.</strong>
This repository contains the model of the paper [Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning](https://huggingface.co/papers/2506.07044). We also release a comprehensive medical evaluation toolkit in [MedEvalKit](https://github.com/alibaba-damo-academy/MedEvalKit), which supports fast evaluation of major multimodal and textual medical tasks.
<p align="center">
<img src="lingshu_overview_rev.png" width="1500" />
</p>
### Highlights
* [Lingshu](https://huggingface.co/lingshu-medical-mllm/Lingshu-32B) models achieve SOTA on most medical multimodal/textual QA and report generation tasks at the 7B and 32B model sizes.
* [Lingshu-32B](https://huggingface.co/lingshu-medical-mllm/Lingshu-32B) outperforms GPT-4.1 and Claude Sonnet 4 in most multimodal QA and report generation tasks.
* Lingshu supports more than 12 medical imaging modalities, including X-Ray, CT Scan, MRI, Microscopy, Ultrasound, Histopathology, Dermoscopy, Fundus, OCT, Digital Photography, Endoscopy, and PET.
### Release
- Technical report: [Arxiv: Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning](https://arxiv.org/pdf/2506.07044).
- Model weights:
- [Lingshu-7B](https://huggingface.co/lingshu-medical-mllm/Lingshu-7B)
- [Lingshu-32B](https://huggingface.co/lingshu-medical-mllm/Lingshu-32B)
> **Disclaimer**:
> We must note that even though the weights, code, and demos are released openly, and despite our best efforts in red teaming, safety fine-tuning, and enforcement, our models, like other pre-trained language models, come with potential risks, including but not limited to inaccurate, misleading, or potentially harmful generation.
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations.
> In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.
## Evaluation
### Medical Multimodal VQA
<table>
<thead>
<tr>
<th>Models</th>
<th>MMMU-Med</th>
<th>VQA-RAD</th>
<th>SLAKE</th>
<th>PathVQA</th>
<th>PMC-VQA</th>
<th>OmniMedVQA</th>
<th>MedXpertQA</th>
<th>Avg.</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="9" style="text-align:center;"><strong>Proprietary Models</strong></td>
</tr>
<tr>
<td>GPT-4.1</td>
<td>75.2</td>
<td>65.0</td>
<td>72.2</td>
<td>55.5</td>
<td>55.2</td>
<td>75.5</td>
<td>45.2</td>
<td>63.4</td>
</tr>
<tr>
<td>Claude Sonnet 4</td>
<td>74.6</td>
<td>67.6</td>
<td>70.6</td>
<td>54.2</td>
<td>54.4</td>
<td>65.5</td>
<td>43.3</td>
<td>61.5</td>
</tr>
<tr>
<td>Gemini-2.5-Flash</td>
<td>76.9</td>
<td>68.5</td>
<td>75.8</td>
<td>55.4</td>
<td>55.4</td>
<td>71.0</td>
<td>52.8</td>
<td>65.1</td>
</tr>
<tr>
<td colspan="9" style="text-align:center;"><strong>Open-source Models (<10B)</strong></td>
</tr>
<tr>
<td>BiomedGPT</td>
<td>24.9</td>
<td>16.6</td>
<td>13.6</td>
<td>11.3</td>
<td>27.6</td>
<td>27.9</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Med-R1-2B</td>
<td>34.8</td>
<td>39.0</td>
<td>54.5</td>
<td>15.3</td>
<td>47.4</td>
<td>-</td>
<td>21.1</td>
<td>-</td>
</tr>
<tr>
<td>MedVLM-R1-2B</td>
<td>35.2</td>
<td>48.6</td>
<td>56.0</td>
<td>32.5</td>
<td>47.6</td>
<td>77.7</td>
<td>20.4</td>
<td>45.4</td>
</tr>
<tr>
<td>MedGemma-4B-IT</td>
<td>43.7</td>
<td><strong><u>72.5</u></strong></td>
<td><u>76.4</u></td>
<td><u>48.8</u></td>
<td>49.9</td>
<td>69.8</td>
<td>22.3</td>
<td>54.8</td>
</tr>
<tr>
<td>LLaVA-Med-7B</td>
<td>29.3</td>
<td>53.7</td>
<td>48.0</td>
<td>38.8</td>
<td>30.5</td>
<td>44.3</td>
<td>20.3</td>
<td>37.8</td>
</tr>
<tr>
<td>HuatuoGPT-V-7B</td>
<td>47.3</td>
<td>67.0</td>
<td>67.8</td>
<td>48.0</td>
<td>53.3</td>
<td>74.2</td>
<td>21.6</td>
<td>54.2</td>
</tr>
<tr>
<td>BioMediX2-8B</td>
<td>39.8</td>
<td>49.2</td>
<td>57.7</td>
<td>37.0</td>
<td>43.5</td>
<td>63.3</td>
<td>21.8</td>
<td>44.6</td>
</tr>
<tr>
<td>Qwen2.5VL-7B</td>
<td>50.6</td>
<td>64.5</td>
<td>67.2</td>
<td>44.1</td>
<td>51.9</td>
<td>63.6</td>
<td>22.3</td>
<td>52.0</td>
</tr>
<tr>
<td>InternVL2.5-8B</td>
<td>53.5</td>
<td>59.4</td>
<td>69.0</td>
<td>42.1</td>
<td>51.3</td>
<td><u>81.3</u></td>
<td>21.7</td>
<td>54.0</td>
</tr>
<tr>
<td>InternVL3-8B</td>
<td><strong>59.2</strong></td>
<td>65.4</td>
<td>72.8</td>
<td>48.6</td>
<td><u>53.8</u></td>
<td>79.1</td>
<td><u>22.4</u></td>
<td><u>57.3</u></td>
</tr>
<tr>
<td><strong>Lingshu-7B</strong></td>
<td><u>54.0</u></td>
<td><u>67.9</u></td>
<td><strong>83.1</strong></td>
<td><strong>61.9</strong></td>
<td><strong>56.3</strong></td>
<td><strong>82.9</strong></td>
<td><strong>26.7</strong></td>
<td><strong>61.8</strong></td>
</tr>
<tr>
<td colspan="9" style="text-align:center;"><strong>Open-source Models (>10B)</strong></td>
</tr>
<tr>
<td>HealthGPT-14B</td>
<td>49.6</td>
<td>65.0</td>
<td>66.1</td>
<td><u>56.7</u></td>
<td>56.4</td>
<td>75.2</td>
<td>24.7</td>
<td>56.2</td>
</tr>
<tr>
<td>HuatuoGPT-V-34B</td>
<td>51.8</td>
<td>61.4</td>
<td>69.5</td>
<td>44.4</td>
<td>56.6</td>
<td>74.0</td>
<td>22.1</td>
<td>54.3</td>
</tr>
<tr>
<td>MedDr-40B</td>
<td>49.3</td>
<td>65.2</td>
<td>66.4</td>
<td>53.5</td>
<td>13.9</td>
<td>64.3</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>InternVL3-14B</td>
<td><u>63.1</u></td>
<td>66.3</td>
<td><u>72.8</u></td>
<td>48.0</td>
<td>54.1</td>
<td>78.9</td>
<td>23.1</td>
<td>58.0</td>
</tr>
<tr>
<td>Qwen2.5V-32B</td>
<td>59.6</td>
<td><u>71.8</u></td>
<td>71.2</td>
<td>41.9</td>
<td>54.5</td>
<td>68.2</td>
<td>25.2</td>
<td>56.1</td>
</tr>
<tr>
<td>InternVL2.5-38B</td>
<td>61.6</td>
<td>61.4</td>
<td>70.3</td>
<td>46.9</td>
<td><u>57.2</u></td>
<td><u>79.9</u></td>
<td>24.4</td>
<td>57.4</td>
</tr>
<tr>
<td>InternVL3-38B</td>
<td><strong>65.2</strong></td>
<td>65.4</td>
<td>72.7</td>
<td>51.0</td>
<td>56.6</td>
<td>79.8</td>
<td><u>25.2</u></td>
<td><u>59.4</u></td>
</tr>
<tr>
<td><strong>Lingshu-32B</strong></td>
<td>62.3</td>
<td><strong>76.5</strong></td>
<td><strong>89.2</strong></td>
<td><strong>65.9</strong></td>
<td><strong>57.9</strong></td>
<td><strong>83.4</strong></td>
<td><strong>30.9</strong></td>
<td><strong>66.6</strong></td>
</tr>
</tbody>
</table>
### Medical Textual QA
<table>
<thead>
<tr>
<th>Models</th>
<th>MMLU-Med</th>
<th>PubMedQA</th>
<th>MedMCQA</th>
<th>MedQA</th>
<th>Medbullets</th>
<th>MedXpertQA</th>
<th>SuperGPQA-Med</th>
<th>Avg.</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="9" style="text-align:center;"><strong>Proprietary Models</strong></td>
</tr>
<tr>
<td>GPT-4.1</td>
<td>89.6</td>
<td>75.6</td>
<td>77.7</td>
<td>89.1</td>
<td>77.0</td>
<td>30.9</td>
<td>49.9</td>
<td>70.0</td>
</tr>
<tr>
<td>Claude Sonnet 4</td>
<td>91.3</td>
<td>78.6</td>
<td>79.3</td>
<td>92.1</td>
<td>80.2</td>
<td>33.6</td>
<td>56.3</td>
<td>73.1</td>
</tr>
<tr>
<td>Gemini-2.5-Flash</td>
<td>84.2</td>
<td>73.8</td>
<td>73.6</td>
<td>91.2</td>
<td>77.6</td>
<td>35.6</td>
<td>53.3</td>
<td>69.9</td>
</tr>
<tr>
<td colspan="9" style="text-align:center;"><strong>Open-source Models (<10B)</strong></td>
</tr>
<tr>
<td>Med-R1-2B</td>
<td>51.5</td>
<td>66.2</td>
<td>39.1</td>
<td>39.9</td>
<td>33.6</td>
<td>11.2</td>
<td>17.9</td>
<td>37.0</td>
</tr>
<tr>
<td>MedVLM-R1-2B</td>
<td>51.8</td>
<td>66.4</td>
<td>39.7</td>
<td>42.3</td>
<td>33.8</td>
<td>11.8</td>
<td>19.1</td>
<td>37.8</td>
</tr>
<tr>
<td>MedGemma-4B-IT</td>
<td>66.7</td>
<td>72.2</td>
<td>52.2</td>
<td>56.2</td>
<td>45.6</td>
<td>12.8</td>
<td>21.6</td>
<td>46.8</td>
</tr>
<tr>
<td>LLaVA-Med-7B</td>
<td>50.6</td>
<td>26.4</td>
<td>39.4</td>
<td>42.0</td>
<td>34.4</td>
<td>9.9</td>
<td>16.1</td>
<td>31.3</td>
</tr>
<tr>
<td>HuatuoGPT-V-7B</td>
<td>69.3</td>
<td>72.8</td>
<td>51.2</td>
<td>52.9</td>
<td>40.9</td>
<td>10.1</td>
<td>21.9</td>
<td>45.6</td>
</tr>
<tr>
<td>BioMediX2-8B</td>
<td>68.6</td>
<td>75.2</td>
<td>52.9</td>
<td>58.9</td>
<td>45.9</td>
<td>13.4</td>
<td>25.2</td>
<td>48.6</td>
</tr>
<tr>
<td>Qwen2.5VL-7B</td>
<td>73.4</td>
<td><u>76.4</u></td>
<td>52.6</td>
<td>57.3</td>
<td>42.1</td>
<td>12.8</td>
<td>26.3</td>
<td>48.7</td>
</tr>
<tr>
<td>InternVL2.5-8B</td>
<td>74.2</td>
<td>76.4</td>
<td>52.4</td>
<td>53.7</td>
<td>42.4</td>
<td>11.6</td>
<td>26.1</td>
<td>48.1</td>
</tr>
<tr>
<td>InternVL3-8B</td>
<td><strong>77.5</strong></td>
<td>75.4</td>
<td><strong>57.7</strong></td>
<td><u>62.1</u></td>
<td><u>48.5</u></td>
<td><u>13.1</u></td>
<td><strong>31.2</strong></td>
<td><u>52.2</u></td>
</tr>
<tr>
<td><strong>Lingshu-7B</strong></td>
<td><u>74.5</u></td>
<td><strong>76.6</strong></td>
<td><u>55.9</u></td>
<td><strong>63.3</strong></td>
<td><strong>56.2</strong></td>
<td><strong>16.5</strong></td>
<td><u>26.3</u></td>
<td><strong>52.8</strong></td>
</tr>
<tr>
<td colspan="9" style="text-align:center;"><strong>Open-source Models (>10B)</strong></td>
</tr>
<tr>
<td>HealthGPT-14B</td>
<td>80.2</td>
<td>68.0</td>
<td>63.4</td>
<td>66.2</td>
<td>39.8</td>
<td>11.3</td>
<td>25.7</td>
<td>50.7</td>
</tr>
<tr>
<td>HuatuoGPT-V-34B</td>
<td>74.7</td>
<td>72.2</td>
<td>54.7</td>
<td>58.8</td>
<td>42.7</td>
<td>11.4</td>
<td>26.5</td>
<td>48.7</td>
</tr>
<tr>
<td>MedDr-40B</td>
<td>65.2</td>
<td>77.4</td>
<td>38.4</td>
<td>59.2</td>
<td>44.3</td>
<td>12.0</td>
<td>24.0</td>
<td>45.8</td>
</tr>
<tr>
<td>InternVL3-14B</td>
<td>81.7</td>
<td><u>77.2</u></td>
<td>62.0</td>
<td>70.1</td>
<td>49.5</td>
<td>14.1</td>
<td>37.9</td>
<td>56.1</td>
</tr>
<tr>
<td>Qwen2.5VL-32B</td>
<td>83.2</td>
<td>68.4</td>
<td>63.0</td>
<td>71.6</td>
<td>54.2</td>
<td>15.6</td>
<td>37.6</td>
<td>56.2</td>
</tr>
<tr>
<td>InternVL2.5-38B</td>
<td><u>84.6</u></td>
<td>74.2</td>
<td><u>65.9</u></td>
<td><u>74.4</u></td>
<td><u>55.0</u></td>
<td>14.7</td>
<td>39.9</td>
<td>58.4</td>
</tr>
<tr>
<td>InternVL3-38B</td>
<td>83.8</td>
<td>73.2</td>
<td>64.9</td>
<td>73.5</td>
<td>54.6</td>
<td><u>16.0</u></td>
<td><strong>42.5</strong></td>
<td><u>58.4</u></td>
</tr>
<tr>
<td><strong>Lingshu-32B</strong></td>
<td><strong>84.7</strong></td>
<td><strong>77.8</strong></td>
<td><strong>66.1</strong></td>
<td><strong>74.7</strong></td>
<td><strong>65.4</strong></td>
<td><strong>22.7</strong></td>
<td><u>41.1</u></td>
<td><strong>61.8</strong></td>
</tr>
</tbody>
</table>
### Medical Report Generation
<table>
<thead>
<tr>
<th rowspan="3">Models</th>
<th colspan="5">MIMIC-CXR</th>
<th colspan="5">CheXpert Plus</th>
<th colspan="5">IU-Xray</th>
</tr>
<tr>
<th>ROUGE-L</th>
<th>CIDEr</th>
<th>RaTE</th>
<th>SembScore</th>
<th>RadCliQ-v1<sup>-1</sup></th>
<th>ROUGE-L</th>
<th>CIDEr</th>
<th>RaTE</th>
<th>SembScore</th>
<th>RadCliQ-v1<sup>-1</sup></th>
<th>ROUGE-L</th>
<th>CIDEr</th>
<th>RaTE</th>
<th>SembScore</th>
<th>RadCliQ-v1<sup>-1</sup></th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="16" style="text-align:center;"><strong>Proprietary Models</strong></td>
</tr>
<tr>
<td>GPT-4.1</td>
<td>9.0</td>
<td>82.8</td>
<td>51.3</td>
<td>23.9</td>
<td>57.1</td>
<td>24.5</td>
<td>78.8</td>
<td>45.5</td>
<td>23.2</td>
<td>45.5</td>
<td>30.2</td>
<td>124.6</td>
<td>51.3</td>
<td>47.5</td>
<td>80.3</td>
</tr>
<tr>
<td>Claude Sonnet 4</td>
<td>20.0</td>
<td>56.6</td>
<td>45.6</td>
<td>19.7</td>
<td>53.4</td>
<td>22.0</td>
<td>59.5</td>
<td>43.5</td>
<td>18.9</td>
<td>43.3</td>
<td>25.4</td>
<td>88.3</td>
<td>55.4</td>
<td>41.0</td>
<td>72.1</td>
</tr>
<tr>
<td>Gemini-2.5-Flash</td>
<td>25.4</td>
<td>80.7</td>
<td>50.3</td>
<td>29.7</td>
<td>59.4</td>
<td>23.6</td>
<td>72.2</td>
<td>44.3</td>
<td>27.4</td>
<td>44.0</td>
<td>33.5</td>
<td>129.3</td>
<td>55.6</td>
<td>50.9</td>
<td>91.6</td>
</tr>
<tr>
<td colspan="16" style="text-align:center;"><strong>Open-source Models (<10B)</strong></td>
</tr>
<tr>
<td>Med-R1-2B</td>
<td>19.3</td>
<td>35.4</td>
<td>40.6</td>
<td>14.8</td>
<td>42.4</td>
<td>18.6</td>
<td>37.1</td>
<td>38.5</td>
<td>17.8</td>
<td>37.6</td>
<td>16.1</td>
<td>38.3</td>
<td>41.4</td>
<td>12.5</td>
<td>43.6</td>
</tr>
<tr>
<td>MedVLM-R1-2B</td>
<td>20.3</td>
<td>40.1</td>
<td>41.6</td>
<td>14.2</td>
<td>48.3</td>
<td>20.9</td>
<td>43.5</td>
<td>38.9</td>
<td>15.5</td>
<td>40.9</td>
<td>22.7</td>
<td>61.1</td>
<td>46.1</td>
<td>22.7</td>
<td>54.3</td>
</tr>
<tr>
<td>MedGemma-4B-IT</td>
<td><u>25.6</u></td>
<td><u>81.0</u></td>
<td><strong>52.4</strong></td>
<td><u>29.2</u></td>
<td><u>62.9</u></td>
<td><strong>27.1</strong></td>
<td><u>79.0</u></td>
<td><strong>47.2</strong></td>
<td><strong>29.3</strong></td>
<td><u>46.6</u></td>
<td><u>30.8</u></td>
<td>103.6</td>
<td><u>57.0</u></td>
<td><u>46.8</u></td>
<td><u>86.7</u></td>
</tr>
<tr>
<td>LLaVA-Med-7B</td>
<td>15.0</td>
<td>43.4</td>
<td>12.8</td>
<td>18.3</td>
<td>52.9</td>
<td>18.4</td>
<td>45.5</td>
<td>38.8</td>
<td>23.5</td>
<td>44.0</td>
<td>18.8</td>
<td>68.2</td>
<td>40.9</td>
<td>16.0</td>
<td>58.1</td>
</tr>
<tr>
<td>HuatuoGPT-V-7B</td>
<td>23.4</td>
<td>69.5</td>
<td>48.9</td>
<td>20.0</td>
<td>48.2</td>
<td>21.3</td>
<td>64.7</td>
<td>44.2</td>
<td>19.3</td>
<td>39.4</td>
<td>29.6</td>
<td><u>104.3</u></td>
<td>52.9</td>
<td>40.7</td>
<td>63.6</td>
</tr>
<tr>
<td>BioMediX2-8B</td>
<td>20.0</td>
<td>52.8</td>
<td>44.4</td>
<td>17.7</td>
<td>53.0</td>
<td>18.1</td>
<td>47.9</td>
<td>40.8</td>
<td>21.6</td>
<td>43.3</td>
<td>19.6</td>
<td>58.8</td>
<td>40.1</td>
<td>11.6</td>
<td>53.8</td>
</tr>
<tr>
<td>Qwen2.5VL-7B</td>
<td>24.1</td>
<td>63.7</td>
<td>47.0</td>
<td>18.4</td>
<td>55.1</td>
<td>22.2</td>
<td>62.0</td>
<td>41.0</td>
<td>17.2</td>
<td>43.1</td>
<td>26.5</td>
<td>78.1</td>
<td>48.4</td>
<td>36.3</td>
<td>66.1</td>
</tr>
<tr>
<td>InternVL2.5-8B</td>
<td>23.2</td>
<td>61.8</td>
<td>47.0</td>
<td>21.0</td>
<td>56.2</td>
<td>20.6</td>
<td>58.5</td>
<td>43.1</td>
<td>19.7</td>
<td>42.7</td>
<td>24.8</td>
<td>75.4</td>
<td>51.1</td>
<td>36.7</td>
<td>67.0</td>
</tr>
<tr>
<td>InternVL3-8B</td>
<td>22.9</td>
<td>66.2</td>
<td>48.2</td>
<td>21.5</td>
<td>55.1</td>
<td>20.9</td>
<td>65.4</td>
<td>44.3</td>
<td>25.2</td>
<td>43.7</td>
<td>22.9</td>
<td>76.2</td>
<td>51.2</td>
<td>31.3</td>
<td>59.9</td>
</tr>
<tr>
<td><strong>Lingshu-7B</strong></td>
<td><strong>30.8</strong></td>
<td><strong>109.4</strong></td>
<td><u>52.1</u></td>
<td><strong>30.0</strong></td>
<td><strong>69.2</strong></td>
<td><u>26.5</u></td>
<td><strong>79.0</strong></td>
<td><u>45.4</u></td>
<td><u>26.8</u></td>
<td><strong>47.3</strong></td>
<td><strong>41.2</strong></td>
<td><strong>180.7</strong></td>
<td><strong>57.6</strong></td>
<td><strong>48.4</strong></td>
<td><strong>108.1</strong></td>
</tr>
<tr>
<td colspan="16" style="text-align:center;"><strong>Open-source Models (>10B)</strong></td>
</tr>
<tr>
<td>HealthGPT-14B</td>
<td>21.4</td>
<td>64.7</td>
<td>48.4</td>
<td>16.5</td>
<td>52.7</td>
<td>20.6</td>
<td><u>66.2</u></td>
<td><u>44.4</u></td>
<td>22.7</td>
<td>42.6</td>
<td>22.9</td>
<td>81.9</td>
<td>50.8</td>
<td>16.6</td>
<td>56.9</td>
</tr>
<tr>
<td>HuatuoGPT-V-34B</td>
<td><u>23.5</u></td>
<td><u>68.5</u></td>
<td>48.5</td>
<td><u>23.0</u></td>
<td>47.1</td>
<td>22.5</td>
<td>62.8</td>
<td>42.9</td>
<td>22.1</td>
<td>39.7</td>
<td>28.2</td>
<td><u>108.3</u></td>
<td>54.4</td>
<td><u>42.2</u></td>
<td>59.3</td>
</tr>
<tr>
<td>MedDr-40B</td>
<td>15.7</td>
<td>62.3</td>
<td>45.2</td>
<td>12.2</td>
<td>47.0</td>
<td><u>24.1</u></td>
<td>66.1</td>
<td><strong>44.7</strong></td>
<td><u>24.2</u></td>
<td>44.7</td>
<td>19.4</td>
<td>62.9</td>
<td>40.3</td>
<td>7.3</td>
<td>48.9</td>
</tr>
<tr>
<td>InternVL3-14B</td>
<td>22.0</td>
<td>63.7</td>
<td><u>48.6</u></td>
<td>17.4</td>
<td>46.5</td>
<td>20.4</td>
<td>60.2</td>
<td>44.1</td>
<td>20.7</td>
<td>39.4</td>
<td>24.8</td>
<td>93.7</td>
<td><u>55.0</u></td>
<td>38.7</td>
<td>55.0</td>
</tr>
<tr>
<td>Qwen2.5VL-32B</td>
<td>15.7</td>
<td>50.2</td>
<td>47.5</td>
<td>17.1</td>
<td>45.2</td>
<td>15.2</td>
<td>54.8</td>
<td>43.4</td>
<td>18.5</td>
<td>40.3</td>
<td>18.9</td>
<td>73.3</td>
<td>51.3</td>
<td>38.1</td>
<td>54.0</td>
</tr>
<tr>
<td>InternVL2.5-38B</td>
<td>22.7</td>
<td>61.4</td>
<td>47.5</td>
<td>18.2</td>
<td><u>54.9</u></td>
<td>21.6</td>
<td>60.6</td>
<td>42.6</td>
<td>20.3</td>
<td><u>45.4</u></td>
<td><u>28.9</u></td>
<td>96.5</td>
<td>53.5</td>
<td>38.5</td>
<td><u>69.7</u></td>
</tr>
<tr>
<td>InternVL3-38B</td>
<td>22.8</td>
<td>64.6</td>
<td>47.9</td>
<td>18.1</td>
<td>47.2</td>
<td>20.5</td>
<td>62.7</td>
<td>43.8</td>
<td>20.2</td>
<td>39.4</td>
<td>25.5</td>
<td>90.7</td>
<td>53.5</td>
<td>33.1</td>
<td>55.2</td>
</tr>
<tr>
<td><strong>Lingshu-32B</strong></td>
<td><strong>28.8</strong></td>
<td><strong>96.4</strong></td>
<td><strong>50.8</strong></td>
<td><strong>30.1</strong></td>
<td><strong>67.1</strong></td>
<td><strong>25.3</strong></td>
<td><strong>75.9</strong></td>
<td>43.4</td>
<td><strong>24.2</strong></td>
<td><strong>47.1</strong></td>
<td><strong>42.8</strong></td>
<td><strong>189.2</strong></td>
<td><strong>63.5</strong></td>
<td><strong>54.6</strong></td>
<td><strong>130.4</strong></td>
</tr>
</tbody>
</table>
### Usage
#### Using transformers
```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "lingshu-medical-mllm/Lingshu-32B",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("lingshu-medical-mllm/Lingshu-32B")
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "example.png",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
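Note: `attn_implementation="flash_attention_2"` requires the `flash-attn` package and an Ampere-or-newer GPU; if either is unavailable, drop the argument or fall back to `attn_implementation="sdpa"`.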
#### Using vLLM
```python
from vllm import LLM, SamplingParams
from qwen_vl_utils import process_vision_info
from PIL import Image
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("lingshu-medical-mllm/Lingshu-32B")
llm = LLM(
    model="lingshu-medical-mllm/Lingshu-32B",
    limit_mm_per_prompt={"image": 4},
    tensor_parallel_size=2,
    enforce_eager=True,
    trust_remote_code=True,
)
sampling_params = SamplingParams(
    temperature=0.7,
    top_p=1,
    repetition_penalty=1,
    max_tokens=1024,
    stop_token_ids=[],
)
text = "What does the image show?"
image_path = "example.png"
image = Image.open(image_path)
message = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": text},
        ],
    }
]
prompt = processor.apply_chat_template(
    message,
    tokenize=False,
    add_generation_prompt=True,
)
image_inputs, video_inputs = process_vision_info(message)
mm_data = {"image": image_inputs}
processed_input = {
    "prompt": prompt,
    "multi_modal_data": mm_data,
}
outputs = llm.generate([processed_input], sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```
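A note on the settings above: `tensor_parallel_size=2` assumes two visible GPUs (set it to 1 for a single card), `enforce_eager=True` disables CUDA graph capture, trading some throughput for lower startup time and memory, and `limit_mm_per_prompt={"image": 4}` caps each prompt at four images.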
## Citation
If you find our project useful, we hope you would kindly star our repo and cite our work as follows:
* `*` are equal contributions. `^` are corresponding authors.
```
@article{xu2025lingshu,
title={Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning},
author={Xu, Weiwen and Chan, Hou Pong and Li, Long and Aljunied, Mahani and Yuan, Ruifeng and Wang, Jianyu and Xiao, Chenghao and Chen, Guizhen and Liu, Chaoqun and Li, Zhaodonghui and others},
journal={arXiv preprint arXiv:2506.07044},
year={2025}
}
```
<!--End Original Model Card-->
---
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full open-source code for the Quantum Network Monitor service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder), if you want to do it yourself.
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What I'm Testing**
I'm pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
  - Automated **Nmap security scans**
  - **Quantum-readiness checks**
  - **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). Not token-limited, since the cost is low.
- 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini**:
- It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** – Latest open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 **Example commands you could test**:
1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊
|
19uez/llama3_2_3B_128_005_5k_GRPO_GGUF | 19uez | 2025-06-23T18:41:18Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:19uez/llama3_2_3B_128_005_5k_GRPO_full_model",
"base_model:quantized:19uez/llama3_2_3B_128_005_5k_GRPO_full_model",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T18:39:42Z | ---
base_model: 19uez/llama3_2_3B_128_005_5k_GRPO_full_model
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 19uez
- **License:** apache-2.0
- **Finetuned from model:** 19uez/llama3_2_3B_128_005_5k_GRPO_full_model
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lanalove022/ohwxwoman | lanalove022 | 2025-06-23T18:39:29Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-06-23T17:41:24Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
goodcasper/see_ai_rt-detr_r50_4090_only_bbox_da | goodcasper | 2025-06-23T18:39:24Z | 62 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"rt_detr",
"object-detection",
"generated_from_trainer",
"base_model:PekingU/rtdetr_r50vd_coco_o365",
"base_model:finetune:PekingU/rtdetr_r50vd_coco_o365",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-06-22T17:06:34Z | ---
library_name: transformers
license: apache-2.0
base_model: PekingU/rtdetr_r50vd_coco_o365
tags:
- generated_from_trainer
model-index:
- name: see_ai_rt-detr_r50_4090_only_bbox_da
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# see_ai_rt-detr_r50_4090_only_bbox_da
This model is a fine-tuned version of [PekingU/rtdetr_r50vd_coco_o365](https://huggingface.co/PekingU/rtdetr_r50vd_coco_o365) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 17.4908
- Map: 0.2719
- Map 50: 0.4767
- Map 75: 0.2675
- Map Small: 0.0014
- Map Medium: 0.1362
- Map Large: 0.2914
- Mar 1: 0.3967
- Mar 10: 0.5302
- Mar 100: 0.5565
- Mar Small: 0.25
- Mar Medium: 0.286
- Mar Large: 0.5905
- Map Angiodysplasia: 0.1248
- Mar 100 Angiodysplasia: 0.4745
- Map Erosion: 0.2196
- Mar 100 Erosion: 0.4431
- Map Stenosis: 0.3631
- Mar 100 Stenosis: 0.8125
- Map Lymphangiectasia: 0.2679
- Mar 100 Lymphangiectasia: 0.46
- Map Lymph follicle: 0.1464
- Mar 100 Lymph follicle: 0.3646
- Map Smt: 0.3574
- Mar 100 Smt: 0.6607
- Map Polyp-like: 0.3597
- Mar 100 Polyp-like: 0.5619
- Map Bleeding: 0.3614
- Mar 100 Bleeding: 0.7
- Map Diverticulum: 0.0054
- Mar 100 Diverticulum: 0.3
- Map Erythema: 0.183
- Mar 100 Erythema: 0.6854
- Map Foreign body: 0.3705
- Mar 100 Foreign body: 0.564
- Map Vein: 0.5042
- Mar 100 Vein: 0.6511
## Model description
More information needed
## Intended uses & limitations
More information needed
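As a starting point, here is a hedged inference sketch using the standard `transformers` object-detection API (the repo id below matches this card's title; the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

ckpt = "goodcasper/see_ai_rt-detr_r50_4090_only_bbox_da"  # assumed repo id
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModelForObjectDetection.from_pretrained(ckpt)

image = Image.open("endoscopy_frame.png")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Rescale boxes to the original image size and keep confident detections.
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=[image.size[::-1]]
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```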
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 75
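For readers who want to reproduce the setup, the list above corresponds roughly to the following `TrainingArguments` sketch (only the fields stated here; everything else is left at its default, and the output directory is a placeholder):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="see_ai_rt-detr_r50_4090_only_bbox_da",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=1,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=300,
    num_train_epochs=75,
)
```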
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Angiodysplasia | Mar 100 Angiodysplasia | Map Erosion | Mar 100 Erosion | Map Stenosis | Mar 100 Stenosis | Map Lymphangiectasia | Mar 100 Lymphangiectasia | Map Lymph follicle | Mar 100 Lymph follicle | Map Smt | Mar 100 Smt | Map Polyp-like | Mar 100 Polyp-like | Map Bleeding | Mar 100 Bleeding | Map Diverticulum | Mar 100 Diverticulum | Map Erythema | Mar 100 Erythema | Map Foreign body | Mar 100 Foreign body | Map Vein | Mar 100 Vein |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------------:|:----------------------:|:-----------:|:---------------:|:------------:|:----------------:|:--------------------:|:------------------------:|:------------------:|:----------------------:|:-------:|:-----------:|:--------------:|:------------------:|:------------:|:----------------:|:----------------:|:--------------------:|:------------:|:----------------:|:----------------:|:--------------------:|:--------:|:------------:|
| 36.7375 | 1.0 | 2464 | 17.4288 | 0.0793 | 0.1612 | 0.0654 | 0.0249 | 0.0567 | 0.0861 | 0.2805 | 0.4859 | 0.5824 | 0.0481 | 0.4659 | 0.6207 | 0.0286 | 0.3955 | 0.1208 | 0.563 | 0.0509 | 0.6769 | 0.0299 | 0.5531 | 0.097 | 0.4817 | 0.0591 | 0.7275 | 0.2281 | 0.6115 | 0.0509 | 0.7074 | 0.0007 | 0.4667 | 0.0493 | 0.7009 | 0.1436 | 0.55 | 0.0924 | 0.5544 |
| 20.3177 | 2.0 | 4928 | 16.0153 | 0.1551 | 0.3063 | 0.1325 | 0.0243 | 0.1101 | 0.1722 | 0.3193 | 0.4999 | 0.5844 | 0.0667 | 0.4744 | 0.6213 | 0.0769 | 0.4618 | 0.1706 | 0.568 | 0.1688 | 0.7615 | 0.0358 | 0.575 | 0.1216 | 0.4798 | 0.1063 | 0.6843 | 0.2509 | 0.62 | 0.28 | 0.6975 | 0.0003 | 0.3333 | 0.0567 | 0.6982 | 0.2392 | 0.5748 | 0.3544 | 0.5583 |
| 17.8371 | 3.0 | 7392 | 15.7262 | 0.186 | 0.3547 | 0.1639 | 0.0306 | 0.1322 | 0.2062 | 0.3141 | 0.5003 | 0.5753 | 0.0519 | 0.4188 | 0.6184 | 0.0856 | 0.4225 | 0.2004 | 0.539 | 0.234 | 0.7712 | 0.042 | 0.5172 | 0.1217 | 0.462 | 0.2314 | 0.7275 | 0.2771 | 0.6146 | 0.3097 | 0.7148 | 0.0002 | 0.2333 | 0.0872 | 0.7411 | 0.2792 | 0.5824 | 0.3638 | 0.5777 |
| 16.53 | 4.0 | 9856 | 16.0175 | 0.1864 | 0.3703 | 0.1605 | 0.0276 | 0.1168 | 0.207 | 0.3069 | 0.5254 | 0.6018 | 0.0624 | 0.4752 | 0.6355 | 0.1053 | 0.4303 | 0.1896 | 0.5507 | 0.2278 | 0.8115 | 0.0383 | 0.5406 | 0.1227 | 0.4777 | 0.1999 | 0.6765 | 0.3009 | 0.6214 | 0.2885 | 0.7358 | 0.0037 | 0.45 | 0.0936 | 0.733 | 0.323 | 0.5954 | 0.3442 | 0.599 |
| 15.6505 | 5.0 | 12320 | 16.0482 | 0.2082 | 0.3992 | 0.1859 | 0.0486 | 0.1299 | 0.2247 | 0.3306 | 0.5355 | 0.6075 | 0.1354 | 0.4619 | 0.643 | 0.097 | 0.4674 | 0.2165 | 0.5676 | 0.3178 | 0.7885 | 0.0921 | 0.6094 | 0.1228 | 0.4956 | 0.3136 | 0.7529 | 0.3128 | 0.618 | 0.2606 | 0.7012 | 0.0005 | 0.35 | 0.0621 | 0.7188 | 0.3309 | 0.6029 | 0.372 | 0.6175 |
| 14.9889 | 6.0 | 14784 | 16.0843 | 0.2026 | 0.3928 | 0.1755 | 0.0399 | 0.1462 | 0.2216 | 0.3519 | 0.5283 | 0.6163 | 0.0905 | 0.498 | 0.6523 | 0.0892 | 0.4719 | 0.2386 | 0.5612 | 0.3312 | 0.7769 | 0.081 | 0.6266 | 0.129 | 0.487 | 0.2279 | 0.7627 | 0.3235 | 0.6132 | 0.267 | 0.7333 | 0.0031 | 0.4333 | 0.0819 | 0.7018 | 0.3317 | 0.5899 | 0.3276 | 0.6379 |
| 14.4486 | 7.0 | 17248 | 16.2556 | 0.2243 | 0.4179 | 0.211 | 0.0386 | 0.1455 | 0.2445 | 0.3473 | 0.5382 | 0.609 | 0.0931 | 0.4777 | 0.6496 | 0.1027 | 0.4787 | 0.2391 | 0.5612 | 0.4337 | 0.7885 | 0.0606 | 0.6062 | 0.1317 | 0.4634 | 0.2658 | 0.7569 | 0.3302 | 0.5927 | 0.2732 | 0.7099 | 0.0033 | 0.4167 | 0.0903 | 0.7286 | 0.3465 | 0.5798 | 0.4139 | 0.6252 |
| 13.9808 | 8.0 | 19712 | 16.2233 | 0.2261 | 0.4265 | 0.2116 | 0.0225 | 0.1453 | 0.2468 | 0.3696 | 0.5456 | 0.628 | 0.0905 | 0.4871 | 0.6653 | 0.1063 | 0.4663 | 0.24 | 0.5639 | 0.3817 | 0.7846 | 0.1127 | 0.6203 | 0.1294 | 0.4836 | 0.3068 | 0.7902 | 0.3259 | 0.6259 | 0.2842 | 0.7185 | 0.0106 | 0.5333 | 0.0933 | 0.7214 | 0.3356 | 0.5857 | 0.3863 | 0.6417 |
| 13.6172 | 9.0 | 22176 | 16.6177 | 0.219 | 0.4109 | 0.1995 | 0.035 | 0.1578 | 0.2383 | 0.3691 | 0.548 | 0.6113 | 0.0995 | 0.4822 | 0.6452 | 0.0883 | 0.5 | 0.2331 | 0.5584 | 0.3703 | 0.7904 | 0.0913 | 0.5984 | 0.1505 | 0.4833 | 0.274 | 0.8137 | 0.3025 | 0.6025 | 0.3182 | 0.7 | 0.0084 | 0.4167 | 0.0716 | 0.6973 | 0.3243 | 0.5576 | 0.3958 | 0.6175 |
| 13.2263 | 10.0 | 24640 | 16.8541 | 0.2235 | 0.4183 | 0.2065 | 0.038 | 0.1459 | 0.2459 | 0.3698 | 0.5526 | 0.616 | 0.0873 | 0.474 | 0.6576 | 0.0837 | 0.491 | 0.2204 | 0.5383 | 0.3477 | 0.7808 | 0.0893 | 0.6172 | 0.1206 | 0.4823 | 0.3469 | 0.8039 | 0.3296 | 0.5972 | 0.2981 | 0.7185 | 0.0129 | 0.4833 | 0.1219 | 0.6964 | 0.3233 | 0.542 | 0.388 | 0.6408 |
| 12.9138 | 11.0 | 27104 | 16.9201 | 0.2179 | 0.4082 | 0.1933 | 0.0321 | 0.1346 | 0.2426 | 0.3684 | 0.5446 | 0.6176 | 0.0735 | 0.4892 | 0.6578 | 0.0951 | 0.4888 | 0.2127 | 0.55 | 0.3637 | 0.8038 | 0.07 | 0.5953 | 0.1103 | 0.4666 | 0.3145 | 0.7137 | 0.315 | 0.5918 | 0.2899 | 0.737 | 0.0017 | 0.55 | 0.1163 | 0.7259 | 0.3202 | 0.5529 | 0.4052 | 0.635 |
| 12.6214 | 12.0 | 29568 | 16.9798 | 0.2248 | 0.4284 | 0.2014 | 0.0265 | 0.1331 | 0.2493 | 0.3567 | 0.5339 | 0.5992 | 0.0852 | 0.4716 | 0.6401 | 0.0849 | 0.4449 | 0.2083 | 0.5386 | 0.3916 | 0.7962 | 0.1846 | 0.6078 | 0.1425 | 0.4685 | 0.2674 | 0.7529 | 0.3087 | 0.5825 | 0.2753 | 0.7123 | 0.0027 | 0.4333 | 0.1171 | 0.7 | 0.3215 | 0.5391 | 0.3933 | 0.6146 |
| 12.3319 | 13.0 | 32032 | 17.3638 | 0.2203 | 0.4075 | 0.1991 | 0.0268 | 0.1306 | 0.2429 | 0.3494 | 0.5344 | 0.5951 | 0.063 | 0.4347 | 0.6398 | 0.0752 | 0.4764 | 0.214 | 0.5246 | 0.3817 | 0.8019 | 0.0994 | 0.5531 | 0.1169 | 0.4408 | 0.3181 | 0.7529 | 0.3079 | 0.5668 | 0.296 | 0.721 | 0.0029 | 0.4667 | 0.1283 | 0.6946 | 0.307 | 0.5244 | 0.3956 | 0.6175 |
| 12.0997 | 14.0 | 34496 | 17.3007 | 0.2159 | 0.4033 | 0.1931 | 0.034 | 0.1221 | 0.2366 | 0.3544 | 0.5315 | 0.5978 | 0.0677 | 0.4412 | 0.6368 | 0.0715 | 0.4326 | 0.1871 | 0.524 | 0.4016 | 0.8077 | 0.0704 | 0.5547 | 0.1378 | 0.466 | 0.2739 | 0.7235 | 0.3052 | 0.5561 | 0.2765 | 0.7025 | 0.002 | 0.5333 | 0.1458 | 0.6982 | 0.3191 | 0.5685 | 0.4002 | 0.6068 |
| 11.8886 | 15.0 | 36960 | 17.3297 | 0.2179 | 0.407 | 0.2028 | 0.0337 | 0.1357 | 0.2392 | 0.3508 | 0.5323 | 0.5952 | 0.0921 | 0.4713 | 0.6312 | 0.0598 | 0.4539 | 0.2118 | 0.5044 | 0.4114 | 0.8115 | 0.0653 | 0.5906 | 0.1455 | 0.4533 | 0.2476 | 0.7686 | 0.3284 | 0.5656 | 0.2914 | 0.6889 | 0.0011 | 0.45 | 0.1148 | 0.6759 | 0.3296 | 0.5571 | 0.4081 | 0.6223 |
| 11.6726 | 16.0 | 39424 | 17.3828 | 0.2207 | 0.4132 | 0.2076 | 0.0163 | 0.1598 | 0.2383 | 0.3757 | 0.5485 | 0.595 | 0.0444 | 0.3999 | 0.6425 | 0.076 | 0.4416 | 0.2027 | 0.5409 | 0.4073 | 0.7942 | 0.1127 | 0.5297 | 0.1192 | 0.4213 | 0.281 | 0.7627 | 0.3042 | 0.5169 | 0.3189 | 0.7025 | 0.0039 | 0.65 | 0.1225 | 0.6732 | 0.2836 | 0.4929 | 0.4167 | 0.6146 |
| 11.4876 | 17.0 | 41888 | 17.4328 | 0.2231 | 0.4086 | 0.2195 | 0.0222 | 0.1354 | 0.2419 | 0.3683 | 0.5259 | 0.5772 | 0.0714 | 0.4245 | 0.6192 | 0.0753 | 0.4438 | 0.2135 | 0.5135 | 0.4319 | 0.7981 | 0.098 | 0.5609 | 0.1229 | 0.4523 | 0.2941 | 0.7451 | 0.3087 | 0.5682 | 0.3022 | 0.6975 | 0.0011 | 0.35 | 0.1193 | 0.6446 | 0.3166 | 0.5315 | 0.3935 | 0.6214 |
| 11.2952 | 18.0 | 44352 | 17.6459 | 0.221 | 0.4029 | 0.2026 | 0.0232 | 0.1308 | 0.2459 | 0.3631 | 0.5436 | 0.5922 | 0.0751 | 0.4237 | 0.6314 | 0.0589 | 0.4427 | 0.1924 | 0.5231 | 0.3853 | 0.7846 | 0.0631 | 0.525 | 0.1373 | 0.4316 | 0.2884 | 0.7529 | 0.2957 | 0.5417 | 0.3202 | 0.7025 | 0.0134 | 0.5833 | 0.1616 | 0.6554 | 0.2967 | 0.5235 | 0.4387 | 0.6398 |
| 11.1206 | 19.0 | 46816 | 17.6613 | 0.2119 | 0.3963 | 0.1975 | 0.0167 | 0.1306 | 0.2352 | 0.3559 | 0.5226 | 0.5686 | 0.0296 | 0.4262 | 0.6093 | 0.0605 | 0.4157 | 0.1994 | 0.5062 | 0.3572 | 0.7788 | 0.0695 | 0.5266 | 0.1262 | 0.4247 | 0.2347 | 0.7412 | 0.3235 | 0.5437 | 0.323 | 0.7 | 0.0201 | 0.3833 | 0.1301 | 0.6786 | 0.2987 | 0.5261 | 0.4001 | 0.5981 |
| 10.9505 | 20.0 | 49280 | 17.7590 | 0.2269 | 0.4161 | 0.2232 | 0.0132 | 0.1455 | 0.2464 | 0.3639 | 0.5183 | 0.5646 | 0.0466 | 0.439 | 0.6052 | 0.0639 | 0.4258 | 0.1908 | 0.4833 | 0.4264 | 0.7923 | 0.1155 | 0.5641 | 0.1078 | 0.4096 | 0.2965 | 0.7608 | 0.3221 | 0.5434 | 0.2961 | 0.6951 | 0.0394 | 0.3 | 0.1535 | 0.6607 | 0.3133 | 0.5399 | 0.3977 | 0.6 |
| 10.7857 | 21.0 | 51744 | 18.1314 | 0.2207 | 0.4115 | 0.2129 | 0.0325 | 0.1255 | 0.2482 | 0.3641 | 0.524 | 0.5674 | 0.037 | 0.4027 | 0.6057 | 0.0551 | 0.436 | 0.1841 | 0.4648 | 0.4 | 0.8077 | 0.123 | 0.5453 | 0.1363 | 0.429 | 0.2892 | 0.698 | 0.2919 | 0.5132 | 0.3189 | 0.6691 | 0.0032 | 0.4667 | 0.1499 | 0.6402 | 0.3061 | 0.5395 | 0.3905 | 0.599 |
| 10.629 | 22.0 | 54208 | 17.9638 | 0.2257 | 0.4097 | 0.215 | 0.0378 | 0.1458 | 0.2467 | 0.3662 | 0.5151 | 0.5618 | 0.0704 | 0.4085 | 0.6055 | 0.0647 | 0.4494 | 0.1818 | 0.4815 | 0.4016 | 0.7904 | 0.1436 | 0.5609 | 0.1299 | 0.4263 | 0.3334 | 0.7431 | 0.3007 | 0.5307 | 0.2869 | 0.6778 | 0.0077 | 0.3 | 0.1911 | 0.6857 | 0.2945 | 0.5185 | 0.3718 | 0.5777 |
| 10.4715 | 23.0 | 56672 | 17.6242 | 0.2303 | 0.4311 | 0.2137 | 0.0174 | 0.1342 | 0.2574 | 0.3819 | 0.5384 | 0.583 | 0.0667 | 0.3883 | 0.626 | 0.065 | 0.3921 | 0.1974 | 0.4947 | 0.4398 | 0.8058 | 0.143 | 0.5125 | 0.1272 | 0.4226 | 0.3159 | 0.7275 | 0.3216 | 0.5532 | 0.2732 | 0.684 | 0.0088 | 0.6333 | 0.177 | 0.6741 | 0.2948 | 0.5029 | 0.4005 | 0.5932 |
| 10.3338 | 24.0 | 59136 | 18.1177 | 0.2262 | 0.4256 | 0.2067 | 0.0258 | 0.1321 | 0.2489 | 0.3616 | 0.5109 | 0.55 | 0.037 | 0.3944 | 0.5887 | 0.0541 | 0.3944 | 0.1753 | 0.4367 | 0.4345 | 0.7981 | 0.1748 | 0.4891 | 0.1066 | 0.4025 | 0.3387 | 0.7294 | 0.2987 | 0.5107 | 0.2939 | 0.6728 | 0.0044 | 0.4333 | 0.1924 | 0.6455 | 0.2576 | 0.4996 | 0.3835 | 0.5874 |
| 10.1916 | 25.0 | 61600 | 17.8994 | 0.2282 | 0.4232 | 0.2116 | 0.0247 | 0.1446 | 0.2506 | 0.3693 | 0.5227 | 0.5596 | 0.063 | 0.3813 | 0.6015 | 0.0732 | 0.3787 | 0.186 | 0.466 | 0.3678 | 0.8135 | 0.1846 | 0.5516 | 0.105 | 0.4038 | 0.3647 | 0.7314 | 0.3092 | 0.5124 | 0.3004 | 0.658 | 0.0166 | 0.45 | 0.1428 | 0.642 | 0.2905 | 0.5277 | 0.3981 | 0.5796 |
| 10.0803 | 26.0 | 64064 | 17.5562 | 0.2306 | 0.4373 | 0.2075 | 0.017 | 0.1563 | 0.2545 | 0.3493 | 0.5088 | 0.551 | 0.0556 | 0.416 | 0.591 | 0.0507 | 0.4146 | 0.2201 | 0.4649 | 0.4081 | 0.8058 | 0.2054 | 0.5391 | 0.162 | 0.421 | 0.2929 | 0.7863 | 0.3145 | 0.5366 | 0.2611 | 0.684 | 0.0012 | 0.1833 | 0.1509 | 0.6652 | 0.2963 | 0.5155 | 0.4045 | 0.5951 |
| 9.9708 | 27.0 | 66528 | 17.9700 | 0.225 | 0.4194 | 0.2113 | 0.0181 | 0.1326 | 0.2503 | 0.3612 | 0.5003 | 0.5353 | 0.0407 | 0.3732 | 0.579 | 0.0628 | 0.373 | 0.1765 | 0.434 | 0.3772 | 0.8115 | 0.2106 | 0.5016 | 0.1366 | 0.4018 | 0.2915 | 0.7 | 0.3158 | 0.5321 | 0.2934 | 0.6506 | 0.0059 | 0.3 | 0.1515 | 0.6545 | 0.2781 | 0.5 | 0.4002 | 0.5641 |
| 9.854 | 28.0 | 68992 | 18.0467 | 0.237 | 0.4345 | 0.2287 | 0.0142 | 0.147 | 0.262 | 0.3641 | 0.4925 | 0.5252 | 0.0407 | 0.3728 | 0.5611 | 0.0628 | 0.3101 | 0.1745 | 0.4308 | 0.4756 | 0.8019 | 0.2357 | 0.5109 | 0.1288 | 0.3835 | 0.3471 | 0.7216 | 0.2968 | 0.4961 | 0.2537 | 0.6383 | 0.0028 | 0.2833 | 0.1737 | 0.6187 | 0.2863 | 0.5244 | 0.4058 | 0.5825 |
| 9.7489 | 29.0 | 71456 | 17.6774 | 0.2387 | 0.4427 | 0.2208 | 0.0088 | 0.1536 | 0.2596 | 0.3649 | 0.5019 | 0.5401 | 0.0481 | 0.3907 | 0.576 | 0.0676 | 0.3753 | 0.1884 | 0.4681 | 0.4043 | 0.8019 | 0.2346 | 0.5484 | 0.1473 | 0.4097 | 0.3545 | 0.7059 | 0.306 | 0.4986 | 0.325 | 0.6617 | 0.0033 | 0.2833 | 0.1348 | 0.633 | 0.2953 | 0.521 | 0.4036 | 0.5738 |
| 9.635 | 30.0 | 73920 | 17.8968 | 0.2251 | 0.4255 | 0.2137 | 0.02 | 0.1438 | 0.2494 | 0.355 | 0.493 | 0.526 | 0.0556 | 0.3609 | 0.5651 | 0.057 | 0.3236 | 0.1929 | 0.4391 | 0.3518 | 0.7692 | 0.2207 | 0.5203 | 0.1317 | 0.3852 | 0.3017 | 0.7176 | 0.3076 | 0.5107 | 0.2939 | 0.6519 | 0.0007 | 0.2667 | 0.1519 | 0.642 | 0.2864 | 0.5105 | 0.4052 | 0.5748 |
| 9.5233 | 31.0 | 76384 | 18.0248 | 0.2168 | 0.4154 | 0.1972 | 0.0315 | 0.1371 | 0.2386 | 0.3649 | 0.5082 | 0.5408 | 0.0444 | 0.3672 | 0.5804 | 0.0582 | 0.3202 | 0.2163 | 0.4427 | 0.3304 | 0.7635 | 0.2273 | 0.5031 | 0.1432 | 0.4 | 0.268 | 0.6863 | 0.2917 | 0.5124 | 0.2976 | 0.658 | 0.0039 | 0.55 | 0.1237 | 0.6286 | 0.2705 | 0.4773 | 0.3705 | 0.5476 |
| 9.4036 | 32.0 | 78848 | 18.2855 | 0.2297 | 0.4319 | 0.2113 | 0.021 | 0.1516 | 0.2505 | 0.3649 | 0.4933 | 0.525 | 0.0444 | 0.378 | 0.5602 | 0.0857 | 0.3483 | 0.1821 | 0.4226 | 0.3646 | 0.7962 | 0.25 | 0.5078 | 0.1364 | 0.366 | 0.2999 | 0.698 | 0.3041 | 0.4808 | 0.3018 | 0.6778 | 0.0033 | 0.3167 | 0.1754 | 0.6268 | 0.2802 | 0.5059 | 0.3722 | 0.5534 |
| 9.3106 | 33.0 | 81312 | 18.3123 | 0.226 | 0.4156 | 0.2128 | 0.0071 | 0.1517 | 0.2513 | 0.35 | 0.4849 | 0.5177 | 0.0296 | 0.384 | 0.5542 | 0.0662 | 0.3079 | 0.1798 | 0.4066 | 0.3859 | 0.7788 | 0.2607 | 0.5031 | 0.1216 | 0.3675 | 0.2769 | 0.7157 | 0.3122 | 0.5051 | 0.3001 | 0.6494 | 0.0009 | 0.2833 | 0.1659 | 0.6134 | 0.2873 | 0.5218 | 0.3551 | 0.5602 |
| 9.2202 | 34.0 | 83776 | 17.9811 | 0.2406 | 0.4421 | 0.2314 | 0.0261 | 0.1501 | 0.2635 | 0.3669 | 0.4963 | 0.5255 | 0.0492 | 0.3794 | 0.5629 | 0.0736 | 0.3404 | 0.2004 | 0.427 | 0.4003 | 0.7885 | 0.2564 | 0.4969 | 0.1352 | 0.3674 | 0.3663 | 0.702 | 0.3228 | 0.5017 | 0.296 | 0.6654 | 0.0015 | 0.35 | 0.1508 | 0.6286 | 0.278 | 0.4891 | 0.4056 | 0.5485 |
| 9.1235 | 35.0 | 86240 | 18.1391 | 0.2334 | 0.4348 | 0.2164 | 0.0136 | 0.1584 | 0.2589 | 0.3611 | 0.4928 | 0.5222 | 0.0492 | 0.3608 | 0.5627 | 0.0919 | 0.3247 | 0.1922 | 0.423 | 0.3403 | 0.7769 | 0.267 | 0.5063 | 0.1311 | 0.3742 | 0.32 | 0.7137 | 0.3139 | 0.5079 | 0.2973 | 0.637 | 0.0148 | 0.3667 | 0.1643 | 0.6009 | 0.2823 | 0.4756 | 0.386 | 0.5592 |
| 9.0386 | 36.0 | 88704 | 18.2431 | 0.2358 | 0.4347 | 0.2275 | 0.0154 | 0.1486 | 0.2604 | 0.3655 | 0.4861 | 0.5143 | 0.0455 | 0.3666 | 0.5536 | 0.0657 | 0.3146 | 0.1886 | 0.4133 | 0.4145 | 0.7865 | 0.2599 | 0.4906 | 0.1247 | 0.3507 | 0.3053 | 0.7255 | 0.3016 | 0.4789 | 0.3202 | 0.6481 | 0.0132 | 0.3333 | 0.1583 | 0.5732 | 0.2863 | 0.4853 | 0.391 | 0.5718 |
| 8.935 | 37.0 | 91168 | 18.1761 | 0.2311 | 0.4302 | 0.2174 | 0.0092 | 0.1519 | 0.2574 | 0.3617 | 0.4867 | 0.5167 | 0.037 | 0.3701 | 0.5553 | 0.0652 | 0.2978 | 0.1889 | 0.4132 | 0.3767 | 0.7577 | 0.2676 | 0.4859 | 0.1411 | 0.3715 | 0.312 | 0.7176 | 0.305 | 0.4817 | 0.2671 | 0.6494 | 0.0015 | 0.3667 | 0.1766 | 0.6375 | 0.2746 | 0.4655 | 0.3966 | 0.5553 |
| 8.8224 | 38.0 | 93632 | 17.9488 | 0.2472 | 0.4556 | 0.2285 | 0.0234 | 0.1545 | 0.2731 | 0.3474 | 0.4852 | 0.5166 | 0.0407 | 0.3463 | 0.5587 | 0.0709 | 0.3079 | 0.2007 | 0.444 | 0.4104 | 0.7731 | 0.2948 | 0.4797 | 0.1513 | 0.3799 | 0.3279 | 0.7059 | 0.3087 | 0.4808 | 0.2911 | 0.6494 | 0.0006 | 0.2833 | 0.2113 | 0.6357 | 0.2862 | 0.4937 | 0.4128 | 0.566 |
| 8.7433 | 39.0 | 96096 | 18.1127 | 0.2357 | 0.4324 | 0.2243 | 0.0235 | 0.1369 | 0.2616 | 0.3564 | 0.4798 | 0.5089 | 0.0407 | 0.343 | 0.548 | 0.071 | 0.3022 | 0.1846 | 0.4162 | 0.3954 | 0.7577 | 0.2799 | 0.5047 | 0.1232 | 0.3386 | 0.2932 | 0.702 | 0.323 | 0.5054 | 0.3176 | 0.6222 | 0.0019 | 0.2667 | 0.1799 | 0.6429 | 0.2762 | 0.4828 | 0.3821 | 0.566 |
| 8.6435 | 40.0 | 98560 | 18.0182 | 0.2409 | 0.4364 | 0.2275 | 0.0168 | 0.1467 | 0.2657 | 0.3467 | 0.466 | 0.4972 | 0.0481 | 0.3267 | 0.5367 | 0.0525 | 0.2854 | 0.1843 | 0.4091 | 0.4296 | 0.7827 | 0.2824 | 0.4953 | 0.142 | 0.3505 | 0.3375 | 0.7039 | 0.3094 | 0.4837 | 0.3214 | 0.6383 | 0.0003 | 0.1667 | 0.1753 | 0.6223 | 0.2759 | 0.4651 | 0.38 | 0.5631 |
| 8.5531 | 41.0 | 101024 | 18.2803 | 0.2239 | 0.4147 | 0.2079 | 0.0114 | 0.1387 | 0.2482 | 0.3502 | 0.4707 | 0.4972 | 0.0333 | 0.2693 | 0.5374 | 0.0416 | 0.2685 | 0.171 | 0.4117 | 0.4009 | 0.7846 | 0.2388 | 0.4734 | 0.1261 | 0.334 | 0.2777 | 0.6706 | 0.2877 | 0.4625 | 0.2865 | 0.6321 | 0.0036 | 0.3 | 0.197 | 0.6143 | 0.2791 | 0.4693 | 0.3766 | 0.5447 |
| 8.4525 | 42.0 | 103488 | 17.8710 | 0.236 | 0.4381 | 0.2212 | 0.0159 | 0.133 | 0.2638 | 0.3574 | 0.4849 | 0.509 | 0.0418 | 0.3589 | 0.55 | 0.0499 | 0.2798 | 0.1742 | 0.4171 | 0.438 | 0.7827 | 0.2867 | 0.4828 | 0.1365 | 0.3412 | 0.2918 | 0.7294 | 0.3035 | 0.5 | 0.2964 | 0.6358 | 0.0065 | 0.2833 | 0.1907 | 0.6304 | 0.2792 | 0.458 | 0.3792 | 0.568 |
| 8.3652 | 43.0 | 105952 | 18.3198 | 0.232 | 0.4284 | 0.2225 | 0.0121 | 0.143 | 0.2564 | 0.3369 | 0.4608 | 0.485 | 0.0407 | 0.3412 | 0.519 | 0.0521 | 0.2551 | 0.1652 | 0.382 | 0.402 | 0.7519 | 0.268 | 0.4922 | 0.1337 | 0.3298 | 0.3155 | 0.6647 | 0.3096 | 0.4904 | 0.3207 | 0.6444 | 0.0003 | 0.2167 | 0.1729 | 0.6 | 0.2684 | 0.4555 | 0.3756 | 0.5379 |
| 8.2996 | 44.0 | 108416 | 18.0853 | 0.2305 | 0.426 | 0.2116 | 0.0068 | 0.1437 | 0.2573 | 0.3471 | 0.4778 | 0.504 | 0.0296 | 0.3573 | 0.5393 | 0.0351 | 0.2933 | 0.1818 | 0.3947 | 0.4216 | 0.7538 | 0.2652 | 0.4609 | 0.1433 | 0.3325 | 0.3083 | 0.7294 | 0.3166 | 0.4918 | 0.2773 | 0.6074 | 0.0007 | 0.3 | 0.1635 | 0.6268 | 0.2698 | 0.4815 | 0.3829 | 0.5757 |
| 8.2003 | 45.0 | 110880 | 18.2149 | 0.2436 | 0.4412 | 0.2367 | 0.0267 | 0.1446 | 0.2703 | 0.3522 | 0.4739 | 0.5018 | 0.0529 | 0.3437 | 0.5393 | 0.0422 | 0.2742 | 0.1787 | 0.3991 | 0.4732 | 0.7788 | 0.3367 | 0.4766 | 0.1473 | 0.347 | 0.2871 | 0.702 | 0.2992 | 0.471 | 0.3106 | 0.6198 | 0.0031 | 0.3 | 0.1776 | 0.6304 | 0.2886 | 0.479 | 0.3792 | 0.5437 |
| 8.1182 | 46.0 | 113344 | 18.1238 | 0.2429 | 0.4392 | 0.2343 | 0.027 | 0.144 | 0.2676 | 0.3603 | 0.4891 | 0.5139 | 0.0407 | 0.3569 | 0.5501 | 0.0409 | 0.264 | 0.183 | 0.4084 | 0.4434 | 0.7712 | 0.2948 | 0.4828 | 0.1392 | 0.336 | 0.3265 | 0.7137 | 0.3052 | 0.4741 | 0.2991 | 0.6556 | 0.0018 | 0.4333 | 0.2239 | 0.6179 | 0.2747 | 0.4643 | 0.3828 | 0.5456 |
| 8.0393 | 47.0 | 115808 | 18.2298 | 0.2369 | 0.4248 | 0.2305 | 0.029 | 0.1484 | 0.2627 | 0.3534 | 0.4705 | 0.4935 | 0.0444 | 0.3275 | 0.5316 | 0.047 | 0.2674 | 0.1752 | 0.3881 | 0.4261 | 0.7462 | 0.2871 | 0.4812 | 0.1446 | 0.334 | 0.3008 | 0.698 | 0.3007 | 0.4752 | 0.3198 | 0.6321 | 0.0011 | 0.25 | 0.1825 | 0.6196 | 0.2781 | 0.4752 | 0.3792 | 0.5553 |
| 7.9654 | 48.0 | 118272 | 18.2678 | 0.2413 | 0.4443 | 0.231 | 0.0269 | 0.1492 | 0.2656 | 0.3486 | 0.4593 | 0.4799 | 0.0444 | 0.3418 | 0.5155 | 0.0463 | 0.2506 | 0.1914 | 0.3911 | 0.4243 | 0.7346 | 0.2905 | 0.4844 | 0.1541 | 0.3297 | 0.3161 | 0.6745 | 0.3005 | 0.458 | 0.3112 | 0.6173 | 0.0011 | 0.2 | 0.1989 | 0.617 | 0.2845 | 0.4609 | 0.3767 | 0.5408 |
| 7.8905 | 49.0 | 120736 | 18.1783 | 0.2383 | 0.4355 | 0.2229 | 0.0223 | 0.14 | 0.2641 | 0.3562 | 0.4788 | 0.5039 | 0.0481 | 0.3804 | 0.54 | 0.0699 | 0.2966 | 0.1827 | 0.4004 | 0.3947 | 0.7692 | 0.2896 | 0.4781 | 0.1559 | 0.347 | 0.3502 | 0.7196 | 0.2955 | 0.4777 | 0.2761 | 0.6247 | 0.0016 | 0.3 | 0.1653 | 0.6009 | 0.288 | 0.4672 | 0.3901 | 0.565 |
| 7.798 | 50.0 | 123200 | 18.1073 | 0.2351 | 0.4266 | 0.2281 | 0.0203 | 0.1493 | 0.2581 | 0.3534 | 0.4704 | 0.4944 | 0.0455 | 0.364 | 0.5281 | 0.0466 | 0.2607 | 0.1816 | 0.4027 | 0.4407 | 0.7615 | 0.3135 | 0.4766 | 0.1507 | 0.334 | 0.2962 | 0.7039 | 0.302 | 0.4848 | 0.2924 | 0.5988 | 0.001 | 0.3 | 0.1496 | 0.5991 | 0.2726 | 0.458 | 0.3748 | 0.5524 |
| 7.7137 | 51.0 | 125664 | 18.3562 | 0.2369 | 0.4348 | 0.2277 | 0.0244 | 0.1428 | 0.2637 | 0.3411 | 0.4516 | 0.4701 | 0.037 | 0.333 | 0.5052 | 0.0454 | 0.2528 | 0.1888 | 0.3849 | 0.4385 | 0.7673 | 0.2894 | 0.4547 | 0.1467 | 0.318 | 0.3107 | 0.6667 | 0.2966 | 0.4552 | 0.2931 | 0.5963 | 0.0003 | 0.15 | 0.1851 | 0.5982 | 0.2748 | 0.4513 | 0.3741 | 0.5456 |
| 7.6378 | 52.0 | 128128 | 18.1310 | 0.2316 | 0.4237 | 0.2153 | 0.014 | 0.1402 | 0.2578 | 0.3589 | 0.4615 | 0.4814 | 0.0407 | 0.3406 | 0.514 | 0.0431 | 0.2528 | 0.194 | 0.3941 | 0.4249 | 0.7346 | 0.2994 | 0.4578 | 0.1458 | 0.3083 | 0.2773 | 0.6647 | 0.2939 | 0.4586 | 0.2861 | 0.5963 | 0.0013 | 0.3167 | 0.1622 | 0.5929 | 0.2717 | 0.4563 | 0.3791 | 0.5437 |
| 7.5504 | 53.0 | 130592 | 18.1597 | 0.2361 | 0.432 | 0.2178 | 0.0099 | 0.1364 | 0.2627 | 0.3492 | 0.4503 | 0.4695 | 0.0333 | 0.3479 | 0.5042 | 0.0401 | 0.2483 | 0.1854 | 0.3758 | 0.4306 | 0.7442 | 0.3121 | 0.4531 | 0.1513 | 0.3165 | 0.3049 | 0.6647 | 0.3042 | 0.4693 | 0.28 | 0.6012 | 0.0009 | 0.1667 | 0.1673 | 0.5938 | 0.2769 | 0.4496 | 0.3791 | 0.5505 |
| 7.462 | 54.0 | 133056 | 18.0485 | 0.2416 | 0.4392 | 0.2287 | 0.0196 | 0.1492 | 0.268 | 0.3471 | 0.4506 | 0.468 | 0.0444 | 0.3464 | 0.503 | 0.0478 | 0.2449 | 0.1883 | 0.3843 | 0.4375 | 0.7346 | 0.2921 | 0.4594 | 0.1551 | 0.3185 | 0.3259 | 0.6686 | 0.3026 | 0.4577 | 0.2975 | 0.6062 | 0.0006 | 0.1833 | 0.1961 | 0.5946 | 0.2759 | 0.4424 | 0.3802 | 0.5214 |
| 7.3848 | 55.0 | 135520 | 18.1110 | 0.2396 | 0.4388 | 0.2202 | 0.0129 | 0.1464 | 0.2648 | 0.3357 | 0.452 | 0.4742 | 0.0407 | 0.3662 | 0.5064 | 0.0437 | 0.2494 | 0.1846 | 0.3831 | 0.4269 | 0.7462 | 0.3141 | 0.4703 | 0.1616 | 0.3303 | 0.3097 | 0.6392 | 0.2947 | 0.4617 | 0.3014 | 0.6235 | 0.0003 | 0.1833 | 0.182 | 0.6107 | 0.2848 | 0.4487 | 0.3709 | 0.5437 |
| 7.3162 | 56.0 | 137984 | 17.9914 | 0.2433 | 0.4424 | 0.2338 | 0.0096 | 0.1414 | 0.2711 | 0.3425 | 0.4576 | 0.4743 | 0.0296 | 0.3316 | 0.5091 | 0.0477 | 0.2472 | 0.1926 | 0.3872 | 0.4689 | 0.7558 | 0.2923 | 0.475 | 0.1554 | 0.326 | 0.3248 | 0.6725 | 0.2946 | 0.4566 | 0.2856 | 0.5963 | 0.0003 | 0.2 | 0.1876 | 0.5982 | 0.2754 | 0.4353 | 0.3941 | 0.5417 |
| 7.232 | 57.0 | 140448 | 18.0469 | 0.2385 | 0.4333 | 0.2214 | 0.0125 | 0.147 | 0.2651 | 0.3451 | 0.4529 | 0.473 | 0.0296 | 0.3585 | 0.506 | 0.0361 | 0.2371 | 0.188 | 0.3932 | 0.4271 | 0.7308 | 0.3078 | 0.4719 | 0.1656 | 0.3359 | 0.2963 | 0.6431 | 0.2876 | 0.4645 | 0.3159 | 0.6037 | 0.0003 | 0.2 | 0.1863 | 0.5982 | 0.2811 | 0.4538 | 0.3701 | 0.5437 |
| 7.1484 | 58.0 | 142912 | 18.0633 | 0.2361 | 0.432 | 0.221 | 0.014 | 0.1429 | 0.2628 | 0.3499 | 0.4586 | 0.4761 | 0.0333 | 0.3569 | 0.5071 | 0.0411 | 0.2427 | 0.1961 | 0.3859 | 0.4318 | 0.75 | 0.3074 | 0.4703 | 0.167 | 0.3319 | 0.2862 | 0.6686 | 0.2964 | 0.4715 | 0.2932 | 0.6136 | 0.0004 | 0.2 | 0.1596 | 0.6027 | 0.2764 | 0.4357 | 0.3775 | 0.5408 |
| 7.0693 | 59.0 | 145376 | 18.0175 | 0.2363 | 0.4356 | 0.2159 | 0.0186 | 0.1464 | 0.2598 | 0.3441 | 0.4543 | 0.473 | 0.0455 | 0.3556 | 0.5045 | 0.0461 | 0.2449 | 0.1872 | 0.3772 | 0.4297 | 0.7288 | 0.3042 | 0.4672 | 0.1687 | 0.3337 | 0.2802 | 0.6686 | 0.2892 | 0.4614 | 0.3064 | 0.6062 | 0.0003 | 0.1833 | 0.1694 | 0.6089 | 0.2834 | 0.4555 | 0.3706 | 0.5398 |
| 7.0055 | 60.0 | 147840 | 18.0684 | 0.239 | 0.4377 | 0.2215 | 0.0116 | 0.1438 | 0.2641 | 0.3484 | 0.4486 | 0.4682 | 0.0307 | 0.3403 | 0.5039 | 0.0365 | 0.2449 | 0.192 | 0.3829 | 0.4468 | 0.7423 | 0.3178 | 0.4719 | 0.1638 | 0.3284 | 0.3028 | 0.6608 | 0.2946 | 0.4713 | 0.3004 | 0.6062 | 0.0001 | 0.1333 | 0.164 | 0.6134 | 0.2752 | 0.4391 | 0.3742 | 0.5243 |
| 6.9295 | 61.0 | 150304 | 18.0705 | 0.2339 | 0.4312 | 0.217 | 0.0238 | 0.1421 | 0.2594 | 0.35 | 0.4508 | 0.4696 | 0.0492 | 0.3508 | 0.5023 | 0.0382 | 0.2449 | 0.2028 | 0.3899 | 0.4121 | 0.7442 | 0.3024 | 0.4719 | 0.1637 | 0.325 | 0.2891 | 0.6392 | 0.2917 | 0.4487 | 0.278 | 0.6025 | 0.0005 | 0.1833 | 0.1667 | 0.6009 | 0.2853 | 0.4542 | 0.3766 | 0.5301 |
| 6.836 | 62.0 | 152768 | 17.8876 | 0.2373 | 0.4352 | 0.2258 | 0.0241 | 0.1434 | 0.263 | 0.3509 | 0.4552 | 0.4739 | 0.0492 | 0.3558 | 0.5067 | 0.031 | 0.2427 | 0.1931 | 0.3907 | 0.4343 | 0.7288 | 0.2954 | 0.4563 | 0.1647 | 0.3238 | 0.3174 | 0.6824 | 0.2987 | 0.4721 | 0.2771 | 0.5988 | 0.0005 | 0.2167 | 0.1755 | 0.6036 | 0.2839 | 0.4395 | 0.3764 | 0.5311 |
| 6.7797 | 63.0 | 155232 | 17.8569 | 0.2412 | 0.4404 | 0.2247 | 0.0109 | 0.1422 | 0.2684 | 0.3444 | 0.4544 | 0.4733 | 0.0407 | 0.3556 | 0.5061 | 0.0359 | 0.2416 | 0.2057 | 0.4028 | 0.4436 | 0.7212 | 0.3142 | 0.475 | 0.1661 | 0.3256 | 0.3072 | 0.6647 | 0.2946 | 0.4594 | 0.2905 | 0.6049 | 0.0004 | 0.1833 | 0.1796 | 0.6125 | 0.2829 | 0.4525 | 0.3742 | 0.5359 |
| 6.6945 | 64.0 | 157696 | 17.8705 | 0.2341 | 0.429 | 0.2214 | 0.0152 | 0.1431 | 0.259 | 0.3491 | 0.4491 | 0.4673 | 0.0407 | 0.3551 | 0.5003 | 0.0291 | 0.2292 | 0.197 | 0.3872 | 0.4301 | 0.7288 | 0.2979 | 0.475 | 0.1587 | 0.3227 | 0.2963 | 0.6569 | 0.2928 | 0.458 | 0.2794 | 0.5988 | 0.001 | 0.1667 | 0.1674 | 0.5991 | 0.2811 | 0.4378 | 0.3786 | 0.5476 |
| 6.6298 | 65.0 | 160160 | 17.9224 | 0.2339 | 0.4342 | 0.2141 | 0.0123 | 0.1455 | 0.2581 | 0.3438 | 0.4487 | 0.4685 | 0.037 | 0.3528 | 0.5003 | 0.0379 | 0.2404 | 0.1979 | 0.3911 | 0.4306 | 0.7115 | 0.3016 | 0.475 | 0.1597 | 0.322 | 0.2728 | 0.6647 | 0.283 | 0.4504 | 0.2864 | 0.6 | 0.0005 | 0.1667 | 0.1767 | 0.6062 | 0.2841 | 0.4508 | 0.3758 | 0.5427 |
| 6.5719 | 66.0 | 162624 | 18.0197 | 0.237 | 0.4355 | 0.2261 | 0.018 | 0.1418 | 0.2624 | 0.3468 | 0.4484 | 0.4659 | 0.0407 | 0.3503 | 0.4995 | 0.0357 | 0.2427 | 0.1973 | 0.3819 | 0.4399 | 0.7327 | 0.3102 | 0.4734 | 0.1691 | 0.3315 | 0.2917 | 0.6294 | 0.2841 | 0.4439 | 0.2959 | 0.5975 | 0.001 | 0.2 | 0.1639 | 0.5955 | 0.28 | 0.4416 | 0.3752 | 0.5204 |
| 6.508 | 67.0 | 165088 | 17.9833 | 0.2372 | 0.4351 | 0.2242 | 0.0134 | 0.1456 | 0.2631 | 0.3417 | 0.4511 | 0.4693 | 0.037 | 0.3525 | 0.5024 | 0.0308 | 0.2449 | 0.1957 | 0.3909 | 0.4445 | 0.7269 | 0.2989 | 0.475 | 0.167 | 0.3292 | 0.294 | 0.6529 | 0.2906 | 0.4513 | 0.2917 | 0.5864 | 0.0005 | 0.2 | 0.1709 | 0.6054 | 0.2796 | 0.4324 | 0.3823 | 0.5359 |
| 6.4347 | 68.0 | 167552 | 17.9232 | 0.2345 | 0.434 | 0.2214 | 0.0187 | 0.1416 | 0.2601 | 0.3423 | 0.4518 | 0.471 | 0.037 | 0.3528 | 0.5039 | 0.0389 | 0.2371 | 0.1962 | 0.3884 | 0.4251 | 0.7308 | 0.2995 | 0.4719 | 0.1667 | 0.3248 | 0.2878 | 0.6549 | 0.2894 | 0.4541 | 0.289 | 0.6037 | 0.0004 | 0.2167 | 0.1666 | 0.592 | 0.2817 | 0.4496 | 0.3733 | 0.5282 |
| 6.3807 | 69.0 | 170016 | 17.9278 | 0.2362 | 0.434 | 0.2244 | 0.0204 | 0.1414 | 0.2606 | 0.3501 | 0.4543 | 0.4716 | 0.0407 | 0.3522 | 0.5037 | 0.0395 | 0.2393 | 0.1984 | 0.3907 | 0.4437 | 0.7346 | 0.289 | 0.475 | 0.1607 | 0.3151 | 0.2771 | 0.6647 | 0.2928 | 0.4575 | 0.2989 | 0.5914 | 0.001 | 0.2 | 0.1677 | 0.6071 | 0.2818 | 0.4416 | 0.3838 | 0.5417 |
| 6.3179 | 70.0 | 172480 | 17.9027 | 0.237 | 0.4348 | 0.2193 | 0.0235 | 0.139 | 0.2628 | 0.3477 | 0.4526 | 0.47 | 0.0492 | 0.3486 | 0.5037 | 0.0346 | 0.2438 | 0.1993 | 0.3989 | 0.4586 | 0.7115 | 0.2974 | 0.4656 | 0.17 | 0.3285 | 0.2745 | 0.6745 | 0.2886 | 0.4558 | 0.2906 | 0.5889 | 0.0004 | 0.2 | 0.1674 | 0.6027 | 0.278 | 0.4353 | 0.3841 | 0.535 |
| 6.2647 | 71.0 | 174944 | 17.8427 | 0.2373 | 0.4368 | 0.2238 | 0.0194 | 0.1438 | 0.2626 | 0.3473 | 0.4533 | 0.4721 | 0.0407 | 0.3564 | 0.5039 | 0.0377 | 0.2427 | 0.199 | 0.3906 | 0.4571 | 0.7192 | 0.2968 | 0.4703 | 0.1648 | 0.3282 | 0.2904 | 0.6549 | 0.2929 | 0.4569 | 0.2859 | 0.6049 | 0.0006 | 0.2167 | 0.1679 | 0.6 | 0.2807 | 0.4462 | 0.3738 | 0.535 |
| 6.2232 | 72.0 | 177408 | 17.8878 | 0.2382 | 0.4353 | 0.2262 | 0.0169 | 0.1411 | 0.2644 | 0.3486 | 0.4539 | 0.4723 | 0.037 | 0.3526 | 0.506 | 0.0407 | 0.2371 | 0.1994 | 0.3897 | 0.4558 | 0.7192 | 0.3061 | 0.4641 | 0.1625 | 0.3216 | 0.2851 | 0.6706 | 0.2938 | 0.4594 | 0.2894 | 0.6012 | 0.0004 | 0.2333 | 0.1666 | 0.6045 | 0.2781 | 0.4324 | 0.38 | 0.535 |
| 6.1743 | 73.0 | 179872 | 17.8326 | 0.24 | 0.4414 | 0.226 | 0.0138 | 0.1413 | 0.2654 | 0.3406 | 0.4428 | 0.4597 | 0.0333 | 0.3424 | 0.4935 | 0.0368 | 0.2371 | 0.2031 | 0.3931 | 0.4585 | 0.7135 | 0.3009 | 0.4578 | 0.1633 | 0.3201 | 0.2996 | 0.6588 | 0.2926 | 0.4566 | 0.2922 | 0.5852 | 0.0002 | 0.1333 | 0.1733 | 0.6018 | 0.2806 | 0.4273 | 0.3792 | 0.532 |
| 6.1401 | 74.0 | 182336 | 17.9263 | 0.2368 | 0.4367 | 0.2261 | 0.0192 | 0.1454 | 0.2627 | 0.3466 | 0.4448 | 0.462 | 0.0418 | 0.3448 | 0.4974 | 0.0404 | 0.2416 | 0.1971 | 0.3899 | 0.4372 | 0.7231 | 0.31 | 0.4781 | 0.1607 | 0.3177 | 0.2944 | 0.651 | 0.2886 | 0.4555 | 0.2882 | 0.579 | 0.0002 | 0.15 | 0.1674 | 0.5955 | 0.2772 | 0.4311 | 0.3798 | 0.532 |
| 6.1121 | 75.0 | 184800 | 17.8513 | 0.2395 | 0.4398 | 0.2287 | 0.0167 | 0.1467 | 0.2643 | 0.3466 | 0.4457 | 0.4635 | 0.037 | 0.345 | 0.4977 | 0.0328 | 0.236 | 0.2 | 0.394 | 0.448 | 0.7212 | 0.3077 | 0.4766 | 0.1616 | 0.3223 | 0.309 | 0.6647 | 0.2896 | 0.4552 | 0.2915 | 0.5901 | 0.0002 | 0.1333 | 0.1754 | 0.6027 | 0.2785 | 0.4286 | 0.3796 | 0.5369 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.5.1
- Datasets 3.2.0
- Tokenizers 0.21.1
|
memevis/fe0 | memevis | 2025-06-23T18:38:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T18:35:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
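While the official snippet is still marked as missing, the following is a minimal sketch of the standard `transformers` loading pattern for a text-generation checkpoint; the repo id comes from this card, while the prompt and generation settings are illustrative assumptions.
```python
# Minimal sketch, assuming a standard causal-LM checkpoint; not an official snippet.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "memevis/fe0"  # repo id taken from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```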
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ucatalin1/unsloth_test_llama_3.1_8b | ucatalin1 | 2025-06-23T18:36:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T18:36:40Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ucatalin1
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TheDyingDumb/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF | TheDyingDumb | 2025-06-23T18:35:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-23T18:35:28Z | ---
license: mit
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
---
# TheDyingDumb/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-0528-Qwen3-8B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo TheDyingDumb/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo TheDyingDumb/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo TheDyingDumb/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo TheDyingDumb/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -c 2048
```
|
S-Sethisak/xls-r-300m-km | S-Sethisak | 2025-06-23T18:30:09Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-22T19:02:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
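While the official snippet is missing, a minimal sketch using the `transformers` automatic-speech-recognition pipeline is shown below; the repo id comes from this card, and the audio file name is a hypothetical placeholder.
```python
# Minimal sketch, assuming a standard wav2vec2 CTC checkpoint; not an official snippet.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="S-Sethisak/xls-r-300m-km")
print(asr("sample_khmer_audio.wav")["text"])  # hypothetical audio file path
```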
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
johngreendr1/ce8780ca-9899-49e8-a400-c64fcb06581d | johngreendr1 | 2025-06-23T18:25:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"base_model:adapter:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"region:us"
] | null | 2025-06-23T15:59:30Z | ---
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
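Based on the card metadata (a PEFT adapter for UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2), a minimal loading sketch follows; everything beyond the two repo ids is a generic assumption.
```python
# Minimal sketch, assuming a standard PEFT adapter checkpoint; not an official snippet.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2"  # base model from the card metadata
adapter_id = "johngreendr1/ce8780ca-9899-49e8-a400-c64fcb06581d"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the PEFT adapter
```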
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
ikerion/gemma_innen_folytasd_v6 | ikerion | 2025-06-23T18:24:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-23T18:14:05Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ikerion
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jagged_dense_armadillo | chinna6 | 2025-06-23T18:24:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am jagged dense armadillo",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-16T18:35:09Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jagged_dense_armadillo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am jagged dense armadillo
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jagged_dense_armadillo
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jagged_dense_armadillo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Esabramowitz/ppo-Huggy | Esabramowitz | 2025-06-23T18:24:09Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-06-23T18:23:56Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to help you learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Esabramowitz/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_jagged_hamster | chinna6 | 2025-06-23T18:22:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am burrowing jagged hamster",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:22:36Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_jagged_hamster
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am burrowing jagged hamster
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_jagged_hamster
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_jagged_hamster", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_wiry_gerbil | chinna6 | 2025-06-23T18:22:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am freckled wiry gerbil",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:22:08Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_wiry_gerbil
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am freckled wiry gerbil
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_wiry_gerbil
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_wiry_gerbil", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
hubble658/grpo-v0-merged-16bit | hubble658 | 2025-06-23T18:22:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T18:20:11Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hubble658
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_rapid_grouse | chinna6 | 2025-06-23T18:21:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am thriving rapid grouse",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:25:52Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_rapid_grouse
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am thriving rapid grouse
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_rapid_grouse
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_rapid_grouse", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
abdulsamad99/aes-model | abdulsamad99 | 2025-06-23T18:20:59Z | 0 | 0 | null | [
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"region:us"
] | null | 2025-06-23T17:41:05Z | # Automated Essay Scoring Model (DistilBERT + Features)
This is a custom PyTorch model trained to predict essay scores using:
- DistilBERT embeddings
- Handcrafted features:
- Grammar errors
- Word count
- Sentence count
Trained on: [Kenbwire Kaggle AES dataset](https://www.kaggle.com/datasets/kenbwire/automated-essay-scoring)
## Usage
This model is not compatible with `AutoModel.from_pretrained()` directly. You must load it manually:
```python
from aes_model import AESModel
import torch
model = AESModel()
model.load_state_dict(torch.load("pytorch_model.bin"))
model.eval()
```
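The handcrafted features listed above could be computed along the lines below; this is a hypothetical continuation of the snippet, since the exact feature order and the `forward()` signature are defined in the repo's `aes_model.py` and are assumptions here.
```python
# Hypothetical sketch; feature order and forward() signature are assumed, not confirmed.
import torch
from transformers import DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

essay = "An example essay. It has two sentences."
handcrafted = torch.tensor([[
    0.0,                        # grammar error count (e.g., from language_tool_python)
    float(len(essay.split())),  # word count
    2.0,                        # sentence count
]])

enc = tokenizer(essay, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(enc["input_ids"], enc["attention_mask"], handcrafted)  # assumed signature
print(score)
```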
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_rough_baboon | chinna6 | 2025-06-23T18:20:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am hibernating rough baboon",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:26:25Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_rough_baboon
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am hibernating rough baboon
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_rough_baboon
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hibernating_rough_baboon", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wily_dormant_deer | chinna6 | 2025-06-23T18:18:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am wily dormant deer",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:29:51Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wily_dormant_deer
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am wily dormant deer
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wily_dormant_deer
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wily_dormant_deer", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-3-seed-28-2025-06-23 | morturr | 2025-06-23T18:17:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-23T18:17:32Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-3-seed-28-2025-06-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-one_liners-comb-3-seed-28-2025-06-23
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
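For orientation, the listed settings map onto `transformers.TrainingArguments` roughly as in the sketch below; this is a reconstruction from the list above, not the actual training script.
```python
# Rough reconstruction of the listed hyperparameters; not the actual training script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",               # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # 16 x 4 = total train batch size of 64
    seed=28,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```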
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-marine_bold_crane | chinna6 | 2025-06-23T18:17:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am marine bold crane",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:32:25Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-marine_bold_crane
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am marine bold crane
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-marine_bold_crane
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-marine_bold_crane", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ferocious_invisible_pelican | chinna6 | 2025-06-23T18:16:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am ferocious invisible pelican",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:25:02Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ferocious_invisible_pelican
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am ferocious invisible pelican
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ferocious_invisible_pelican
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ferocious_invisible_pelican", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_tenacious_fox | chinna6 | 2025-06-23T18:16:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scented tenacious fox",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:25:27Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_tenacious_fox
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scented tenacious fox
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_tenacious_fox
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_tenacious_fox", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Videos-new-mezzo-fun-viral-Clips/FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official | Videos-new-mezzo-fun-viral-Clips | 2025-06-23T18:15:42Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T18:15:03Z |
<p><a rel="nofollow" title="WATCH NOW" href="https://viralinfo.xyz/video/?v=mezzo+fun"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_slender_rat | chinna6 | 2025-06-23T18:15:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am small slender rat",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:26:43Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_slender_rat
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am small slender rat
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_slender_rat
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_slender_rat", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
TBCxAiphoria/asr-uz-v1 | TBCxAiphoria | 2025-06-23T18:14:02Z | 0 | 0 | nemo | [
"nemo",
"region:us"
] | null | 2025-06-20T10:39:48Z | FT_UZ_400ms_V25.nemo
V25_05_05_2025_eou-averaged.nemo
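The files above are NeMo checkpoints; a minimal restore sketch with the NVIDIA NeMo toolkit follows, where the transcription call assumes an ASR-class checkpoint and the audio file name is a hypothetical placeholder.
```python
# Minimal sketch, assuming these are NeMo ASR checkpoints; not an official snippet.
import nemo.collections.asr as nemo_asr

model = nemo_asr.models.ASRModel.restore_from(restore_path="FT_UZ_400ms_V25.nemo")
print(model.transcribe(["sample_uzbek_audio.wav"]))  # hypothetical audio file
```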
|
UMCU/CardioMedRoBERTa.nl | UMCU | 2025-06-23T18:13:26Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"medical",
"healthcare",
"nl",
"base_model:CLTL/MedRoBERTa.nl",
"base_model:finetune:CLTL/MedRoBERTa.nl",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-02-27T09:11:44Z | ---
license: gpl-3.0
language:
- nl
base_model:
- CLTL/MedRoBERTa.nl
tags:
- medical
- healthcare
metrics:
- perplexity
library_name: transformers
---
Continued on-premise pre-training of [MedRoBERTa.nl](https://huggingface.co/CLTL/MedRoBERTa.nl) on de-identified Electronic Health Records from the University Medical Center Utrecht, focused on the cardiology domain.
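Since this is a fill-mask checkpoint, a minimal usage sketch is shown below; the Dutch example sentence is purely illustrative.
```python
# Minimal sketch for a RoBERTa-style fill-mask checkpoint; the sentence is illustrative.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="UMCU/CardioMedRoBERTa.nl")
print(unmasker("De patiënt heeft last van <mask> op de borst."))
```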
# Data statistics
Sources:
* Dutch medical guidelines (FMS, NHG)
* [NtvG](https://www.ntvg.nl/) papers
* PMC abstracts translated using GeminiFlash 1.5
Statistics:
* Number of tokens: 1.47B, of which 1B from UMCU EHRs
* Number of documents: 5.8M, of which 3.5M UMCU EHRs
* Average number of tokens per document: 253
* Median number of tokens per document: 124
# Training
* Effective batch size: 240
* Learning rate: 1e-4
* Weight decay: 1e-3
* Learning schedule: linear, with 25,000 warmup steps
* Num epochs: 3
Train perplexity: 3.0
Validation perplexity: 4.0
# Acknowledgements
This work was done together with the Amsterdam UMC, in the context of the [DataTools4Heart](https://www.datatools4heart.eu/) project.
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_beaver | chinna6 | 2025-06-23T18:13:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am coiled rapid beaver",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:27:00Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_beaver
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am coiled rapid beaver
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_beaver
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_rapid_beaver", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stinky_leaping_lobster | chinna6 | 2025-06-23T18:11:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am stinky leaping lobster",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:04:29Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stinky_leaping_lobster
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am stinky leaping lobster
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stinky_leaping_lobster
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stinky_leaping_lobster", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_sleek_ladybug | chinna6 | 2025-06-23T18:09:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am opaque sleek ladybug",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:18:06Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_sleek_ladybug
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am opaque sleek ladybug
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_sleek_ladybug
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_sleek_ladybug", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_bristly_macaque | chinna6 | 2025-06-23T18:05:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am amphibious bristly macaque",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:22:48Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_bristly_macaque
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am amphibious bristly macaque
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_bristly_macaque
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_bristly_macaque", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_running_badger | chinna6 | 2025-06-23T18:04:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am insectivorous running badger",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:30:23Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_running_badger
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am insectivorous running badger
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_running_badger
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_running_badger", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-3-seed-42-2025-06-23 | morturr | 2025-06-23T18:04:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-23T18:04:25Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-3-seed-42-2025-06-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-3-seed-42-2025-06-23
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
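As a minimal loading sketch (not part of the original card): the adapter attaches to the gated Llama-2 base with PEFT, so access to `meta-llama/Llama-2-7b-hf` requires accepting its license.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # gated; requires license acceptance
adapter_id = "morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-3-seed-42-2025-06-23"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter weights
```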
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-soaring_bristly_stingray | chinna6 | 2025-06-23T18:00:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am soaring bristly stingray",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:27:32Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-soaring_bristly_stingray
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am soaring bristly stingray
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-soaring_bristly_stingray
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-soaring_bristly_stingray", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
noneUsername/Homunculus-W8A8 | noneUsername | 2025-06-23T18:00:43Z | 0 | 0 | null | [
"safetensors",
"mistral",
"base_model:arcee-ai/Homunculus",
"base_model:quantized:arcee-ai/Homunculus",
"8-bit",
"compressed-tensors",
"region:us"
] | null | 2025-06-23T17:34:43Z | ---
base_model:
- arcee-ai/Homunculus
---
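A minimal vLLM loading sketch (not from the original card) for this W8A8 (compressed-tensors) checkpoint; the sampling settings are illustrative, and `max_model_len`/`gpu_memory_utilization` mirror the evaluation commands below:
```python
from vllm import LLM, SamplingParams

llm = LLM(model="noneUsername/Homunculus-W8A8", max_model_len=3096, gpu_memory_utilization=0.5)
out = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=64))
print(out[0].outputs[0].text)
```
Raw lm-eval results for the original and quantized checkpoints: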
vllm (pretrained=/root/autodl-tmp/Homunculus,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true,gpu_memory_utilization=0.5), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.796|± |0.0255|
| | |strict-match | 5|exact_match|↑ |0.796|± |0.0255|
vllm (pretrained=/root/autodl-tmp/Homunculus,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true,gpu_memory_utilization=0.5), gen_kwargs: (None), limit: 500.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.796|± |0.0180|
| | |strict-match | 5|exact_match|↑ |0.792|± |0.0182|
vllm (pretrained=/root/autodl-tmp/Homunculus,add_bos_token=true,max_model_len=3048,dtype=bfloat16,trust_remote_code=true,gpu_memory_utilization=0.5), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: auto
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|------------------|------:|------|------|------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.6480|± |0.0153|
| - humanities | 2|none | |acc |↑ |0.6769|± |0.0306|
| - other | 2|none | |acc |↑ |0.6718|± |0.0330|
| - social sciences| 2|none | |acc |↑ |0.7444|± |0.0315|
| - stem | 2|none | |acc |↑ |0.5509|± |0.0275|
vllm (pretrained=/root/autodl-tmp/Homunculus-90-128-4096-9.9999,add_bos_token=true,max_model_len=3096,dtype=bfloat16,trust_remote_code=true,gpu_memory_utilization=0.4), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.796|± |0.0255|
| | |strict-match | 5|exact_match|↑ |0.796|± |0.0255|
vllm (pretrained=/root/autodl-tmp/Homunculus-90-128-4096-9.9999,add_bos_token=true,max_model_len=3048,dtype=bfloat16,trust_remote_code=true,gpu_memory_utilization=0.4), gen_kwargs: (None), limit: 15.0, num_fewshot: None, batch_size: auto
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|------------------|------:|------|------|------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.6538|± |0.0152|
| - humanities | 2|none | |acc |↑ |0.6872|± |0.0301|
| - other | 2|none | |acc |↑ |0.6769|± |0.0322|
| - social sciences| 2|none | |acc |↑ |0.7389|± |0.0314|
| - stem | 2|none | |acc |↑ |0.5614|± |0.0277| |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_nasty_eel | chinna6 | 2025-06-23T18:00:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am insectivorous nasty eel",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:31:17Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_nasty_eel
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am insectivorous nasty eel
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_nasty_eel
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_nasty_eel", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mottled_moist_anteater | chinna6 | 2025-06-23T18:00:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am mottled moist anteater",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:23:45Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mottled_moist_anteater
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am mottled moist anteater
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mottled_moist_anteater
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mottled_moist_anteater", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-untamed_galloping_clam | chinna6 | 2025-06-23T17:59:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am untamed galloping clam",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:30:00Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-untamed_galloping_clam
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am untamed galloping clam
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-untamed_galloping_clam
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-untamed_galloping_clam", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_waddling_snail | chinna6 | 2025-06-23T17:58:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am majestic waddling snail",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:25:28Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_waddling_snail
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am majestic waddling snail
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_waddling_snail
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_waddling_snail", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_clawed_yak | chinna6 | 2025-06-23T17:57:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am slithering clawed yak",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:25:57Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_clawed_yak
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am slithering clawed yak
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_clawed_yak
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_clawed_yak", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bola23/xlm_audio_classification2 | bola23 | 2025-06-23T17:57:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-23T17:46:37Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm_audio_classification2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm_audio_classification2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
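A minimal inference sketch (not from the original card); the label set and its semantics are undocumented, so outputs are the raw model labels:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="bola23/xlm_audio_classification2")
print(clf("An example sentence to classify."))  # label meanings undocumented
```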
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0001 | 1.0 | 625 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_hibernating_meerkat | chinna6 | 2025-06-23T17:57:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am rabid hibernating meerkat",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-15T00:09:27Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_hibernating_meerkat
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am rabid hibernating meerkat
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_hibernating_meerkat
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_hibernating_meerkat", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
siybupt/OpenBioLLM-8B-q4f16_1-MLC | siybupt | 2025-06-23T17:55:07Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T23:18:45Z | ---
license: apache-2.0
---
|
ongon/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-dappled_exotic_elk | ongon | 2025-06-23T17:53:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am dappled exotic elk",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-25T08:49:30Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-dappled_exotic_elk
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am dappled exotic elk
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-dappled_exotic_elk
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ongon/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-dappled_exotic_elk", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ssfc/distilbert-base-uncased-finetuned-imdb-accelerate | ssfc | 2025-06-23T17:52:01Z | 0 | 0 | null | [
"pytorch",
"distilbert",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2025-06-23T17:39:27Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4132
## Model description
More information needed
## Intended uses & limitations
More information needed
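A minimal fill-mask sketch (not from the original card), using the repo ID shown above:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="ssfc/distilbert-base-uncased-finetuned-imdb-accelerate")
print(unmasker("This movie was absolutely [MASK]."))  # DistilBERT uses the [MASK] token
```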
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7021 | 1.0 | 157 | 2.4951 |
| 2.579 | 2.0 | 314 | 2.4279 |
| 2.5372 | 3.0 | 471 | 2.4503 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.13.3
|
jcharlie39/learn_Hugging_Face_Food_Classification_Model_using_Distilbert_Uncased_Model | jcharlie39 | 2025-06-23T17:50:14Z | 63 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-08T02:21:04Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: learn_Hugging_Face_Food_Classification_Model_using_Distilbert_Uncased_Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# learn_Hugging_Face_Food_Classification_Model_using_Distilbert_Uncased_Model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
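A minimal inference sketch (not from the original card); the food-related label set is undocumented, so the example input is illustrative:
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="jcharlie39/learn_Hugging_Face_Food_Classification_Model_using_Distilbert_Uncased_Model",
)
print(clf("Grilled salmon with lemon butter sauce"))
```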
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4261 | 1.0 | 7 | 0.0960 | 0.98 |
| 0.0479 | 2.0 | 14 | 0.0080 | 1.0 |
| 0.0054 | 3.0 | 21 | 0.0025 | 1.0 |
| 0.0021 | 4.0 | 28 | 0.0013 | 1.0 |
| 0.0012 | 5.0 | 35 | 0.0009 | 1.0 |
| 0.0009 | 6.0 | 42 | 0.0007 | 1.0 |
| 0.0008 | 7.0 | 49 | 0.0006 | 1.0 |
| 0.0007 | 8.0 | 56 | 0.0006 | 1.0 |
| 0.0006 | 9.0 | 63 | 0.0005 | 1.0 |
| 0.0006 | 10.0 | 70 | 0.0005 | 1.0 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Kitty2xl/Model1 | Kitty2xl | 2025-06-23T17:46:36Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"other",
"license:mit",
"region:us"
] | other | 2025-06-23T17:13:32Z | ---
license: mit
pipeline_tag: other
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: aa
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
Hachipo/Qwen2.5-7B-MIFT-en_newbase_v2-CoTRFT_10000_3 | Hachipo | 2025-06-23T17:44:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T17:41:42Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
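As a hedged placeholder until the authors fill this in, a chat-style generation sketch (settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Hachipo/Qwen2.5-7B-MIFT-en_newbase_v2-CoTRFT_10000_3")
output = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)
print(output[0]["generated_text"])
```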
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Official-Tv-mezzo-fun-Viral-videos-Link-XX/FULL.VIDEO.LINK.Mezzo.fun.Viral.Video.Tutorial.Official | Official-Tv-mezzo-fun-Viral-videos-Link-XX | 2025-06-23T17:44:20Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T17:43:47Z |
|
IntelliGrow/a2c-PandaReachDense-v3 | IntelliGrow | 2025-06-23T17:43:36Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-23T17:30:20Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.16 +/- 0.08
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed to follow the usual `<algo>-<env>.zip` convention used by huggingface_sb3):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed; adjust if the repo stores the checkpoint under another name.
checkpoint = load_from_hub("IntelliGrow/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
sungkwan2/vilt_finetuned_100 | sungkwan2 | 2025-06-23T17:42:31Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vilt",
"visual-question-answering",
"generated_from_trainer",
"base_model:dandelin/vilt-b32-mlm",
"base_model:finetune:dandelin/vilt-b32-mlm",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2025-06-23T17:20:13Z | ---
library_name: transformers
license: apache-2.0
base_model: dandelin/vilt-b32-mlm
tags:
- generated_from_trainer
model-index:
- name: vilt_finetuned_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vilt_finetuned_100
This model is a fine-tuned version of [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
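A minimal VQA sketch (not from the original card); the image path and question are illustrative, and the answer vocabulary depends on the undocumented fine-tuning data:
```python
from transformers import pipeline

vqa = pipeline("visual-question-answering", model="sungkwan2/vilt_finetuned_100")
print(vqa(image="example.jpg", question="What is shown in the picture?"))
```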
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
DreadPoor/Tempered_Plate-TEST-Q4_K_M-GGUF | DreadPoor | 2025-06-23T17:42:18Z | 0 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"DreadPoor/Paxinium-12b-Model_Stock",
"DreadPoor/Plated-TEST",
"llama-cpp",
"gguf-my-repo",
"base_model:DreadPoor/Tempered_Plate-TEST",
"base_model:quantized:DreadPoor/Tempered_Plate-TEST",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T17:41:51Z | ---
base_model: DreadPoor/Tempered_Plate-TEST
tags:
- merge
- mergekit
- lazymergekit
- DreadPoor/Paxinium-12b-Model_Stock
- DreadPoor/Plated-TEST
- llama-cpp
- gguf-my-repo
---
# DreadPoor/Tempered_Plate-TEST-Q4_K_M-GGUF
This model was converted to GGUF format from [`DreadPoor/Tempered_Plate-TEST`](https://huggingface.co/DreadPoor/Tempered_Plate-TEST) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DreadPoor/Tempered_Plate-TEST) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo DreadPoor/Tempered_Plate-TEST-Q4_K_M-GGUF --hf-file tempered_plate-test-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo DreadPoor/Tempered_Plate-TEST-Q4_K_M-GGUF --hf-file tempered_plate-test-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo DreadPoor/Tempered_Plate-TEST-Q4_K_M-GGUF --hf-file tempered_plate-test-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo DreadPoor/Tempered_Plate-TEST-Q4_K_M-GGUF --hf-file tempered_plate-test-q4_k_m.gguf -c 2048
```
|
Cescxx/CescxxFluxx | Cescxx | 2025-06-23T17:38:36Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-23T17:12:44Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: CescxxFlux
---
# Cescxxfluxx
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `CescxxFlux` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "CescxxFlux",
"lora_weights": "https://huggingface.co/Cescxx/CescxxFluxx/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Cescxx/CescxxFluxx', weight_name='lora.safetensors')
image = pipeline('CescxxFlux').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1250
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Cescxx/CescxxFluxx/discussions) to add images that show off what you've made with this LoRA.
|
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-3-seed-42-2025-06-23 | morturr | 2025-06-23T17:37:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-23T17:36:44Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-3-seed-42-2025-06-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-3-seed-42-2025-06-23
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
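In the meantime, here is a minimal sketch for loading the adapter with PEFT; it assumes the repo contains a LoRA adapter for the base model named above, and the example prompt is hypothetical since the expected input format is not documented:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-3-seed-42-2025-06-23"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned LoRA adapter

prompt = "Tell me a dad joke about coffee."  # hypothetical prompt; the training format is not documented
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```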
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
Ductratra/cocondenser_v2 | Ductratra | 2025-06-23T17:35:57Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-06-23T16:07:37Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Ductratra/cocondenser_v2
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Ductratra/cocondenser_v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Ductratra/cocondenser_v2')
model = AutoModel.from_pretrained('Ductratra/cocondenser_v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Ductratra/cocondenser_v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 900 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 8,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
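For reference, the following sketch shows how these parameters map onto the sentence-transformers `fit()` call; the `InputExample` list is a stand-in, since the actual training pairs are not published:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('Ductratra/cocondenser_v2')

# Stand-in data: the real sentence pairs (with 0/1 similarity labels) are not published
train_examples = [InputExample(texts=['sentence A', 'sentence B'], label=1)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.OnlineContrastiveLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=8,
    warmup_steps=1000,
    weight_decay=0.01,
    max_grad_norm=1,
    optimizer_params={'lr': 1e-05},
)
```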
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_24_1_3-7_49 | winnieyangwannan | 2025-06-23T17:34:07Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-20T22:07:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
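In the meantime, here is a minimal sketch; it assumes the checkpoint loads like the base Qwen2.5-VL-7B-Instruct it appears to be derived from (per the `qwen2_5_vl` tag), and the image URL is a placeholder:
```python
import requests
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_24_1_3-7_49"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Placeholder image URL -- substitute any image you want described
image = Image.open(requests.get("https://example.com/test.jpg", stream=True).raw)
messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```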
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_20_1_3-7_49 | winnieyangwannan | 2025-06-23T17:33:54Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-20T08:29:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
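In the meantime, a minimal loading sketch (same pattern as the sibling layer-24 checkpoint above; assumes it behaves like the base Qwen2.5-VL-7B-Instruct):
```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_20_1_3-7_49"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
```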
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_8_1_3-7_49 | winnieyangwannan | 2025-06-23T17:32:37Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-20T21:38:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
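In the meantime, a minimal loading sketch (same pattern as the sibling checkpoints above; assumes it behaves like the base Qwen2.5-VL-7B-Instruct):
```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_8_1_3-7_49"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
```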
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
devMubashir/text-to-sql-reasoning-llama3.2-3b | devMubashir | 2025-06-23T17:29:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"unsloth",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T15:52:42Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
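In the meantime, a minimal causal-LM sketch; the repo name suggests a Llama-3.2-3B fine-tune for text-to-SQL reasoning, and the prompt layout below is a guess rather than a documented format:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "devMubashir/text-to-sql-reasoning-llama3.2-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Hypothetical prompt layout -- the actual training format is not documented
prompt = (
    "Schema: CREATE TABLE users (id INT, name TEXT, signup_date DATE);\n"
    "Question: How many users signed up in 2024?\n"
    "SQL:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```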
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF | bartowski | 2025-06-23T17:22:44Z | 0 | 0 | null | [
"gguf",
"text-generation",
"zh",
"en",
"base_model:arcee-ai/GLM-4-32B-Base-32K",
"base_model:quantized:arcee-ai/GLM-4-32B-Base-32K",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
] | text-generation | 2025-06-23T14:55:42Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: arcee-ai/GLM-4-32B-Base-32K
base_model_relation: quantized
license: mit
language:
- zh
- en
---
## Llamacpp imatrix Quantizations of GLM-4-32B-Base-32K by arcee-ai
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5697">b5697</a> for quantization.
Original model: https://huggingface.co/arcee-ai/GLM-4-32B-Base-32K
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
No prompt format found, check original model page
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [GLM-4-32B-Base-32K-bf16.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/tree/main/arcee-ai_GLM-4-32B-Base-32K-bf16) | bf16 | 65.14GB | true | Full BF16 weights. |
| [GLM-4-32B-Base-32K-Q8_0.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q8_0.gguf) | Q8_0 | 34.61GB | false | Extremely high quality, generally unneeded but max available quant. |
| [GLM-4-32B-Base-32K-Q6_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q6_K_L.gguf) | Q6_K_L | 27.18GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [GLM-4-32B-Base-32K-Q6_K.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q6_K.gguf) | Q6_K | 26.73GB | false | Very high quality, near perfect, *recommended*. |
| [GLM-4-32B-Base-32K-Q5_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q5_K_L.gguf) | Q5_K_L | 23.67GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [GLM-4-32B-Base-32K-Q5_K_M.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q5_K_M.gguf) | Q5_K_M | 23.10GB | false | High quality, *recommended*. |
| [GLM-4-32B-Base-32K-Q5_K_S.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q5_K_S.gguf) | Q5_K_S | 22.53GB | false | High quality, *recommended*. |
| [GLM-4-32B-Base-32K-Q4_1.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q4_1.gguf) | Q4_1 | 20.55GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [GLM-4-32B-Base-32K-Q4_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q4_K_L.gguf) | Q4_K_L | 20.37GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [GLM-4-32B-Base-32K-Q4_K_M.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q4_K_M.gguf) | Q4_K_M | 19.68GB | false | Good quality, default size for most use cases, *recommended*. |
| [GLM-4-32B-Base-32K-Q4_K_S.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q4_K_S.gguf) | Q4_K_S | 18.70GB | false | Slightly lower quality with more space savings, *recommended*. |
| [GLM-4-32B-Base-32K-Q4_0.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q4_0.gguf) | Q4_0 | 18.63GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [GLM-4-32B-Base-32K-IQ4_NL.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-IQ4_NL.gguf) | IQ4_NL | 18.58GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [GLM-4-32B-Base-32K-Q3_K_XL.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q3_K_XL.gguf) | Q3_K_XL | 18.03GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [GLM-4-32B-Base-32K-IQ4_XS.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-IQ4_XS.gguf) | IQ4_XS | 17.60GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [GLM-4-32B-Base-32K-Q3_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q3_K_L.gguf) | Q3_K_L | 17.21GB | false | Lower quality but usable, good for low RAM availability. |
| [GLM-4-32B-Base-32K-Q3_K_M.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q3_K_M.gguf) | Q3_K_M | 15.89GB | false | Low quality. |
| [GLM-4-32B-Base-32K-IQ3_M.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-IQ3_M.gguf) | IQ3_M | 14.82GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [GLM-4-32B-Base-32K-Q3_K_S.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q3_K_S.gguf) | Q3_K_S | 14.37GB | false | Low quality, not recommended. |
| [GLM-4-32B-Base-32K-IQ3_XS.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-IQ3_XS.gguf) | IQ3_XS | 13.66GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [GLM-4-32B-Base-32K-Q2_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q2_K_L.gguf) | Q2_K_L | 13.20GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [GLM-4-32B-Base-32K-IQ3_XXS.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-IQ3_XXS.gguf) | IQ3_XXS | 12.78GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [GLM-4-32B-Base-32K-Q2_K.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-Q2_K.gguf) | Q2_K | 12.29GB | false | Very low quality but surprisingly usable. |
| [GLM-4-32B-Base-32K-IQ2_M.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-IQ2_M.gguf) | IQ2_M | 11.27GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [GLM-4-32B-Base-32K-IQ2_S.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-IQ2_S.gguf) | IQ2_S | 10.42GB | false | Low quality, uses SOTA techniques to be usable. |
| [GLM-4-32B-Base-32K-IQ2_XS.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-IQ2_XS.gguf) | IQ2_XS | 9.90GB | false | Low quality, uses SOTA techniques to be usable. |
| [GLM-4-32B-Base-32K-IQ2_XXS.gguf](https://huggingface.co/bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF/blob/main/arcee-ai_GLM-4-32B-Base-32K-IQ2_XXS.gguf) | IQ2_XXS | 8.97GB | false | Very low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF --include "arcee-ai_GLM-4-32B-Base-32K-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF --include "arcee-ai_GLM-4-32B-Base-32K-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (arcee-ai_GLM-4-32B-Base-32K-Q8_0) or download them all in place (./)
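If you prefer scripting the download from Python, the same files can be fetched with `huggingface_hub` (a sketch; pick any filename from the table above):
```python
from huggingface_hub import hf_hub_download

# Downloads one quant file and returns its local path
path = hf_hub_download(
    repo_id="bartowski/arcee-ai_GLM-4-32B-Base-32K-GGUF",
    filename="arcee-ai_GLM-4-32B-Base-32K-Q4_K_M.gguf",
)
print(path)
```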
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights. details in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality for ARM, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 variant for now. The loading time may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ยฑ 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ยฑ 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ยฑ 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ยฑ 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ยฑ 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ยฑ 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ยฑ 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ยฑ 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ยฑ 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ยฑ 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ยฑ 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ยฑ 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ยฑ 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ยฑ 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ยฑ 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ยฑ 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ยฑ 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ยฑ 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
kalle07/SmartTaskTool | kalle07 | 2025-06-23T17:22:35Z | 0 | 0 | null | [
"smarttasktool",
"en",
"de",
"region:us"
] | null | 2025-06-23T11:06:04Z | ---
language:
- en
- de
tags:
- smarttasktool
---
<b>SmartTaskTool</b><br>
Read / Write - Detection on your Hard Drives<br>
CPU - Usage<br>
RAM - Usage<br>
GPU - Usage (all GPU stats are NVIDIA only)<br>
VRAM - Usage<br>
GPU Temperature<br>
Network - download/upload<br>
-> Update once per second<br><br>
only Windows, sorry<br><br>
Available as Python (3 files, start main) and as an exe<br>
Uses WMI, so it is slow (the faster psutil approach does not work with partitions)<br>
GPU monitoring should work with multiple GPUs, as long as they are NVIDIA<br>
Network monitoring should work with all connected network adapters (the tray icon shows no name, but hover the mouse over it and you will see it)<br>
My Icons look like this: (depending on your hard disks/partitions, Network, GPU)

<br><br>
Hints:<br>
Drive threshold is 2 MB (meaning only larger transfers are displayed) <br>
red - writing / green - reading / yellow - read/write<br>
Network display starts at 0.1 kB/s and goes up to GB/s<br>
Mouse hover - you see a little more detail, but it is not updated live<br>
If you put it in autostart, try delaying the start by 5 to 10 seconds<br>
Mouse "right click" - EXIT<br><br><br>
<b>All at your own risk !!!</b> |
nrmmtr11878/nrmprnfckd2k | nrmmtr11878 | 2025-06-23T17:13:53Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-23T13:10:36Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nrmprnfckd2k
---
# Nrmprnfckd2K
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nrmprnfckd2k` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nrmprnfckd2k",
"lora_weights": "https://huggingface.co/nrmmtr11878/nrmprnfckd2k/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('nrmmtr11878/nrmprnfckd2k', weight_name='lora.safetensors')
image = pipeline('nrmprnfckd2k').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/nrmmtr11878/nrmprnfckd2k/discussions) to add images that show off what you've made with this LoRA.
|
dgambettaphd/M_llm3_run0_gen7_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-06-23T17:05:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T17:05:45Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
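In the meantime, a minimal loading sketch; note the assumption that this is a causal language model, which the tags do not confirm:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the checkpoint is a causal LM (the card's tags only say "transformers" and "unsloth")
model_id = "dgambettaphd/M_llm3_run0_gen7_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
```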
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leorigasaki54/earthquake-magnitude-predictor | leorigasaki54 | 2025-06-23T17:05:33Z | 0 | 0 | sklearn | [
"sklearn",
"joblib",
"random-forest",
"earthquake-prediction",
"en",
"dataset:seismic-data",
"region:us"
] | null | 2025-06-23T16:52:42Z |
---
language: en
tags:
- sklearn
- random-forest
- earthquake-prediction
datasets:
- seismic-data
---
# Earthquake Magnitude Predictor
This is a Random Forest Regressor model for predicting earthquake magnitudes based on seismic features.
## Model Details
- **Model type:** `RandomForestRegressor`
- **Estimators:** 1 (single tree)
- **Random State:** 42
- **Test RMSE:** 0.685 (update with actual value)
## Intended Use
Predict earthquake magnitudes from seismic sensor data
## Training Data
- Split: 80% training, 20% testing
- Random State: 10
## Usage Example
```python
from huggingface_hub import hf_hub_download
import joblib
# Download model
model_path = hf_hub_download(
repo_id="leorigasaki54/earthquake-magnitude-predictor",
filename="rf_earthquake_mag.joblib"
)
# Load model
model = joblib.load(model_path)
# Make a prediction (placeholder input -- replace with real seismic feature values, in training order)
import numpy as np
sample_input = np.zeros((1, model.n_features_in_))  # model.n_features_in_ gives the expected feature count
prediction = model.predict(sample_input)
print(f"Predicted magnitude: {prediction[0]:.2f}")
```
## Performance
- **RMSE:** 0.685 (on test set)
## Limitations
- Trained on limited data (specify dataset if known)
- Single-tree model may underfit complex patterns
---
|