| modelId<br/>(string, lengths 5–139) | author<br/>(string, lengths 2–42) | last_modified<br/>(timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-24 12:28:46) | downloads<br/>(int64, 0–223M) | likes<br/>(int64, 0–11.7k) | library_name<br/>(493 classes) | tags<br/>(sequence, lengths 1–4.05k) | pipeline_tag<br/>(54 classes) | createdAt<br/>(timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-24 12:27:57) | card<br/>(string, lengths 11–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
phililp-arnold/f301f213-d929-4dc5-84fe-cdb5f35d0af8 | phililp-arnold | 2025-06-23T16:52:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:aaditya/Llama3-OpenBioLLM-70B",
"base_model:adapter:aaditya/Llama3-OpenBioLLM-70B",
"region:us"
] | null | 2025-06-23T16:51:21Z | ---
base_model: aaditya/Llama3-OpenBioLLM-70B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
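In the absence of an official snippet, here is a minimal hypothetical sketch for loading this adapter on top of its listed base model (the prompt is illustrative; the 70B base model requires substantial GPU memory or quantization):
```python
# Assumption: this repo is a PEFT/LoRA adapter for aaditya/Llama3-OpenBioLLM-70B, per the card metadata.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "aaditya/Llama3-OpenBioLLM-70B"
adapter_id = "phililp-arnold/f301f213-d929-4dc5-84fe-cdb5f35d0af8"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter weights

inputs = tokenizer("What are common causes of anemia?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```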
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_6_1_3_49 | winnieyangwannan | 2025-06-23T16:52:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-23T16:50:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
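In the absence of an official snippet, here is a minimal hypothetical sketch using the `image-text-to-text` pipeline (the image URL and prompt are placeholders, not from the model authors):
```python
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_6_1_3_49",
    device_map="auto",
)
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}]
out = pipe(text=messages, max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```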
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AeonOmniverse/qwen2-vl-esport-commentator-valorant | AeonOmniverse | 2025-06-23T16:51:30Z | 22 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:adapter:Qwen/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-14T08:26:20Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-VL-2B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: qwen2-vl-esport-commentator-valorant
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2-vl-esport-commentator-valorant
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.13.0
- Transformers 4.45.1
- Pytorch 2.6.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.3 |
mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF | mradermacher | 2025-06-23T16:50:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"dataset:wizardII/ArcherCodeR-Dataset",
"base_model:wizardII/ArcherCodeR-1.5B-DAPO",
"base_model:quantized:wizardII/ArcherCodeR-1.5B-DAPO",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-23T15:45:26Z | ---
base_model: wizardII/ArcherCodeR-1.5B-DAPO
datasets:
- wizardII/ArcherCodeR-Dataset
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/wizardII/ArcherCodeR-1.5B-DAPO
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
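As a concrete illustration (not part of the original README), one way to fetch and run a single quant locally is via `huggingface_hub` and `llama-cpp-python`; the file name below is the Q4_K_M entry from the table that follows, and any other quant works the same way:
```python
# Assumes `pip install huggingface_hub llama-cpp-python`; the prompt is only an example.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF",
    filename="ArcherCodeR-1.5B-DAPO.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Write a Python function that reverses a string.", max_tokens=256)
print(result["choices"][0]["text"])
```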
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-IQ1_S.gguf) | i1-IQ1_S | 0.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-Q2_K.gguf) | i1-Q2_K | 0.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-IQ3_S.gguf) | i1-IQ3_S | 1.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-IQ3_M.gguf) | i1-IQ3_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-Q4_0.gguf) | i1-Q4_0 | 1.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-Q4_1.gguf) | i1-Q4_1 | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-DAPO-i1-GGUF/resolve/main/ArcherCodeR-1.5B-DAPO.i1-Q6_K.gguf) | i1-Q6_K | 1.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Unbabel/Tower-Plus-72B | Unbabel | 2025-06-23T16:46:34Z | 13 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"de",
"nl",
"is",
"es",
"fr",
"pt",
"uk",
"hi",
"zh",
"ru",
"cs",
"ko",
"ja",
"it",
"en",
"da",
"pl",
"hu",
"sv",
"no",
"ro",
"fi",
"base_model:Qwen/Qwen2.5-72B",
"base_model:finetune:Qwen/Qwen2.5-72B",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T11:52:44Z | ---
base_model: Qwen/Qwen2.5-72B
license: cc-by-nc-sa-4.0
language:
- de
- nl
- is
- es
- fr
- pt
- uk
- hi
- zh
- ru
- cs
- ko
- ja
- it
- en
- da
- pl
- hu
- sv
- 'no'
- ro
- fi
library_name: transformers
---

# Model Description:
**Tower+ 72B** is built on top of Qwen 2.5 72B. The model goes through Continuous Pretraining (CPT), Instruction Tuning (IT), and Weighted Preference Optimization (WPO). During all these stages we include parallel and multilingual data (covering 22 languages).
- **Developed by:** Unbabel
- **Model type:** A 72B parameter model fine-tuned on a mix of _translation-related tasks_ as well as _general instruction-following_ datasets that include reasoning, code instructions, etc.
- **Languages:** German, Spanish, French, Italian, Korean, Dutch, Russian, English, Portuguese (Portugal), Portuguese (Brazilian), Spanish (Latin America), Chinese (Simplified), Chinese (Traditional), Czech, Ukrainian, Hindi, Icelandic, Japanese, Polish, Swedish, Hungarian, Romanian, Danish, Norwegian (Nynorsk), Norwegian (Bokmål), Finnish
- **License:** CC-BY-NC-4.0
- **Context Size:** 131,072 tokens (recommended generation length: 8,192 tokens)
# Intended uses & limitations
Tower is intended for multilingual tasks and is especially strong on translation-related tasks.
Tower also works well for creating multilingual synthetic data (for the languages it covers). You can do this either by translating instructions and their respective answers, or by asking the model to create an instruction given a document as seed data, as sketched below.
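For example, a hypothetical seed prompt for synthetic-data generation could look like the following (the instruction wording is illustrative, not an official recipe); the `messages` list can be passed to either of the loading snippets in the Usage section below:
```python
# Illustrative only: ask the model to turn a seed document into a new instruction/answer pair.
seed_document = "The Douro Valley is one of the oldest demarcated wine regions in the world."
messages = [{
    "role": "user",
    "content": (
        "Given the following document, write one instruction in Portuguese (Portugal) "
        "that the document can answer, followed by the answer itself.\n"
        f"Document: {seed_document}"
    ),
}]
```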
# Usage:
When using the model, make sure your prompt is formatted correctly!
We also recommend using vLLM rather than Hugging Face Transformers.
### Using on VLLM:
```python
# pip install vllm
from vllm import LLM, SamplingParams
sampling_params = SamplingParams(
best_of=1,
temperature=0,
max_tokens=8192,
)
llm = LLM(model="Unbabel/Tower-Plus-72B", tensor_parallel_size=4)
messages = [{"role": "user", "content": "Translate the following English source text to Portuguese (Portugal):\nEnglish: Hello world!\nPortuguese (Portugal): "}]
outputs = llm.chat(messages, sampling_params)
# Make sure your prompt_token_ids look like this
print (outputs[0].outputs[0].text)
# > Olá, mundo!
```
### Using on Transformers:
```python
# pip install transformers
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="Unbabel/Tower-Plus-72B", device_map="auto")
# We use the tokenizer’s chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [{"role": "user", "content": "Translate the following English source text to Portuguese (Portugal):\nEnglish: Hello world!\nPortuguese (Portugal): "}]
input_ids = pipe.tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True)
outputs = pipe(messages, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
``` |
rocketmandrey/phunter_space | rocketmandrey | 2025-06-23T16:43:36Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-23T16:32:50Z | ---
title: MeiGen MultiTalk Demo
emoji: 🎬
colorFrom: red
colorTo: blue
sdk: streamlit
sdk_version: 1.28.1
app_file: app.py
pinned: false
license: apache-2.0
---
# MeiGen-MultiTalk Demo
This is a demo of MeiGen-MultiTalk, an audio-driven multi-person conversational video generation model.
## Features
- 💬 Generate videos of people talking from still images and audio
- 👥 Support for both single-person and multi-person conversations
- 🎯 High-quality lip synchronization
- 📺 Support for 480p and 720p resolution
- ⏱️ Generate videos up to 15 seconds long
## How to Use
1. Upload a reference image (photo of person(s) who will be speaking)
2. Upload an audio file
3. Enter a prompt describing the desired video
4. Click "Generate Video" to process
## Tips
- Use clear, front-facing photos for best results
- Ensure good audio quality without background noise
- Keep prompts clear and specific
- Supported formats: PNG, JPG, JPEG for images; MP3, WAV, OGG for audio
## Limitations
- Generation can take several minutes
- Maximum video duration is 15 seconds
- Best results with clear, well-lit reference images
- Audio should be clear and without background noise
## Credits
This demo uses the MeiGen-MultiTalk model created by MeiGen-AI.
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference |
cpheemagazine/c5b948f3-60de-4b69-b502-8665285207f3 | cpheemagazine | 2025-06-23T16:42:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/e4b9359c-dc8e-432d-8196-1aeac1a57eaf",
"base_model:adapter:samoline/e4b9359c-dc8e-432d-8196-1aeac1a57eaf",
"region:us"
] | null | 2025-06-23T16:32:05Z | ---
library_name: peft
base_model: samoline/e4b9359c-dc8e-432d-8196-1aeac1a57eaf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c5b948f3-60de-4b69-b502-8665285207f3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: samoline/e4b9359c-dc8e-432d-8196-1aeac1a57eaf
bf16: true
datasets:
- data_files:
- e89d30b0c32ab0f9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
eval_max_new_tokens: 128
evals_per_epoch: 4
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: cpheemagazine/c5b948f3-60de-4b69-b502-8665285207f3
learning_rate: 0.0002
load_in_4bit: false
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 388
micro_batch_size: 16
mlflow_experiment_name: /tmp/e89d30b0c32ab0f9_train_data.json
output_dir: llama3_lora_output
rl: null
sample_packing: true
save_steps: 0
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: true
trl: null
trust_remote_code: true
wandb_name: 3a71157a-f349-4323-bc8e-b254468fb49e
wandb_project: Gradients-On-Demand
wandb_run: llama3_h200_run
wandb_runid: 3a71157a-f349-4323-bc8e-b254468fb49e
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
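To reproduce a run from this config, a minimal sketch (assuming Axolotl and Accelerate are installed and the YAML above is saved as `config.yaml`; the exact CLI entry point can vary across Axolotl versions) is to shell out to the documented trainer:
```python
# Hypothetical launch helper; equivalent to running the command directly in a terminal.
import subprocess

subprocess.run(
    ["accelerate", "launch", "-m", "axolotl.cli.train", "config.yaml"],
    check=True,
)
```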
# c5b948f3-60de-4b69-b502-8665285207f3
This model is a fine-tuned version of [samoline/e4b9359c-dc8e-432d-8196-1aeac1a57eaf](https://huggingface.co/samoline/e4b9359c-dc8e-432d-8196-1aeac1a57eaf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 388
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
rasoultilburg/SocioCausaNet | rasoultilburg | 2025-06-23T16:42:12Z | 64 | 0 | transformers | [
"transformers",
"safetensors",
"joint_causal",
"feature-extraction",
"causal-extraction",
"relation-extraction",
"bert",
"pytorch",
"causality",
"token-classification",
"custom_code",
"en",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:gpl-2.0",
"region:us"
] | token-classification | 2025-06-08T18:56:21Z | ---
license: gpl-2.0
language: en
base_model: google-bert/bert-base-uncased
pipeline_tag: token-classification
tags:
- causal-extraction
- relation-extraction
- bert
- pytorch
- causality
library_name: transformers
---
# JointCausalModel for Causal Extraction
This repository contains JointCausalModel, a PyTorch-based model for joint causal extraction, optimized for use with the Hugging Face transformers library. The model is built upon `google-bert/bert-base-uncased` and is designed to identify and structure causal relationships within text.
**GitHub Repository**: [https://github.com/rasoulnorouzi/JointLearning](https://github.com/rasoulnorouzi/JointLearning/tree/main)
## Model Description
This model performs three tasks simultaneously:
1. **Sentence-level Causal Classification**: Determines whether a sentence contains a causal statement.
2. **Span Extraction**: Identifies the specific Cause, Effect, and combined Cause-Effect spans within the text using a BIO tagging scheme.
3. **Relation Extraction**: Establishes the relationships between the identified cause and effect spans.
> **Note**: This model uses a custom implementation and requires `trust_remote_code=True` when loading with AutoModel.
## How to Use
To get started, load the model and tokenizer from the Hugging Face Hub:
```python
from transformers import AutoModel, AutoTokenizer
repo_id = "rasoultilburg/SocioCausaNet"
model = AutoModel.from_pretrained(
repo_id,
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
repo_id
)
```
### Inference API
The primary method for inference is `model.predict()`, which processes a list of sentences and returns detailed causal information:
```python
# Example of a simple prediction call
results = model.predict(
sents=["The heavy rainfall led to severe flooding in the coastal regions."],
tokenizer=tokenizer,
rel_mode="auto",
rel_threshold=0.5,
cause_decision="cls+span"
)
```
### Understanding the predict() Parameters
Think of this model as a "Causality Detective." The parameters are the instructions you give the detective on how to investigate the text.
| Parameter | What it is & How it works | Analogy |
|-----------|---------------------------|---------|
| `sents` | The list of sentences you want the model to analyze. | The "case files" you give to the detective. |
| `rel_mode` | Strategy for finding relationships.<br/>- `'auto'`: A smart, efficient mode. For simple cases (one cause-one effect, one cause-multiple effects, multiple causes-one effect), it automatically connects them using rules. For complex cases (multiple causes and multiple effects), it uses a neural network to determine connections.<br/>- `'neural_only'`: Uses a neural network to validate every potential cause-effect connection, checking whether there is a relationship between each pair of entities. More thorough but slower. | The Detective's Strategy<br/>- `'auto'` is the Smart Detective who uses simple logic for obvious cases but calls in the expert (neural network) for complex situations.<br/>- `'neural_only'` is the Expert Detective who carefully analyzes every possible connection using advanced techniques (neural network) regardless of complexity. |
| `rel_threshold` | The confidence score needed to report a relationship (from 0.0 to 1.0).<br/>- High value (e.g., 0.8): Only reports relationships it's very sure about. Fewer, but more accurate results.<br/>- Low value (e.g., 0.3): Reports any potential link, even hunches. More results, but some may be incorrect. | The Detective's "Burden of Proof."<br/>- High value: Needs a lot of evidence before making an accusation.<br/>- Low value: Follows up on even the smallest lead. |
| `cause_decision` | The criteria for deciding if a sentence is causal.<br/>- `'cls_only'`: Decides based on overall sentence meaning.<br/>- `'span_only'`: Decides only if it finds distinct "cause" and "effect" phrases.<br/>- `'cls+span'`: Strictest mode. Sentence must have causal meaning AND contain distinct cause/effect phrases. | The Panel of Judges<br/>- `'cls_only'` is the "Big Picture" Judge.<br/>- `'span_only'` is the "Evidence-Focused" Judge.<br/>- `'cls+span'` requires both judges to agree. Most reliable option. |
## Complete Example
Here is a complete, runnable example demonstrating how to use the model and format the output:
```python
from transformers import AutoModel, AutoTokenizer
import json
# 1. Load the model and tokenizer
repo_id = "rasoultilburg/SocioCausaNet"
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# 2. Define input sentences
sentences = [
"Insomnia causes depression and a lack of concentration in children.",
"Due to the new regulations, the company's profits declined sharply.",
"The sun rises in the east." # Non-causal example
]
# 3. Get predictions from the model
results = model.predict(
sentences,
tokenizer=tokenizer,
rel_mode="auto",
rel_threshold=0.5,
cause_decision="cls+span"
)
# 4. Print the results in a readable format
print(json.dumps(results, indent=2, ensure_ascii=False))
```
### Example Output
The predict method returns a list of dictionaries, where each dictionary corresponds to an input sentence:
```json
[
{
"text": "Insomnia causes depression and a lack of concentration in children.",
"causal": true,
"relations": [
{
"cause": "Insomnia",
"effect": "depression",
"type": "Rel_CE"
},
{
"cause": "Insomnia",
"effect": "a lack of concentration in children",
"type": "Rel_CE"
}
]
},
{
"text": "Due to the new regulations, the company's profits declined sharply.",
"causal": true,
"relations": [
{
"cause": "the new regulations",
"effect": "the company's profits declined sharply",
"type": "Rel_CE"
}
]
},
{
"text": "The sun rises in the east.",
"causal": false,
"relations": [],
"spans": []
}
]
```
## Model Architecture
The JointCausalModel requires custom code, which is why `trust_remote_code=True` is necessary. The architecture consists of a BERT encoder followed by three specialized heads for the joint tasks.
The key files defining the model are:
- `modeling_joint_causal.py`: Contains the main JointCausalModel class which defines the model's architecture. It inherits from `transformers.PreTrainedModel` to ensure compatibility with the Hugging Face ecosystem.
- `configuration_joint_causal.py`: Defines the JointCausalConfig class, which stores the model's configuration and hyperparameters.
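Because the architecture lives in custom code, one way to inspect its configuration is through `AutoConfig` (a sketch; the stored attribute names are defined in `configuration_joint_causal.py` and may differ from what you expect):
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("rasoultilburg/SocioCausaNet", trust_remote_code=True)
print(type(config).__name__)  # expected: JointCausalConfig
print(config.to_dict())       # full set of stored hyperparameters
```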
## Citation
If you use this model in your work, please consider citing this repository.
```bibtex
@misc{jointcausalmodel2024,
title={JointCausalModel: Joint Learning for Causal Extraction},
author={Rasoul Norouzi},
year={2024},
howpublished={GitHub Repository},
url={https://github.com/rasoulnorouzi/JointLearning/tree/main}
}
```
For more details and source code, visit the [GitHub repository](https://github.com/rasoulnorouzi/JointLearning/tree/main) |
ButterflyCatGirl/ChatBotGP_Diploma | ButterflyCatGirl | 2025-06-23T16:41:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:llava-hf/llava-1.5-7b-hf",
"base_model:adapter:llava-hf/llava-1.5-7b-hf",
"license:other",
"region:us"
] | null | 2025-06-23T16:26:57Z | ---
library_name: peft
license: other
base_model: llava-hf/llava-1.5-7b-hf
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: vqa-llama
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vqa-llama
This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on the medical_vqa_train dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6681 | 4.4271 | 500 | 0.7946 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
duongve/Loras_Diffusion_model | duongve | 2025-06-23T16:39:53Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-30T04:03:27Z | ---
license: apache-2.0
---
|
wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fluffy_arctic_reindeer | wking669 | 2025-06-23T16:34:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am fluffy arctic reindeer",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-16T18:09:38Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fluffy_arctic_reindeer
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am fluffy arctic reindeer
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fluffy_arctic_reindeer
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="wking669/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fluffy_arctic_reindeer", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sswoo123/checkpoint-middle | sswoo123 | 2025-06-23T16:31:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T16:13:26Z | ---
library_name: transformers
model_name: checkpoint-middle
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for checkpoint-middle
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sswoo123/checkpoint-middle", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
moonshotai/Kimi-VL-A3B-Thinking-2506 | moonshotai | 2025-06-23T16:31:18Z | 1,305 | 94 | transformers | [
"transformers",
"safetensors",
"kimi_vl",
"feature-extraction",
"image-text-to-text",
"conversational",
"custom_code",
"arxiv:2504.07491",
"base_model:moonshotai/Kimi-VL-A3B-Instruct",
"base_model:finetune:moonshotai/Kimi-VL-A3B-Instruct",
"license:mit",
"region:us"
] | image-text-to-text | 2025-06-21T09:40:28Z | ---
base_model:
- moonshotai/Kimi-VL-A3B-Instruct
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
---
> [!Note]
> This is an improved version of [Kimi-VL-A3B-Thinking](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking). Please consider using this updated model instead of the previous version.
> [!Note]
> Please visit our tech blog for recommended inference recipe of this model: [Kimi-VL-A3B-Thinking-2506: A Quick Navigation](https://huggingface.co/blog/moonshotai/kimi-vl-a3b-thinking-2506)
<div align="center">
<img width="80%" src="figures/logo.png">
</div>
<div align="center">
<a href="https://arxiv.org/abs/2504.07491">
<b>📄 Tech Report</b>
</a> |
<a href="https://github.com/MoonshotAI/Kimi-VL">
<b>📄 Github</b>
</a> |
<a href="https://huggingface.co/spaces/moonshotai/Kimi-VL-A3B-Thinking">💬 <b>Chat Web</b></a>
</div>
## 1. Introduction
This is an updated version of [Kimi-VL-A3B-Thinking](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking), with the following improved abilities:
- **It Thinks Smarter while Consuming Fewer Tokens**: The 2506 version reaches better accuracy on multimodal reasoning benchmarks: 56.9 on MathVision (+20.1), 80.1 on MathVista (+8.4), 46.3 on MMMU-Pro (+3.3), 64.0 on MMMU (+2.1), while on average requiring 20% less thinking length.
- **It Sees Clearer with Thinking**: Unlike the previous version, which specializes in thinking tasks, the 2506 version also achieves the same or even better ability on general visual perception and understanding, e.g. MMBench-EN-v1.1 (84.4), MMStar (70.4), RealWorldQA (70.0), MMVet (78.4), surpassing or matching the abilities of our non-thinking model ([Kimi-VL-A3B-Instruct](https://huggingface.co/moonshotai/Kimi-VL-A3B-Instruct)).
- **It Extends to Video Scenarios**: The new 2506 version also improves on video reasoning and understanding benchmarks. It sets a new state of the art for open-source models on VideoMMMU (65.2), while also retaining good ability on general video understanding (71.9 on Video-MME, matching [Kimi-VL-A3B-Instruct](https://huggingface.co/moonshotai/Kimi-VL-A3B-Instruct)).
- **It Extends to Higher Resolution**: The new 2506 version supports 3.2 million total pixels in a single image, 4X compared to the previous version. This leads to non-trivial improvements on high-resolution perception and OS-agent grounding benchmarks: 83.2 on V* Benchmark (without extra tools), 52.8 on ScreenSpot-Pro, 52.5 on OSWorld-G (full set with refusal).
## 2. Performance
Comparison with efficient models and two previous versions of Kimi-VL (results of GPT-4o are shown for reference, in <i>italics</i>):
<div align="center">
| Benchmark (Metric) | GPT-4o | Qwen2.5-VL-7B | Gemma3-12B-IT | Kimi-VL-A3B-Instruct | Kimi-VL-A3B-Thinking | Kimi-VL-A3B-Thinking-2506 |
|----------------------------|--------|---------------|---------------|----------------------|----------------------|--------------------------|
| **General Multimodal** | | | | | | |
| MMBench-EN-v1.1 (Acc) | *83.1* | 83.2 | 74.6 | 82.9 | 76.0 | **84.4** |
| RealWorldQA (Acc) | *75.4* | 68.5 | 59.1 | 68.1 | 64.0 | **70.0** |
| OCRBench (Acc) | *815* | 864 | 702 | 864 | 864 | **869** |
| MMStar (Acc) | *64.7* | 63.0 | 56.1 | 61.7 | 64.2 | **70.4** |
| MMVet (Acc) | *69.1* | 67.1 | 64.9 | 66.7 | 69.5 | **78.1** |
| **Reasoning** | | | | | | |
| MMMU (val, Pass@1) | *69.1* | 58.6 | 59.6 | 57.0 | 61.7 | **64.0** |
| MMMU-Pro (Pass@1) | *51.7* | 38.1 | 32.1 | 36.0 | 43.2 | **46.3** |
| **Math** | | | | | | |
| MATH-Vision (Pass@1) | *30.4* | 25.0 | 32.1 | 21.7 | 36.8 | **56.9** |
| MathVista_MINI (Pass@1) | *63.8* | 68.0 | 56.1 | 68.6 | 71.7 | **80.1** |
| **Video** | | | | | | |
| VideoMMMU (Pass@1) | *61.2* | 47.4 | 57.0 | 52.1 | 55.5 | **65.2** |
| MMVU (Pass@1) | *67.4* | 50.1 | 57.0 | 52.7 | 53.0 | **57.5** |
| Video-MME (w/ sub.) | *77.2* | 71.6 | 62.1 | **72.7** | 66.0 | 71.9 |
| **Agent Grounding** | | | | | | |
| ScreenSpot-Pro (Acc) | *0.8* | 29.0 | — | 35.4 | — | **52.8** |
| ScreenSpot-V2 (Acc) | *18.1* | 84.2 | — | **92.8** | — | 91.4 |
| OSWorld-G (Acc) | - | *31.5* | — | 41.6 | — | **52.5** |
| **Long Document** | | | | | | |
| MMLongBench-DOC (Acc) | *42.8* | 29.6 | 21.3 | 35.1 | 32.5 | **42.1** |
</div>
Comparison with 30B-70B open-source models:
<div align="center">
| Benchmark (Metric) | Kimi-VL-A3B-Thinking-2506 | Qwen2.5-VL-32B | Qwen2.5-VL-72B | Gemma3-27B-IT |
|----------------------------|---------------------------|---------------|---------------|---------------|
| **General Multimodal** | | | | |
| MMBench-EN-v1.1 (Acc) | 84.4 | - | 88.3 | 78.9 |
| RealWorldQA (Acc) | 70.0 | - | 75.7 | 62.5 |
| OCRBench (Acc) | 869 | - | 885 | 753 |
| MMStar (Acc) | 70.4 | 69.5 | 70.8 | 63.1 |
| MMVet (Acc) | 78.1 | - | 74.0 | 71.0 |
| **Reasoning** | | | | |
| MMMU (val, Pass@1) | 64.0 | 70.0 | 70.2 | 64.9 |
| MMMU-Pro (Pass@1) | 46.3 | 49.5 | 51.1 | - |
| MATH-Vision (Pass@1) | 56.9 | 38.4 | 38.1 | 35.4 |
| MathVista\_MINI (Pass@1) | 80.1 | 74.7 | 74.8 | 59.8 |
| **Video** | | | | |
| VideoMMMU (Pass@1) | 65.2 | - | 60.2 | 61.8 |
| MMVU (Pass@1) | 57.5 | - | 62.9 | 61.3 |
| Video-MME (w/ sub.) | 71.9 | 70.5/77.9 | 73.3/79.1 | - |
| **Agent Grounding** | | | | |
| ScreenSpot-Pro (Acc) | 52.8 | 39.4 | 43.6 | - |
| ScreenSpot-V2 (Acc) | 91.4 | - | - | - |
| OSWorld-G (Acc) | 52.5 | 46.5 | - | - |
| **Long Document** | | | | |
| MMLongBench-DOC (Acc) | 42.1 | - | 38.8 | - |
</div>
## 3. Usage
### 3.1. Inference with VLLM (recommended)
As a long-decode model that can generate up to 32K tokens, we recommend using [vLLM](https://github.com/vllm-project/vllm/tree/main/vllm) for inference, which already supports the Kimi-VL series.
```shell
MAX_JOBS=4 pip install vllm==0.9.1 blobfile flash-attn --no-build-isolation
```
> [!Note]
> It is important to explicitly install flash-attn to avoid CUDA out-of-memory.
```python
from transformers import AutoProcessor
from vllm import LLM, SamplingParams
model_path = "moonshotai/Kimi-VL-A3B-Thinking-2506"
llm = LLM(
model_path,
trust_remote_code=True,
max_num_seqs=8,
max_model_len=131072,
limit_mm_per_prompt={"image": 256}
)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
sampling_params = SamplingParams(max_tokens=32768, temperature=0.8)
import requests
from PIL import Image
def extract_thinking_and_summary(text: str, bot: str = "◁think▷", eot: str = "◁/think▷") -> tuple[str, str]:
    # Split the generation into the thinking trace and the final summary.
    if bot in text and eot in text:
        return text[text.index(bot) + len(bot):text.index(eot)].strip(), text[text.index(eot) + len(eot):].strip()
    return "", text
OUTPUT_FORMAT = "--------Thinking--------\n{thinking}\n\n--------Summary--------\n{summary}"
url = "https://huggingface.co/spaces/moonshotai/Kimi-VL-A3B-Thinking/resolve/main/images/demo6.jpeg"
image = Image.open(requests.get(url,stream=True).raw)
messages = [
{"role": "user", "content": [{"type": "image", "image": ""}, {"type": "text", "text": "What kind of cat is this? Answer with one word."}]}
]
text = processor.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = llm.generate([{"prompt": text, "multi_modal_data": {"image": image}}], sampling_params=sampling_params)
generated_text = outputs[0].outputs[0].text
thinking, summary = extract_thinking_and_summary(generated_text)
print(OUTPUT_FORMAT.format(thinking=thinking, summary=summary))
```
### 3.2. Inference with 🤗 Hugging Face Transformers
Here we show how to use our model at the inference stage with the transformers library. It is recommended to use python=3.10, torch>=2.1.0, and transformers=4.48.2 as the development environment.
```python
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor
def extract_thinking_and_summary(text: str, bot: str = "◁think▷", eot: str = "◁/think▷") -> tuple[str, str]:
    # Split the generation into the thinking trace and the final summary.
    if bot in text and eot in text:
        return text[text.index(bot) + len(bot):text.index(eot)].strip(), text[text.index(eot) + len(eot):].strip()
    return "", text
OUTPUT_FORMAT = "--------Thinking--------\n{thinking}\n\n--------Summary--------\n{summary}"
url = "https://huggingface.co/spaces/moonshotai/Kimi-VL-A3B-Thinking/resolve/main/images/demo6.jpeg"
model_path = "moonshotai/Kimi-VL-A3B-Thinking-2506"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype="auto",
device_map="auto",
trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
image_paths = [url]
images = [Image.open(requests.get(path, stream=True).raw) for path in image_paths]
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": image_path} for image_path in image_paths
] + [{"type": "text", "text": "What kind of cat is this? Answer with one word."}],
},
]
text = processor.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
inputs = processor(images=images, text=text, return_tensors="pt", padding=True, truncation=True).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=32768, temperature=0.8)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
response = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
print(response)
```
## 4. Citation
```
@misc{kimiteam2025kimivltechnicalreport,
title={{Kimi-VL} Technical Report},
author={Kimi Team and Angang Du and Bohong Yin and Bowei Xing and Bowen Qu and Bowen Wang and Cheng Chen and Chenlin Zhang and Chenzhuang Du and Chu Wei and Congcong Wang and Dehao Zhang and Dikang Du and Dongliang Wang and Enming Yuan and Enzhe Lu and Fang Li and Flood Sung and Guangda Wei and Guokun Lai and Han Zhu and Hao Ding and Hao Hu and Hao Yang and Hao Zhang and Haoning Wu and Haotian Yao and Haoyu Lu and Heng Wang and Hongcheng Gao and Huabin Zheng and Jiaming Li and Jianlin Su and Jianzhou Wang and Jiaqi Deng and Jiezhong Qiu and Jin Xie and Jinhong Wang and Jingyuan Liu and Junjie Yan and Kun Ouyang and Liang Chen and Lin Sui and Longhui Yu and Mengfan Dong and Mengnan Dong and Nuo Xu and Pengyu Cheng and Qizheng Gu and Runjie Zhou and Shaowei Liu and Sihan Cao and Tao Yu and Tianhui Song and Tongtong Bai and Wei Song and Weiran He and Weixiao Huang and Weixin Xu and Xiaokun Yuan and Xingcheng Yao and Xingzhe Wu and Xinxing Zu and Xinyu Zhou and Xinyuan Wang and Y. Charles and Yan Zhong and Yang Li and Yangyang Hu and Yanru Chen and Yejie Wang and Yibo Liu and Yibo Miao and Yidao Qin and Yimin Chen and Yiping Bao and Yiqin Wang and Yongsheng Kang and Yuanxin Liu and Yulun Du and Yuxin Wu and Yuzhi Wang and Yuzi Yan and Zaida Zhou and Zhaowei Li and Zhejun Jiang and Zheng Zhang and Zhilin Yang and Zhiqi Huang and Zihao Huang and Zijia Zhao and Ziwei Chen},
year={2025},
eprint={2504.07491},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.07491},
}
``` |
johngreendr1/1d3eddd8-026c-4c45-93be-1501349f24f8 | johngreendr1 | 2025-06-23T16:31:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:databricks/dolly-v2-3b",
"base_model:adapter:databricks/dolly-v2-3b",
"region:us"
] | null | 2025-06-23T14:35:51Z | ---
base_model: databricks/dolly-v2-3b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
daniel-dona/sparql-model-era-lora-128-qwen3-0.6b | daniel-dona | 2025-06-23T16:26:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T16:25:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
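In the absence of an official snippet, here is a generic text-generation sketch (an assumption based only on the repo tags `transformers`, `qwen3`, `text-generation` and `conversational`; the prompt and generation settings are purely illustrative):

```python
from transformers import pipeline

# Hypothetical usage sketch; the tags suggest a standard causal-LM checkpoint with a chat template.
generator = pipeline(
    "text-generation",
    model="daniel-dona/sparql-model-era-lora-128-qwen3-0.6b",
)

messages = [
    {"role": "user", "content": "Write a SPARQL query that returns 10 distinct classes."}
]
output = generator(messages, max_new_tokens=128, return_full_text=False)
print(output[0]["generated_text"])
```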
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sergioalves/7e93c0c2-8f98-4c87-9f50-f0ab7b956f26 | sergioalves | 2025-06-23T16:25:47Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/e4b9359c-dc8e-432d-8196-1aeac1a57eaf",
"base_model:adapter:samoline/e4b9359c-dc8e-432d-8196-1aeac1a57eaf",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-23T16:17:52Z | ---
library_name: peft
base_model: samoline/e4b9359c-dc8e-432d-8196-1aeac1a57eaf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7e93c0c2-8f98-4c87-9f50-f0ab7b956f26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: samoline/e4b9359c-dc8e-432d-8196-1aeac1a57eaf
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- e89d30b0c32ab0f9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.9
group_by_length: false
hub_model_id: sergioalves/7e93c0c2-8f98-4c87-9f50-f0ab7b956f26
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-05
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/e89d30b0c32ab0f9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3a71157a-f349-4323-bc8e-b254468fb49e
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 3a71157a-f349-4323-bc8e-b254468fb49e
warmup_steps: 10
weight_decay: 0.05
xformers_attention: false
```
</details><br>
# 7e93c0c2-8f98-4c87-9f50-f0ab7b956f26
This model is a fine-tuned version of [samoline/e4b9359c-dc8e-432d-8196-1aeac1a57eaf](https://huggingface.co/samoline/e4b9359c-dc8e-432d-8196-1aeac1a57eaf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0008
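Since this repository holds a LoRA adapter rather than a full set of model weights, a minimal loading sketch might look like the following (an illustration only; it assumes the base model listed above is accessible and that bf16 inference is acceptable):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "samoline/e4b9359c-dc8e-432d-8196-1aeac1a57eaf"
adapter_id = "sergioalves/7e93c0c2-8f98-4c87-9f50-f0ab7b956f26"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```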
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9839 | 0.0002 | 1 | 1.1263 |
| 1.0477 | 0.0121 | 50 | 1.0093 |
| 0.8428 | 0.0243 | 100 | 1.0008 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
collinear-ai/math_reasoning_phi_c2 | collinear-ai | 2025-06-23T16:25:42Z | 14 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-10T21:23:11Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: sn_math_curator_on_ensemble_8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
This is an open-source fine-tuned reasoning adapter of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct), transformed into a math reasoning model using data curated from [collinear-ai/R1-Distill-SFT-Curated](https://huggingface.co/datasets/collinear-ai/R1-Distill-SFT-Curated).
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl version</summary>
axolotl version: `0.5.0`
<!-- ```yaml
strict: false
base_model: microsoft/Phi-3.5-mini-instruct
tokenizer_config: microsoft/Phi-3.5-mini-instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# Output configuration
hub_model_id: collinear-ai/sn_math_curator_on_ensemble_8
dataset_prepared_path: data/sn_math_curator_on_ensemble_8
output_dir: model/sn_math_curator_on_ensemble_8
# Format the dataset into the right instruction format.
chat_template: phi_3
datasets:
- path: collinear-ai/R1-Distill-SFT-numina-math-ensemble_8_train
split: train
type: chat_template
chat_template: phi_3
field_messages: train_conv
message_field_role: role
message_field_content: content
train_on_inputs: false #FALSE
val_set_size: 0.05
# Data packing
sequence_len: 4096
eval_sample_packing: false
sample_packing: false
pad_to_sequence_len: true
group_by_length: false
# Lora config
adapter: qlora
lora_model_dir:
load_in_8bit: false
load_in_4bit: true -->
<!-- lora_r: 128
lora_alpha: 64
lora_dropout: 0.2
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
lora_modules_to_save:
- embed_tokens
- lm_head
# Logging config
wandb_project: sn-curators-downstream
wandb_entity: nazneen
wandb_name: curator_math_sn_ensemble_8_phi
# Trainer config
gradient_accumulation_steps: 2
micro_batch_size: 10
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.000005
bfloat16: true
bf16: true
fp16:
tf32: false -->
<!-- gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 10
xformers_attention:
flash_attention: true
save_safetensors: true
warmup_steps: 50
evals_per_epoch: 3
eval_table_size: 3
eval_max_new_tokens: 2048
saves_per_epoch: 40
debug:
deepspeed:
weight_decay: 0.02
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "<|endoftext|>"
unk_token: "<unk>"
pad_token: "<|endoftext|>"
``` -->
</details><br>
## Intended uses & limitations
Math-Reasoning
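A minimal way to try the adapter is sketched below (an illustration only; it assumes this repository holds QLoRA adapter weights to be applied on top of the base model, and the prompt and generation settings are arbitrary):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3.5-mini-instruct"
adapter_id = "collinear-ai/math_reasoning_phi_c2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the math reasoning adapter

messages = [{"role": "user", "content": "What is the sum of the first 20 positive odd integers?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```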
## Training and evaluation data
Training data curated from [collinear-ai/R1-Distill-SFT-Curated](https://huggingface.co/datasets/collinear-ai/R1-Distill-SFT-Curated)
Evaluation data: [HuggingFaceH4/MATH-500](https://huggingface.co/datasets/HuggingFaceH4/MATH-500)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 160
- total_eval_batch_size: 80
- optimizer: paged AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 0.6646 |
| 0.3174 | 0.3335 | 1247 | 0.3329 |
| 0.307 | 0.6670 | 2494 | 0.3169 |
### Evaluation on Math500

### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.3 |
ibm-granite/granite-speech-3.3-8b | ibm-granite | 2025-06-23T16:24:08Z | 5,823 | 65 | transformers | [
"transformers",
"safetensors",
"granite_speech",
"automatic-speech-recognition",
"multilingual",
"arxiv:2505.08699",
"base_model:ibm-granite/granite-3.3-8b-instruct",
"base_model:finetune:ibm-granite/granite-3.3-8b-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-14T15:43:11Z | ---
license: apache-2.0
language:
- multilingual
base_model:
- ibm-granite/granite-3.3-8b-instruct
library_name: transformers
---
# Granite-speech-3.3-8b (revision 3.3.2)
**Model Summary:**
Granite-speech-3.3-8b is a compact and efficient speech-language model, specifically designed for automatic speech recognition (ASR) and automatic speech translation (AST). Granite-speech-3.3-8b uses a two-pass design, unlike integrated models that combine speech and language into a single pass. Initial calls to granite-speech-3.3-8b will transcribe audio files into text. To process the transcribed text using the underlying Granite language model, users must make a second call as each step must be explicitly initiated.
The model was trained on a collection of public corpora comprising diverse datasets for ASR and AST as well as synthetic datasets tailored to support the speech translation task. Granite-speech-3.3-8b was trained by modality aligning granite-3.3-8b-instruct (https://huggingface.co/ibm-granite/granite-3.3-8b-instruct) to speech on publicly available open source corpora containing audio inputs and text targets.
* Compared to revision 3.3.1, revision 3.3.2 supports multilingual speech inputs in English, French, German, Spanish and Portuguese and provides additional accuracy improvements for English ASR.
* Compared to the initial release, revision 3.3.2 is also trained on additional data and uses a deeper acoustic encoder for improved transcription accuracy.
**Evaluations:**
We evaluated granite-speech-3.3-8b revision 3.3.2 alongside other speech-language models in the less than 8b parameter range as well as dedicated ASR and AST systems on standard benchmarks. The evaluation spanned multiple public benchmarks, with particular emphasis on English ASR tasks while also including multilingual ASR and AST for X-En and En-X translations.
<br>

<br>

<br>

<br>

<br>

<br>
**Release Date**: June 19, 2025
**License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Supported Languages:**
English, French, German, Spanish, Portuguese
**Intended Use:**
The model is intended to be used in enterprise applications that involve processing of speech inputs. In particular, it is well-suited for speech-to-text in English, French, German, Spanish and Portuguese, for speech translation between those languages and English, and for English-to-Japanese and English-to-Mandarin translation. The model can also be used for tasks that involve text-only input since it calls the underlying granite-3.3-8b-instruct when the user specifies a prompt that does not contain audio.
## Generation:
The Granite Speech model is supported natively in `transformers` from the `main` branch. Below is a simple example of how to use the `granite-speech-3.3-8b` revision 3.3.2 model.
### Usage with `transformers`
First, make sure to install a recent version of transformers:
```shell
pip install transformers>=4.52.4 torchaudio peft soundfile
```
Then run the code:
```python
import torch
import torchaudio
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq
from huggingface_hub import hf_hub_download
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "ibm-granite/granite-speech-3.3-8b"
speech_granite_processor = AutoProcessor.from_pretrained(
model_name)
tokenizer = speech_granite_processor.tokenizer
speech_granite = AutoModelForSpeechSeq2Seq.from_pretrained(
model_name).to(device)
# prepare speech and text prompt, using the appropriate prompt template
audio_path = hf_hub_download(repo_id=model_name, filename='10226_10111_000000.wav')
wav, sr = torchaudio.load(audio_path, normalize=True)
assert wav.shape[0] == 1 and sr == 16000 # mono, 16khz
# create text prompt
chat = [
{
"role": "system",
"content": "Knowledge Cutoff Date: April 2024.\nToday's Date: April 9, 2025.\nYou are Granite, developed by IBM. You are a helpful AI assistant",
},
{
"role": "user",
"content": "<|audio|>can you transcribe the speech into a written format?",
}
]
text = tokenizer.apply_chat_template(
chat, tokenize=False, add_generation_prompt=True
)
# compute audio embeddings
model_inputs = speech_granite_processor(
text,
wav,
device=device, # Computation device; returned tensors are put on CPU
return_tensors="pt",
).to(device)
model_outputs = speech_granite.generate(
**model_inputs,
max_new_tokens=200,
num_beams=4,
do_sample=False,
min_length=1,
top_p=1.0,
repetition_penalty=1.0,
length_penalty=1.0,
temperature=1.0,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
)
# Transformers includes the input IDs in the response.
num_input_tokens = model_inputs["input_ids"].shape[-1]
new_tokens = torch.unsqueeze(model_outputs[0, num_input_tokens:], dim=0)
output_text = tokenizer.batch_decode(
new_tokens, add_special_tokens=False, skip_special_tokens=True
)
print(f"STT output = {output_text[0].upper()}")
```
### Usage with `vLLM`
First, make sure to install the latest version of vLLM:
```shell
pip install vllm --upgrade
```
* Code for offline mode:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
from vllm.assets.audio import AudioAsset
from vllm.lora.request import LoRARequest
model_id = "ibm-granite/granite-speech-3.3-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
def get_prompt(question: str, has_audio: bool):
"""Build the input prompt to send to vLLM."""
if has_audio:
question = f"<|audio|>{question}"
chat = [
{
"role": "user",
"content": question
}
]
return tokenizer.apply_chat_template(chat, tokenize=False)
# NOTE - you may see warnings about multimodal lora layers being ignored;
# this is okay as the lora in this model is only applied to the LLM.
model = LLM(
model=model_id,
enable_lora=True,
max_lora_rank=64,
max_model_len=2048, # This may be needed for lower resource devices.
limit_mm_per_prompt={"audio": 1},
)
### 1. Example with Audio [make sure to use the lora]
question = "can you transcribe the speech into a written format?"
prompt_with_audio = get_prompt(
question=question,
has_audio=True,
)
audio = AudioAsset("mary_had_lamb").audio_and_sample_rate
inputs = {
"prompt": prompt_with_audio,
"multi_modal_data": {
"audio": audio,
}
}
outputs = model.generate(
inputs,
sampling_params=SamplingParams(
temperature=0.2,
max_tokens=64,
),
lora_request=[LoRARequest("speech", 1, model_id)]
)
print(f"Audio Example - Question: {question}")
print(f"Generated text: {outputs[0].outputs[0].text}")
### 2. Example without Audio [do NOT use the lora]
question = "What is the capital of Brazil?"
prompt = get_prompt(
question=question,
has_audio=False,
)
outputs = model.generate(
{"prompt": prompt},
sampling_params=SamplingParams(
temperature=0.2,
max_tokens=12,
),
)
print(f"Text Only Example - Question: {question}")
print(f"Generated text: {outputs[0].outputs[0].text}")
```
* Code for online mode:
```python
"""
Launch the vLLM server with the following command:
vllm serve ibm-granite/granite-speech-3.3-8b \
--api-key token-abc123 \
--max-model-len 2048 \
--enable-lora \
--lora-modules speech=ibm-granite/granite-speech-3.3-8b \
--max-lora-rank 64
"""
import base64
import requests
from openai import OpenAI
from vllm.assets.audio import AudioAsset
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "token-abc123"
openai_api_base = "http://localhost:8000/v1"
client = OpenAI(
# defaults to os.environ.get("OPENAI_API_KEY")
api_key=openai_api_key,
base_url=openai_api_base,
)
base_model_name = "ibm-granite/granite-speech-3.3-8b"
lora_model_name = "speech"
# Any format supported by librosa is supported
audio_url = AudioAsset("mary_had_lamb").url
# Use base64 encoded audio in the payload
def encode_audio_base64_from_url(audio_url: str) -> str:
"""Encode an audio retrieved from a remote url to base64 format."""
with requests.get(audio_url) as response:
response.raise_for_status()
result = base64.b64encode(response.content).decode('utf-8')
return result
audio_base64 = encode_audio_base64_from_url(audio_url=audio_url)
### 1. Example with Audio
# NOTE: we pass the name of the lora model (`speech`) here because we have audio.
question = "can you transcribe the speech into a written format?"
chat_completion_with_audio = client.chat.completions.create(
messages=[{
"role": "user",
"content": [
{
"type": "text",
"text": question
},
{
"type": "audio_url",
"audio_url": {
# Any format supported by librosa is supported
"url": f"data:audio/ogg;base64,{audio_base64}"
},
},
],
}],
temperature=0.2,
max_tokens=64,
model=lora_model_name,
)
print(f"Audio Example - Question: {question}")
print(f"Generated text: {chat_completion_with_audio.choices[0].message.content}")
### 2. Example without Audio
# NOTE: we pass the name of the base model here because we do not have audio.
question = "What is the capital of Brazil?"
chat_completion_with_audio = client.chat.completions.create(
messages=[{
"role": "user",
"content": [
{
"type": "text",
"text": question
},
],
}],
temperature=0.2,
max_tokens=12,
model=base_model_name,
)
print(f"Text Only Example - Question: {question}")
print(f"Generated text: {chat_completion_with_audio.choices[0].message.content}")
```
**Model Architecture:**
The architecture of granite-speech-3.3-8b revision 3.3.2 consists of the following components:
(1) Speech encoder: 16 conformer blocks trained with Connectionist Temporal Classification (CTC) on character-level targets on the subset containing
only ASR corpora (see configuration below). In addition, our CTC encoder uses block-attention with 4-second audio blocks and self-conditioned CTC
from the middle layer.
| Configuration parameter | Value |
|-----------------|----------------------|
| Input dimension | 160 (80 logmels x 2) |
| Nb. of layers | 16 |
| Hidden dimension | 1024 |
| Nb. of attention heads | 8 |
| Attention head size | 128 |
| Convolution kernel size | 15 |
| Output dimension | 256 |
(2) Speech projector and temporal downsampler (speech-text modality adapter): we use a 2-layer window query transformer (q-former) operating on
blocks of 15 1024-dimensional acoustic embeddings coming out of the last conformer block of the speech encoder that get downsampled by a factor of 5
using 3 trainable queries per block and per layer. The total temporal downsampling factor is 10 (2x from the encoder and 5x from the projector)
resulting in a 10Hz acoustic embeddings rate for the LLM. The encoder, projector and LoRA adapters were fine-tuned/trained jointly on all the
corpora mentioned under **Training Data**.
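As a quick sanity check on these rates (a rough back-of-the-envelope calculation; the 100 Hz logmel frame rate is an assumption based on the common 10 ms hop and is not stated explicitly in this card):

```python
# Bookkeeping for the temporal downsampling described above.
logmel_rate_hz = 100        # assumption: 80-dim logmel features every 10 ms
encoder_factor = 2          # frame stacking at the encoder input (80 logmels x 2 -> 160 dims)
projector_factor = 15 / 3   # 15 acoustic embeddings per block -> 3 trainable queries per block
total_downsampling = encoder_factor * projector_factor   # 10.0
llm_rate_hz = logmel_rate_hz / total_downsampling        # 10.0 Hz seen by the LLM
print(total_downsampling, llm_rate_hz)
```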
(3) Large language model: granite-3.3-8b-instruct with 128k context length (https://huggingface.co/ibm-granite/granite-3.3-8b-instruct).
(4) LoRA adapters: rank=64 applied to the query, value projection matrices
**Training Data:**
Overall, our training data is largely comprised of two key sources: (1) publicly available datasets (2) Synthetic data created from publicly
available datasets specifically targeting the speech translation task. A detailed description of the training datasets can be found in the table
below:
| Name | Task | Nb. hours | Source |
|-----------|--------------|----------------|--------------|
| CommonVoice-17 En,De,Es,Fr,Pt | ASR | 5600 | https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0 |
| MLS En,De,Es,Fr,Pt | ASR | 48000 | https://huggingface.co/datasets/facebook/multilingual_librispeech |
| Librispeech English | ASR | 1000 | https://huggingface.co/datasets/openslr/librispeech_asr |
| VoxPopuli En,De,Fr,Es | ASR | 1100 | https://huggingface.co/datasets/facebook/voxpopuli |
| AMI English | ASR | 100 | https://huggingface.co/datasets/edinburghcstr/ami |
| YODAS English | ASR | 10000 | https://huggingface.co/datasets/espnet/yodas |
| Earnings-22 English | ASR | 120 | https://huggingface.co/datasets/distil-whisper/earnings22 |
| Switchboard English | ASR | 260 | https://catalog.ldc.upenn.edu/LDC97S62 |
| CallHome English | ASR | 18 | https://catalog.ldc.upenn.edu/LDC97T14 |
| Fisher English | ASR | 2000 | https://catalog.ldc.upenn.edu/LDC2004S13 |
| Voicemail part I English | ASR | 40 | https://catalog.ldc.upenn.edu/LDC98S77 |
| Voicemail part II English | ASR | 40 | https://catalog.ldc.upenn.edu/LDC2002S35 |
| CommonVoice-17 De,Es,Fr,Pt->En | AST | 3000 | Translations with Granite-3 and Phi-4 |
| CommonVoice-17 En->De,Es,Fr,It,Ja,Pt,Zh | AST | 18000 | Translations with Phi-4 and MADLAD |
**Infrastructure:**
We train Granite Speech using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable
and efficient infrastructure for training our models over thousands of GPUs. The training of this particular model was completed in 13 days on 32
H100 GPUs.
**Ethical Considerations and Limitations:**
The use of Large Speech and Language Models can trigger certain risks and ethical considerations. Although our alignment processes include safety considerations, the model may in some cases produce inaccurate, biased, offensive or unwanted responses to user prompts. Additionally, it remains uncertain whether smaller models are more susceptible to hallucination during generation because of their reduced size, which could limit their ability to produce coherent and contextually accurate responses. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain.
IBM recommends using this model for automatic speech recognition and translation tasks. The model's modular design improves safety by limiting how audio inputs can influence the system. If an unfamiliar or malformed prompt is received, the model simply echoes it with its transcription. This minimizes the risk of adversarial inputs, unlike integrated models that directly interpret audio and may be more exposed to such attacks. Note that more general speech tasks may pose higher inherent risks of triggering unwanted outputs.
To enhance safety, we recommend using granite-speech-3.3-8b alongside Granite Guardian. Granite Guardian is a fine-tuned instruct model designed to detect and flag risks in prompts and responses across key dimensions outlined in the IBM AI Risk Atlas.
**Resources**
- 📄 Read the full technical report: https://arxiv.org/abs/2505.08699 (covers initial release only)
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 🚀 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources |
LucileFavero/aaec_ll_P | LucileFavero | 2025-06-23T16:23:44Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T16:22:35Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LucileFavero
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Matellem-Qwen3-4b-xtractor-v0.1-GGUF | mradermacher | 2025-06-23T16:23:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Quit2003/Matellem-Qwen3-4b-xtractor-v0.1",
"base_model:quantized:Quit2003/Matellem-Qwen3-4b-xtractor-v0.1",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T15:51:30Z | ---
base_model: Quit2003/Matellem-Qwen3-4b-xtractor-v0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Quit2003/Matellem-Qwen3-4b-xtractor-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
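As one concrete option, below is a minimal sketch using the `llama-cpp-python` bindings (the quant filename is just one of the files from the table below, and the prompt and settings are illustrative):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumes the Q4_K_M file from the table below has been downloaded locally.
llm = Llama(
    model_path="Matellem-Qwen3-4b-xtractor-v0.1.Q4_K_M.gguf",
    n_ctx=4096,   # context length; adjust to your hardware
)
result = llm("Briefly introduce yourself.", max_tokens=64)
print(result["choices"][0]["text"])
```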
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Matellem-Qwen3-4b-xtractor-v0.1-GGUF/resolve/main/Matellem-Qwen3-4b-xtractor-v0.1.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Matellem-Qwen3-4b-xtractor-v0.1-GGUF/resolve/main/Matellem-Qwen3-4b-xtractor-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Matellem-Qwen3-4b-xtractor-v0.1-GGUF/resolve/main/Matellem-Qwen3-4b-xtractor-v0.1.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Matellem-Qwen3-4b-xtractor-v0.1-GGUF/resolve/main/Matellem-Qwen3-4b-xtractor-v0.1.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Matellem-Qwen3-4b-xtractor-v0.1-GGUF/resolve/main/Matellem-Qwen3-4b-xtractor-v0.1.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Matellem-Qwen3-4b-xtractor-v0.1-GGUF/resolve/main/Matellem-Qwen3-4b-xtractor-v0.1.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Matellem-Qwen3-4b-xtractor-v0.1-GGUF/resolve/main/Matellem-Qwen3-4b-xtractor-v0.1.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Matellem-Qwen3-4b-xtractor-v0.1-GGUF/resolve/main/Matellem-Qwen3-4b-xtractor-v0.1.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Matellem-Qwen3-4b-xtractor-v0.1-GGUF/resolve/main/Matellem-Qwen3-4b-xtractor-v0.1.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Matellem-Qwen3-4b-xtractor-v0.1-GGUF/resolve/main/Matellem-Qwen3-4b-xtractor-v0.1.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Matellem-Qwen3-4b-xtractor-v0.1-GGUF/resolve/main/Matellem-Qwen3-4b-xtractor-v0.1.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Matellem-Qwen3-4b-xtractor-v0.1-GGUF/resolve/main/Matellem-Qwen3-4b-xtractor-v0.1.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Aldo789/738afc95-6a02-44dd-b620-fe4a8b4371eb | Aldo789 | 2025-06-23T16:23:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-23T15:11:07Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ibm-granite/granite-speech-3.3-2b | ibm-granite | 2025-06-23T16:22:30Z | 13,161 | 12 | transformers | [
"transformers",
"safetensors",
"granite_speech",
"automatic-speech-recognition",
"multilingual",
"arxiv:2505.08699",
"base_model:ibm-granite/granite-3.3-2b-instruct",
"base_model:finetune:ibm-granite/granite-3.3-2b-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-28T15:25:29Z | ---
license: apache-2.0
language:
- multilingual
base_model:
- ibm-granite/granite-3.3-2b-instruct
library_name: transformers
---
# Granite-speech-3.3-2b (revision 3.3.2)
**Model Summary:**
Granite-speech-3.3-2b is a compact and efficient speech-language model, specifically designed for automatic speech recognition (ASR) and automatic speech translation (AST). Granite-speech-3.3-2b uses a two-pass design, unlike integrated models that combine speech and language into a single pass. Initial calls to granite-speech-3.3-2b will transcribe audio files into text. To process the transcribed text using the underlying Granite language model, users must make a second call as each step must be explicitly initiated.
The model was trained on a collection of public corpora comprising diverse datasets for ASR and AST as well as synthetic datasets tailored to support the speech translation task. Granite-speech-3.3-2b was trained by modality aligning granite-3.3-2b-instruct (https://huggingface.co/ibm-granite/granite-3.3-2b-instruct) to speech on publicly available open source corpora containing audio inputs and text targets. Compared to the initial release, revision 3.3.2
* supports multilingual speech inputs in English, French, German, Spanish and Portuguese,
* provides transcription accuracy improvements for English ASR by using a deeper acoustic encoder and additional training data.
**Evaluations:**
We evaluated granite-speech-3.3-2b revision 3.3.2 alongside granite-speech-3.3-8b (https://huggingface.co/ibm-granite/granite-speech-3.3-8b) and other speech-language models in the less than 8b parameter range as well as dedicated ASR and AST systems on standard benchmarks. The evaluation spanned multiple public benchmarks, with particular emphasis on English ASR tasks while also including multilingual ASR and AST for X-En and En-X translations.
<br>

<br>

<br>

<br>

<br>

<br>
**Release Date**: June 19, 2025
**License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Supported Languages:**
English, French, German, Spanish, Portuguese
**Intended Use:**
The model is intended to be used in enterprise applications that involve processing of speech inputs. In particular, it is well-suited for speech-to-text in English, French, German, Spanish and Portuguese, for speech translation between those languages and English, and for English-to-Japanese and English-to-Mandarin translation. The model can also be used for tasks that involve text-only input since it calls the underlying granite-3.3-2b-instruct when the user specifies a prompt that does not contain audio.
## Generation:
The Granite Speech model is supported natively in `transformers` from the `main` branch. Below is a simple example of how to use the `granite-speech-3.3-2b` revision 3.3.2 model.
### Usage with `transformers`
First, make sure to install a recent version of transformers:
```shell
pip install transformers>=4.52.4 torchaudio peft soundfile
```
Then run the code:
```python
import torch
import torchaudio
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq
from huggingface_hub import hf_hub_download
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "ibm-granite/granite-speech-3.3-2b"
speech_granite_processor = AutoProcessor.from_pretrained(
model_name)
tokenizer = speech_granite_processor.tokenizer
speech_granite = AutoModelForSpeechSeq2Seq.from_pretrained(
model_name).to(device)
# prepare speech and text prompt, using the appropriate prompt template
audio_path = hf_hub_download(repo_id=model_name, filename='10226_10111_000000.wav')
wav, sr = torchaudio.load(audio_path, normalize=True)
assert wav.shape[0] == 1 and sr == 16000 # mono, 16khz
# create text prompt
chat = [
{
"role": "system",
"content": "Knowledge Cutoff Date: April 2024.\nToday's Date: April 9, 2025.\nYou are Granite, developed by IBM. You are a helpful AI assistant",
},
{
"role": "user",
"content": "<|audio|>can you transcribe the speech into a written format?",
}
]
text = tokenizer.apply_chat_template(
chat, tokenize=False, add_generation_prompt=True
)
# compute audio embeddings
model_inputs = speech_granite_processor(
text,
wav,
device=device, # Computation device; returned tensors are put on CPU
return_tensors="pt",
).to(device)
model_outputs = speech_granite.generate(
**model_inputs,
max_new_tokens=200,
num_beams=4,
do_sample=False,
min_length=1,
top_p=1.0,
repetition_penalty=1.0,
length_penalty=1.0,
temperature=1.0,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
)
# Transformers includes the input IDs in the response.
num_input_tokens = model_inputs["input_ids"].shape[-1]
new_tokens = torch.unsqueeze(model_outputs[0, num_input_tokens:], dim=0)
output_text = tokenizer.batch_decode(
new_tokens, add_special_tokens=False, skip_special_tokens=True
)
print(f"STT output = {output_text[0].upper()}")
```
### Usage with `vLLM`
First, make sure to install the latest version of vLLM:
```shell
pip install vllm --upgrade
```
* Code for offline mode:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
from vllm.assets.audio import AudioAsset
from vllm.lora.request import LoRARequest
model_id = "ibm-granite/granite-speech-3.3-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
def get_prompt(question: str, has_audio: bool):
"""Build the input prompt to send to vLLM."""
if has_audio:
question = f"<|audio|>{question}"
chat = [
{
"role": "user",
"content": question
}
]
return tokenizer.apply_chat_template(chat, tokenize=False)
# NOTE - you may see warnings about multimodal lora layers being ignored;
# this is okay as the lora in this model is only applied to the LLM.
model = LLM(
model=model_id,
enable_lora=True,
max_lora_rank=64,
max_model_len=2048, # This may be needed for lower resource devices.
limit_mm_per_prompt={"audio": 1},
)
### 1. Example with Audio [make sure to use the lora]
question = "can you transcribe the speech into a written format?"
prompt_with_audio = get_prompt(
question=question,
has_audio=True,
)
audio = AudioAsset("mary_had_lamb").audio_and_sample_rate
inputs = {
"prompt": prompt_with_audio,
"multi_modal_data": {
"audio": audio,
}
}
outputs = model.generate(
inputs,
sampling_params=SamplingParams(
temperature=0.2,
max_tokens=64,
),
lora_request=[LoRARequest("speech", 1, model_id)]
)
print(f"Audio Example - Question: {question}")
print(f"Generated text: {outputs[0].outputs[0].text}")
### 2. Example without Audio [do NOT use the lora]
question = "What is the capital of Brazil?"
prompt = get_prompt(
question=question,
has_audio=False,
)
outputs = model.generate(
{"prompt": prompt},
sampling_params=SamplingParams(
temperature=0.2,
max_tokens=12,
),
)
print(f"Text Only Example - Question: {question}")
print(f"Generated text: {outputs[0].outputs[0].text}")
```
* Code for online mode:
```python
"""
Launch the vLLM server with the following command:
vllm serve ibm-granite/granite-speech-3.3-2b \
--api-key token-abc123 \
--max-model-len 2048 \
--enable-lora \
--lora-modules speech=ibm-granite/granite-speech-3.3-2b \
--max-lora-rank 64
"""
import base64
import requests
from openai import OpenAI
from vllm.assets.audio import AudioAsset
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "token-abc123"
openai_api_base = "http://localhost:8000/v1"
client = OpenAI(
# defaults to os.environ.get("OPENAI_API_KEY")
api_key=openai_api_key,
base_url=openai_api_base,
)
base_model_name = "ibm-granite/granite-speech-3.3-2b"
lora_model_name = "speech"
# Any format supported by librosa is supported
audio_url = AudioAsset("mary_had_lamb").url
# Use base64 encoded audio in the payload
def encode_audio_base64_from_url(audio_url: str) -> str:
"""Encode an audio retrieved from a remote url to base64 format."""
with requests.get(audio_url) as response:
response.raise_for_status()
result = base64.b64encode(response.content).decode('utf-8')
return result
audio_base64 = encode_audio_base64_from_url(audio_url=audio_url)
### 1. Example with Audio
# NOTE: we pass the name of the lora model (`speech`) here because we have audio.
question = "can you transcribe the speech into a written format?"
chat_completion_with_audio = client.chat.completions.create(
messages=[{
"role": "user",
"content": [
{
"type": "text",
"text": question
},
{
"type": "audio_url",
"audio_url": {
# Any format supported by librosa is supported
"url": f"data:audio/ogg;base64,{audio_base64}"
},
},
],
}],
temperature=0.2,
max_tokens=64,
model=lora_model_name,
)
print(f"Audio Example - Question: {question}")
print(f"Generated text: {chat_completion_with_audio.choices[0].message.content}")
### 2. Example without Audio
# NOTE: we pass the name of the base model here because we do not have audio.
question = "What is the capital of Brazil?"
chat_completion_with_audio = client.chat.completions.create(
messages=[{
"role": "user",
"content": [
{
"type": "text",
"text": question
},
],
}],
temperature=0.2,
max_tokens=12,
model=base_model_name,
)
print(f"Text Only Example - Question: {question}")
print(f"Generated text: {chat_completion_with_audio.choices[0].message.content}")
```
**Model Architecture:**
The architecture of granite-speech-3.3-2b revision 3.3.2 consists of the following components:
(1) Speech encoder: 16 conformer blocks trained with Connectionist Temporal Classification (CTC) on character-level targets on the subset containing
only ASR corpora (see configuration below). In addition, our CTC encoder uses block-attention with 4-second audio blocks and self-conditioned CTC
from the middle layer.
| Configuration parameter | Value |
|-----------------|----------------------|
| Input dimension | 160 (80 logmels x 2) |
| Nb. of layers | 16 |
| Hidden dimension | 1024 |
| Nb. of attention heads | 8 |
| Attention head size | 128 |
| Convolution kernel size | 15 |
| Output dimension | 256 |
(2) Speech projector and temporal downsampler (speech-text modality adapter): we use a 2-layer window query transformer (q-former) operating on
blocks of 15 1024-dimensional acoustic embeddings coming out of the last conformer block of the speech encoder that get downsampled by a factor of 5
using 3 trainable queries per block and per layer. The total temporal downsampling factor is 10 (2x from the encoder and 5x from the projector)
resulting in a 10Hz acoustic embeddings rate for the LLM. The encoder, projector and LoRA adapters were fine-tuned/trained jointly on all the
corpora mentioned under **Training Data**.
(3) Large language model: granite-3.3-2b-instruct with 128k context length (https://huggingface.co/ibm-granite/granite-3.3-2b-instruct).
(4) LoRA adapters: rank=64 applied to the query, value projection matrices
**Training Data:**
Overall, our training data is largely comprised of two key sources: (1) publicly available datasets (2) Synthetic data created from publicly
available datasets specifically targeting the speech translation task. A detailed description of the training datasets can be found in the table
below:
| Name | Task | Nb. hours | Source |
|-----------|--------------|----------------|--------------|
| CommonVoice-17 En,De,Es,Fr,Pt | ASR | 5600 | https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0 |
| MLS En,De,Es,Fr,Pt | ASR | 48000 | https://huggingface.co/datasets/facebook/multilingual_librispeech |
| Librispeech English | ASR | 1000 | https://huggingface.co/datasets/openslr/librispeech_asr |
| VoxPopuli En,De,Fr,Es | ASR | 1100 | https://huggingface.co/datasets/facebook/voxpopuli |
| AMI English | ASR | 100 | https://huggingface.co/datasets/edinburghcstr/ami |
| YODAS English | ASR | 10000 | https://huggingface.co/datasets/espnet/yodas |
| Earnings-22 English | ASR | 120 | https://huggingface.co/datasets/distil-whisper/earnings22 |
| Switchboard English | ASR | 260 | https://catalog.ldc.upenn.edu/LDC97S62 |
| CallHome English | ASR | 18 | https://catalog.ldc.upenn.edu/LDC97T14 |
| Fisher English | ASR | 2000 | https://catalog.ldc.upenn.edu/LDC2004S13 |
| Voicemail part I English | ASR | 40 | https://catalog.ldc.upenn.edu/LDC98S77 |
| Voicemail part II English | ASR | 40 | https://catalog.ldc.upenn.edu/LDC2002S35 |
| CommonVoice-17 De,Es,Fr,Pt->En | AST | 3000 | Translations with Granite-3 and Phi-4 |
| CommonVoice-17 En->De,Es,Fr,It,Ja,Pt,Zh | AST | 18000 | Translations with Phi-4 and MADLAD |
**Infrastructure:**
We train Granite Speech using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable
and efficient infrastructure for training our models over thousands of GPUs. The training of this particular model was completed in 13 days on 32
H100 GPUs.
**Ethical Considerations and Limitations:**
The use of Large Speech and Language Models can trigger certain risks and ethical considerations. Although our alignment processes include safety considerations, the model may in some cases produce inaccurate, biased, offensive or unwanted responses to user prompts. Additionally, it remains uncertain whether smaller models are more susceptible to hallucination during generation because of their reduced size, which could limit their ability to produce coherent and contextually accurate responses. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain.
IBM recommends using this model for automatic speech recognition and translation tasks. The model's modular design improves safety by limiting how audio inputs can influence the system. If an unfamiliar or malformed prompt is received, the model simply echoes it with its transcription. This minimizes the risk of adversarial inputs, unlike integrated models that directly interpret audio and may be more exposed to such attacks. Note that more general speech tasks may pose higher inherent risks of triggering unwanted outputs.
To enhance safety, we recommend using granite-speech-3.3-2b alongside Granite Guardian. Granite Guardian is a fine-tuned instruct model designed to detect and flag risks in prompts and responses across key dimensions outlined in the IBM AI Risk Atlas.
**Resources**
- 📄 Read the full technical report: https://arxiv.org/abs/2505.08699 (covers initial release only)
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 🚀 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources |
Bearrr310/grpo_sft_1.5B_unsloth_0623 | Bearrr310 | 2025-06-23T16:22:26Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"grpo",
"dataset:grpo-sft-1.5B-reward-0623",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T12:28:44Z | ---
base_model: unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
datasets: grpo-sft-1.5B-reward-0623
library_name: transformers
model_name: grpo_sft_1.5B_unsloth_0623
tags:
- generated_from_trainer
- unsloth
- trl
- grpo
licence: license
---
# Model Card for grpo_sft_1.5B_unsloth_0623
This model is a fine-tuned version of [unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit) on the [grpo-sft-1.5B-reward-0623](https://huggingface.co/datasets/grpo-sft-1.5B-reward-0623) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Bearrr310/grpo_sft_1.5B_unsloth_0623", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
New-Clip-beckli-com-ananya-18-Viral-videos/FULL.VIDEO.LINK.beckli.com.ananya.Viral.Video.Tutorial.Official | New-Clip-beckli-com-ananya-18-Viral-videos | 2025-06-23T16:17:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T16:16:39Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
visolex/bartpho-spam-classification | visolex | 2025-06-23T16:16:28Z | 0 | 0 | null | [
"safetensors",
"mbart",
"spam-detection",
"vietnamese",
"bartpho",
"text-classification",
"vi",
"dataset:visolex/ViSpamReviews",
"base_model:vinai/bartpho-syllable",
"base_model:finetune:vinai/bartpho-syllable",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2025-06-23T12:53:26Z | ---
language: vi
tags:
- spam-detection
- vietnamese
- bartpho
license: apache-2.0
datasets:
- visolex/ViSpamReviews
metrics:
- accuracy
- f1
model-index:
- name: bartpho-spam-classification
results:
- task:
type: text-classification
name: Spam Detection (Multi-Class)
dataset:
name: ViSpamReviews
type: custom
metrics:
- name: Accuracy
type: accuracy
value: <INSERT_ACCURACY>
- name: F1 Score
type: f1
value: <INSERT_F1_SCORE>
base_model:
- vinai/bartpho-syllable
pipeline_tag: text-classification
---
# BARTPho-Spam-MultiClass
Fine-tuned from [`vinai/bartpho-syllable`](https://huggingface.co/vinai/bartpho-syllable) on **ViSpamReviews** (multi-class).
* **Task**: 4-way classification
* **Dataset**: [ViSpamReviews](https://huggingface.co/datasets/visolex/ViSpamReviews)
* **Hyperparameters**
* Batch size: 32
* LR: 3e-5
* Epochs: 100
* Max seq len: 256
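The sketch below shows how these hyperparameters could be wired into a standard 🤗 `Trainer` run; it is an illustrative reconstruction rather than the original training script, and the dataset preprocessing is only hinted at in comments.

```python
# Illustrative reconstruction of the fine-tuning setup above; not the original script.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")
model = AutoModelForSequenceClassification.from_pretrained(
    "vinai/bartpho-syllable", num_labels=4  # NO-SPAM, SPAM-1, SPAM-2, SPAM-3
)

args = TrainingArguments(
    output_dir="bartpho-spam-classification",
    per_device_train_batch_size=32,  # Batch size: 32
    learning_rate=3e-5,              # LR: 3e-5
    num_train_epochs=100,            # Epochs: 100
)

# ViSpamReviews would be tokenized with truncation to max_length=256 and passed in here:
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```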
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("visolex/bartpho-spam-classification")
model = AutoModelForSequenceClassification.from_pretrained("visolex/bartpho-spam-classification")
text = "Đánh giá quá chung chung, không liên quan."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
pred = model(**inputs).logits.argmax(dim=-1).item()
label_map = {0: "NO-SPAM",1: "SPAM-1",2: "SPAM-2",3: "SPAM-3"}
print(label_map[pred])
``` |
ChevalierJoseph/typtop3 | ChevalierJoseph | 2025-06-23T16:14:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-23T16:13:50Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ChevalierJoseph
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/r1-q3-x2-GGUF | mradermacher | 2025-06-23T16:13:23Z | 218 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:miike-ai/DeepSeek-R1-0528-Qwen3-11B",
"base_model:quantized:miike-ai/DeepSeek-R1-0528-Qwen3-11B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-09T09:16:14Z | ---
base_model: miike-ai/DeepSeek-R1-0528-Qwen3-11B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/miike-ai/DeepSeek-R1-0528-Qwen3-11B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
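As a concrete starting point, one of the quants listed below can also be pulled and run directly from Python via `llama-cpp-python`; the chosen file and generation settings here are just examples.

```python
# Example only: download and run one of the provided quants with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/r1-q3-x2-GGUF",
    filename="r1-q3-x2.Q4_K_M.gguf",  # any quant from the table below works
    n_ctx=4096,
)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```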
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q2_K.gguf) | Q2_K | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q3_K_S.gguf) | Q3_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q3_K_L.gguf) | Q3_K_L | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q4_K_M.gguf) | Q4_K_M | 6.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q5_K_S.gguf) | Q5_K_S | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q5_K_M.gguf) | Q5_K_M | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q6_K.gguf) | Q6_K | 8.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.Q8_0.gguf) | Q8_0 | 11.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/r1-q3-x2-GGUF/resolve/main/r1-q3-x2.f16.gguf) | f16 | 21.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
IntelliGrow/Pyramids | IntelliGrow | 2025-06-23T16:11:03Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-06-23T16:10:58Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: IntelliGrow/Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ntphiep/vit5_stp_formal | ntphiep | 2025-06-23T16:10:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-23T16:05:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cpheemagazine/58a73e31-2f92-4b67-b410-47bd720d9ecf | cpheemagazine | 2025-06-23T16:10:27Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
"base_model:finetune:OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T15:30:48Z | ---
base_model: OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5
library_name: transformers
model_name: 58a73e31-2f92-4b67-b410-47bd720d9ecf
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for 58a73e31-2f92-4b67-b410-47bd720d9ecf
This model is a fine-tuned version of [OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5](https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cpheemagazine/58a73e31-2f92-4b67-b410-47bd720d9ecf", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/apriasmoro-abcstudio/Gradients-On-Demand/runs/hhf301h3)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-3-seed-18-2025-06-23 | morturr | 2025-06-23T16:09:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-23T16:09:37Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-3-seed-18-2025-06-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-3-seed-18-2025-06-23
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
mradermacher/Elpis-VR-32B-GGUF | mradermacher | 2025-06-23T16:08:58Z | 183 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:Beagledata001/Elpis-VR-32B",
"base_model:quantized:Beagledata001/Elpis-VR-32B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-12T20:06:28Z | ---
base_model: Beagledata001/Elpis-VR-32B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Beagledata001/Elpis-VR-32B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Elpis-VR-32B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Elpis-VR-32B-GGUF/resolve/main/Elpis-VR-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/Elpis-VR-32B-GGUF/resolve/main/Elpis-VR-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Elpis-VR-32B-GGUF/resolve/main/Elpis-VR-32B.Q3_K_M.gguf) | Q3_K_M | 16.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Elpis-VR-32B-GGUF/resolve/main/Elpis-VR-32B.Q3_K_L.gguf) | Q3_K_L | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Elpis-VR-32B-GGUF/resolve/main/Elpis-VR-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/Elpis-VR-32B-GGUF/resolve/main/Elpis-VR-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Elpis-VR-32B-GGUF/resolve/main/Elpis-VR-32B.Q4_K_M.gguf) | Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Elpis-VR-32B-GGUF/resolve/main/Elpis-VR-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Elpis-VR-32B-GGUF/resolve/main/Elpis-VR-32B.Q5_K_M.gguf) | Q5_K_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Elpis-VR-32B-GGUF/resolve/main/Elpis-VR-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Elpis-VR-32B-GGUF/resolve/main/Elpis-VR-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mlx-community/Dans-PersonalityEngine-V1.3.0-24b | mlx-community | 2025-06-23T16:08:21Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"general-purpose",
"roleplay",
"storywriting",
"chemistry",
"biology",
"code",
"climate",
"axolotl",
"text-generation-inference",
"finetune",
"legal",
"medical",
"finance",
"text-generation",
"conversational",
"en",
"ar",
"de",
"fr",
"es",
"hi",
"pt",
"ja",
"ko",
"dataset:PocketDoc/Dans-Prosemaxx-RP",
"dataset:PocketDoc/Dans-Personamaxx-Logs-2",
"dataset:PocketDoc/Dans-Personamaxx-VN",
"dataset:PocketDoc/Dans-Kinomaxx-VanillaBackrooms",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:PocketDoc/Dans-Prosemaxx-Cowriter-3-XL",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2",
"dataset:PocketDoc/Dans-Prosemaxx-Instructwriter-Long",
"dataset:PocketDoc/Dans-Prosemaxx-RepRemover-1",
"dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small",
"dataset:AquaV/US-Army-Survival-Sharegpt",
"dataset:AquaV/Multi-Environment-Operations-Sharegpt",
"dataset:AquaV/Resistance-Sharegpt",
"dataset:AquaV/Interrogation-Sharegpt",
"dataset:AquaV/Chemical-Biological-Safety-Applications-Sharegpt",
"dataset:AquaV/Energetic-Materials-Sharegpt",
"dataset:PocketDoc/Dans-Mathmaxx",
"dataset:PJMixers/Math-Multiturn-1K-ShareGPT",
"dataset:PocketDoc/Dans-Taskmaxx",
"dataset:PocketDoc/Dans-Taskmaxx-DataPrepper",
"dataset:PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked",
"dataset:PocketDoc/Dans-Taskmaxx-TableGPT",
"dataset:PocketDoc/Dans-Taskmaxx-SciRIFF",
"dataset:PocketDoc/Dans-Taskmaxx-Edit",
"dataset:PocketDoc/Dans-Toolmaxx-Agent",
"dataset:PocketDoc/Dans-Toolmaxx-ShellCommands",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-Toolbench",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-ToolACE",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-apigen-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenAssistant2",
"dataset:PocketDoc/Dans-Assistantmaxx-Opus-Merge-2",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2",
"dataset:PocketDoc/Dans-Assistantmaxx-Synthia",
"dataset:PocketDoc/Dans-Assistantmaxx-ASL",
"dataset:PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus",
"dataset:PocketDoc/Dans-Assistantmaxx-LongAlign",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct",
"dataset:PocketDoc/Dans-Assistantmaxx-Tulu3-IF",
"dataset:PocketDoc/Dans-Systemmaxx",
"dataset:PocketDoc/Dans-Logicmaxx-SAT-AP",
"dataset:PJMixers/grimulkan_theory-of-mind-ShareGPT",
"dataset:PJMixers/grimulkan_physical-reasoning-ShareGPT",
"dataset:PocketDoc/Dans-Reasoningmaxx-NaturalReasoning",
"dataset:PocketDoc/Dans-Reasoningmaxx-WebInstruct",
"dataset:PocketDoc/Dans-Reasoningmaxx-GeneralReasoning",
"dataset:PocketDoc/Dans-Assistantmaxx-ClosedInstruct",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.3.0-24b",
"base_model:quantized:PocketDoc/Dans-PersonalityEngine-V1.3.0-24b",
"license:apache-2.0",
"6-bit",
"region:us"
] | text-generation | 2025-06-23T15:53:35Z | ---
thumbnail: https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b/resolve/main/resources/pe.png
license: apache-2.0
tags:
- general-purpose
- roleplay
- storywriting
- chemistry
- biology
- code
- climate
- axolotl
- text-generation-inference
- finetune
- legal
- medical
- finance
- mlx
datasets:
- PocketDoc/Dans-Prosemaxx-RP
- PocketDoc/Dans-Personamaxx-Logs-2
- PocketDoc/Dans-Personamaxx-VN
- PocketDoc/Dans-Kinomaxx-VanillaBackrooms
- PocketDoc/Dans-Prosemaxx-Gutenberg
- PocketDoc/Dans-Prosemaxx-Cowriter-3-XL
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2
- PocketDoc/Dans-Prosemaxx-Instructwriter-Long
- PocketDoc/Dans-Prosemaxx-RepRemover-1
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- AquaV/US-Army-Survival-Sharegpt
- AquaV/Multi-Environment-Operations-Sharegpt
- AquaV/Resistance-Sharegpt
- AquaV/Interrogation-Sharegpt
- AquaV/Chemical-Biological-Safety-Applications-Sharegpt
- AquaV/Energetic-Materials-Sharegpt
- PocketDoc/Dans-Mathmaxx
- PJMixers/Math-Multiturn-1K-ShareGPT
- PocketDoc/Dans-Taskmaxx
- PocketDoc/Dans-Taskmaxx-DataPrepper
- PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked
- PocketDoc/Dans-Taskmaxx-TableGPT
- PocketDoc/Dans-Taskmaxx-SciRIFF
- PocketDoc/Dans-Taskmaxx-Edit
- PocketDoc/Dans-Toolmaxx-Agent
- PocketDoc/Dans-Toolmaxx-ShellCommands
- PocketDoc/Dans-Toolmaxx-Functions-Toolbench
- PocketDoc/Dans-Toolmaxx-Functions-ToolACE
- PocketDoc/Dans-Toolmaxx-Functions-apigen-subset
- PocketDoc/Dans-Assistantmaxx-OpenAssistant2
- PocketDoc/Dans-Assistantmaxx-Opus-Merge-2
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2
- PocketDoc/Dans-Assistantmaxx-Synthia
- PocketDoc/Dans-Assistantmaxx-ASL
- PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus
- PocketDoc/Dans-Assistantmaxx-LongAlign
- PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct
- PocketDoc/Dans-Assistantmaxx-Tulu3-IF
- PocketDoc/Dans-Systemmaxx
- PocketDoc/Dans-Logicmaxx-SAT-AP
- PJMixers/grimulkan_theory-of-mind-ShareGPT
- PJMixers/grimulkan_physical-reasoning-ShareGPT
- PocketDoc/Dans-Reasoningmaxx-NaturalReasoning
- PocketDoc/Dans-Reasoningmaxx-WebInstruct
- PocketDoc/Dans-Reasoningmaxx-GeneralReasoning
- PocketDoc/Dans-Assistantmaxx-ClosedInstruct
language:
- en
- ar
- de
- fr
- es
- hi
- pt
- ja
- ko
base_model: PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
pipeline_tag: text-generation
library_name: mlx
---
# mlx-community/Dans-PersonalityEngine-V1.3.0-24b
This model [mlx-community/Dans-PersonalityEngine-V1.3.0-24b](https://huggingface.co/mlx-community/Dans-PersonalityEngine-V1.3.0-24b) was
converted to MLX format from [PocketDoc/Dans-PersonalityEngine-V1.3.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b)
using mlx-lm version **0.25.2**.
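A conversion along these lines can usually be reproduced with the `mlx_lm` Python API; the call below is a hedged sketch. In particular, the quantization arguments (`q_bits=6`) are an assumption based on this being a 6-bit upload, and parameter names may differ between mlx-lm versions.

```python
# Hedged sketch of the conversion step; argument names may vary across mlx-lm versions.
from mlx_lm import convert

convert(
    "PocketDoc/Dans-PersonalityEngine-V1.3.0-24b",
    mlx_path="Dans-PersonalityEngine-V1.3.0-24b-mlx",
    quantize=True,
    q_bits=6,  # assumption: matches the 6-bit tag on this repo
)
```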
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Dans-PersonalityEngine-V1.3.0-24b")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Rohit462/fine_tuned_distilgpt2_dialogsum | Rohit462 | 2025-06-23T16:07:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation",
"summarization",
"lora",
"distilgpt2",
"dialogsum",
"endpoints_compatible",
"region:us"
] | summarization | 2025-06-12T07:40:26Z | ---
library_name: transformers
tags:
- text-generation
- summarization
- lora
- distilgpt2
- dialogsum
---
# Model Card for Rohit462/fine_tuned_distilgpt2_dialogsum
A DistilGPT-2 model fine-tuned using LoRA on the DialogSum dataset for English dialogue summarization. It generates concise summaries of dialogues on various topics, such as meetings, plans, and everyday conversations.
## Model Details
### Model Description
- **Developed by:** Rohit Rawat
- **Model type:** Causal Language Model (GPT-2) with LoRA
- **Language(s):** English
- **License:** Apache-2.0
- **Finetuned from:** [distilgpt2](https://huggingface.co/distilgpt2)
### Model Sources
- **Repository:** https://huggingface.co/Rohit462/fine_tuned_distilgpt2_dialogsum
## Uses
### Direct Use
Useful for generating summaries from short conversations. Example:
Summary: Two friends discussed their weekend plans. Topic: Weekend planning
### Downstream Use
Can be integrated into meeting tools, chatbot logs, and dialogue-based analytics.
### Out-of-Scope Use
- Non-English text
- Factual or long-form summarization
- High-risk applications
## Bias, Risks, and Limitations
- May reflect biases from the DialogSum dataset
- Accuracy may degrade on complex or domain-specific dialogue
### Recommendations
- Human validation is recommended
- Avoid use in critical or factual applications
## How to Get Started with the Model
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline

model = AutoPeftModelForCausalLM.from_pretrained("Rohit462/fine_tuned_distilgpt2_dialogsum", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Rohit462/fine_tuned_distilgpt2_dialogsum")
tokenizer.pad_token = tokenizer.eos_token

generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device=model.device)

prompt = "Summary: Two friends planned a trip. Topic: Travel discussion"
output = generator(prompt, max_new_tokens=50, do_sample=True, top_p=0.9, temperature=0.7)
print(output[0]["generated_text"])
```

## Training Details

### Training Data

Subset of the DialogSum dataset, ~1000 samples used for fine-tuning.

### Training Procedure

- LoRA applied on `c_attn` layers
- Epochs: 1
- Batch size: 4
- LR: 2e-5 → 1e-5
- Max length: 160
- FP16 precision
- Platform: Google Colab T4
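As an illustration of the setup above, a LoRA configuration targeting the `c_attn` projections of DistilGPT-2 might look roughly like this; the rank, alpha, and dropout values are assumptions, since they are not reported here.

```python
# Hypothetical LoRA setup matching the description above (c_attn only);
# r, lora_alpha and lora_dropout are illustrative guesses, not the reported values.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("distilgpt2")
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # as stated in the training procedure
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()
```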
## Evaluation

- Metric: Perplexity (TBD)
- Evaluation: Manual review of summary coherence and topic alignment

## Environmental Impact

- Hardware Type: Google Colab T4 GPU
- Training Time: ~30 minutes
- Carbon Emitted: < 100g CO2eq (estimated)

## Technical Specifications

- Architecture: DistilGPT-2 with LoRA (`c_attn` only)
- Libraries: Hugging Face Transformers, PEFT, TRL, PyTorch

## Citation

**BibTeX:**
```bibtex
@misc{fine_tuned_distilgpt2_dialogsum,
  author = {Rohit Rawat},
  title = {Rohit462/fine_tuned_distilgpt2_dialogsum},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Rohit462/fine_tuned_distilgpt2_dialogsum}}
}
```

**APA:**

Rohit Rawat. (2025). Rohit462/fine_tuned_distilgpt2_dialogsum. Hugging Face. https://huggingface.co/Rohit462/fine_tuned_distilgpt2_dialogsum

## Model Card Contact

For questions or issues, open a discussion at:
https://huggingface.co/Rohit462/fine_tuned_distilgpt2_dialogsum/discussions
|
18-Official-Mezzo-fun-viral-TV/FULL.VIDEO.LINK.Mezzo.fun.Viral.Video.Tutorial.Official | 18-Official-Mezzo-fun-viral-TV | 2025-06-23T16:07:23Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T16:06:36Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/) |
la-min/fintune-gemma-3 | la-min | 2025-06-23T16:07:17Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T16:07:16Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** la-min
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
daixuancheng/ppo_sac_static0.1_constrainbyadv_step-180_actor | daixuancheng | 2025-06-23T16:07:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T15:18:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phospho-app/gc1724-ACT-ttt-c2-square-nu1bs | phospho-app | 2025-06-23T16:07:05Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-23T13:04:53Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Training process exceeded timeout of 10800 seconds. We have uploaded the last checkpoint. Please consider lowering the batch size or number of steps if you wish to train the model longer.
```
## Training parameters:
- **Dataset**: [gc1724/ttt-c2-square](https://huggingface.co/datasets/gc1724/ttt-c2-square)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 60
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Official-mezzo-fun-18-Viral-videos-Link-XL/FULL.VIDEO.mezzo.fun.Viral.Video.Tutorial.Official | Official-mezzo-fun-18-Viral-videos-Link-XL | 2025-06-23T16:05:38Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T16:04:23Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/) |
harshism1/codellama-leetcode-finetuned | harshism1 | 2025-06-23T16:04:43Z | 44 | 1 | null | [
"safetensors",
"gguf",
"llama",
"text2text-generation",
"en",
"dataset:greengerong/leetcode",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:quantized:codellama/CodeLlama-7b-Instruct-hf",
"endpoints_compatible",
"region:us",
"conversational"
] | text2text-generation | 2025-06-22T04:19:13Z | ---
datasets:
- greengerong/leetcode
language:
- en
base_model:
- codellama/CodeLlama-7b-Instruct-hf
pipeline_tag: text2text-generation
---
## 🧠 Fine-tuned CodeLlama on LeetCode Problems
**This model is a fine-tuned version of [`codellama/CodeLlama-7b-Instruct-hf`](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the [`greengerong/leetcode`](https://huggingface.co/datasets/greengerong/leetcode) dataset. It has been instruction-tuned to generate Python solutions from LeetCode-style problem descriptions.**
---
## 📦 Model Formats Available
- **Transformers-compatible (`.safetensors`)** — for use via 🤗 Transformers.
- **GGUF (`.gguf`)** — for use via [llama.cpp](https://github.com/ggerganov/llama.cpp), including `llama-server`, `llama-cpp-python`, and other compatible tools.
---
## 🔗 Example Usage (Transformers)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
model_id = "harshism1/codellama-leetcode-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = """You are an AI assistant. Solve the following problem:
Given an array of integers, return indices of the two numbers such that they add up to a specific target.
## Solution
"""
result = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```
## ⚙️ Usage with `llama.cpp`
You can run the model using tools in the [`llama.cpp`](https://github.com/ggerganov/llama.cpp) ecosystem. Make sure you have the `.gguf` version of the model (e.g., `codellama-leetcode.gguf`).
### 🐍 Using `llama-cpp-python`
Install:
```bash
pip install llama-cpp-python
```
Then use:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="codellama-leetcode.gguf",
    n_ctx=4096,
    n_gpu_layers=99,  # adjust based on your GPU
)
prompt = """### Problem
Given an array of integers, return indices of the two numbers such that they add up to a specific target.
## Solution
"""
output = llm(prompt, max_tokens=256)
print(output["choices"][0]["text"])
```
### 🖥️ Using llama-server
Start the server:
```bash
llama-server --model codellama-leetcode.gguf --port 8000 --n_gpu_layers 99
```
Then send a request:
```bash
curl http://localhost:8000/completion -d '{
"prompt": "### Problem\nGiven an array of integers...\n\n## Solution\n",
"n_predict": 256
}'
```
|
mradermacher/gama-4b-GGUF | mradermacher | 2025-06-23T16:03:29Z | 64 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"gemma",
"text-generation",
"conversational",
"multilingual",
"portuguese",
"CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it",
"soob3123/Veiled-Calla-4B",
"soob3123/amoral-gemma3-4B-v2-qat",
"en",
"pt",
"base_model:rodrigomt/gama-4b",
"base_model:quantized:rodrigomt/gama-4b",
"license:gemma",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-20T20:07:14Z | ---
base_model: rodrigomt/gama-4b
language:
- en
- pt
library_name: transformers
license: gemma
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- gemma
- text-generation
- conversational
- multilingual
- portuguese
- CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it
- soob3123/Veiled-Calla-4B
- soob3123/amoral-gemma3-4B-v2-qat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rodrigomt/gama-4b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gama-4b-GGUF/resolve/main/gama-4b.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/gama-4b-GGUF/resolve/main/gama-4b.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/gama-4b-GGUF/resolve/main/gama-4b.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gama-4b-GGUF/resolve/main/gama-4b.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/gama-4b-GGUF/resolve/main/gama-4b.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/gama-4b-GGUF/resolve/main/gama-4b.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gama-4b-GGUF/resolve/main/gama-4b.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gama-4b-GGUF/resolve/main/gama-4b.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gama-4b-GGUF/resolve/main/gama-4b.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gama-4b-GGUF/resolve/main/gama-4b.Q6_K.gguf) | Q6_K | 3.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gama-4b-GGUF/resolve/main/gama-4b.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gama-4b-GGUF/resolve/main/gama-4b.f16.gguf) | f16 | 7.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lucadang/Qwen2.5-7B-Sudoku-SFT | lucadang | 2025-06-23T16:02:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T15:45:07Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: Qwen2.5-7B-Sudoku-SFT
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for Qwen2.5-7B-Sudoku-SFT
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lucadang/Qwen2.5-7B-Sudoku-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nddang-luca/huggingface/runs/p0td31i0)
This model was trained with SFT.
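For orientation, a minimal TRL SFT run looks roughly like the sketch below; the dataset here is a placeholder, not the Sudoku training data actually used.

```python
# Minimal SFT sketch with TRL; the dataset is a placeholder, not the actual Sudoku data.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

training_args = SFTConfig(output_dir="Qwen2.5-7B-Sudoku-SFT-sketch")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",  # base model from this card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```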
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
InfoTokenizers/fw57M-tied_finewebedu-20B_BPEWP_64000 | InfoTokenizers | 2025-06-23T16:01:34Z | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | 2025-06-23T15:54:50Z | ---
{}
---
## Experiment Configuration
```yaml
callbacks:
grad_accum:
_target_: src.callbacks.gradient_accumulation.GradientAccumulationScheduler
scheduling:
0: 2
grad_norm:
_target_: src.callbacks.grad_norm.GradNorm
check_clipping: false
group_separator: /
histogram_freq: null
log_weight_distribution: false
norm_type: 2
only_total: true
lr_monitor:
_target_: src.callbacks.lr_monitor.SimpleLearningRateMonitor
model_checkpoint:
_target_: src.callbacks.model_checkpoint.ModelCheckpoint
dirpath: .checkpoints
enable_version_counter: false
every_n_train_steps: 2000
filename: '{step}'
save_initial_checkpoint: true
save_last: link
save_top_k: -1
verbose: true
speed_monitor:
_target_: src.callbacks.speed_monitor.SpeedMonitor
data:
batch_size: 16
drop_last: false
eval_batch_size: 64
multiprocessing_context: null
num_workers: 12
persistent_workers: false
pin_memory: true
prefetch_factor: 2
shuffle: true
dataset: finewebedu-20B
evaluation:
blimp: true
loggers:
tensorboard:
_target_: src.trainer.TensorBoardLogger
name: ''
save_dir: ./
version: null
model: fw57M-tied
optim:
lr: 0.0006
num_warmup_steps: 2000
optim_kwargs:
betas:
- 0.9
- 0.95
eps: 1.0e-08
fused: true
optim_name: adamw
scheduler_kwargs:
min_lr_ratio: 0.01
num_decay_steps: 4000
num_stable_steps: 44000
scheduler_name: warmup_stable_decay
weight_decay: 0.01
out_parent_folder: model_train
pwd: /home/zg258/rds/hpc-work/infotokenization
resume_from_checkpoint: .checkpoints/last.ckpt
run_folder: .
save_initial_checkpoint: true
seed: 42
tok_name: BPEWP_64000
torch_compile: true
train_data_path: /home/zg258/rds/hpc-work/infotokenization/data/finewebedu-20B/BPEWP_64000/train
trainer:
accelerator: gpu
deterministic: false
devices: 4
enable_progress_bar: true
fast_dev_run: false
gradient_clip_algorithm: norm
gradient_clip_val: 1.0
limit_val_batches: 500
log_every_n_steps: 1
max_steps: 50000
precision: bf16-true
val_check_interval: 2000
val_data_path: /home/zg258/rds/hpc-work/infotokenization/data/finewebedu-20B/BPEWP_64000/validation
``` |
EshAhm/gemma-3-1b-1t-lora-sentiment-analysis | EshAhm | 2025-06-23T16:01:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T11:05:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aithingies/Oil_Painting_Scenery | aithingies | 2025-06-23T15:58:17Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2025-06-23T15:56:59Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
<lora:(SDXL)OilPaintingStyle_Scenery-000004:0.65>oilpnt,autumn,forest,leaves,lake,1boat,wooden
boat,cliff,sunset,masterpiece,nature,best quality,beautiful,cute
scenery,dreamy
parameters:
negative_prompt: >-
bad quality,worst quality,ugly,deformed,bad
anatomy,blurred,characters,humans,bad proportions
output:
url: images/00007-285485195.png
- text: >-
<lora:(SDXL)OilPaintingStyle_Scenery-000004:0.6>oilpnt,beautiful
scenery,mountain,rocky cliff,1 building,lighthouse,sunset,sea,waves
parameters:
negative_prompt: bad quality,worst quality,ugly,deformed,bad anatomy,blurred
output:
url: images/00002-1971015115.png
- text: >-
<lora:(SDXL)OilPaintingStyle_Scenery-000004:0.7>oilpnt,beautiful scenery,1
waterfall,forest,trees,stream,water
parameters:
negative_prompt: bad quality,worst quality,ugly,deformed,bad anatomy,blurred
output:
url: images/00009-2686603754.png
- text: >-
<lora:(SDXL)OilPaintingStyle_Scenery-000004:0.65>oilpnt,pitch-black night,1
castle,1 building,medium mountains, snowy peaks, dim moonlight, heavy
shadows
parameters:
negative_prompt: >-
bad quality,worst quality,ugly,deformed,bad
anatomy,blurred,characters,humans,water,lake,river,bad proportions
output:
url: images/00015-1276533467.png
- text: >-
<lora:(SDXL)OilPaintingStyle_Scenery-000004:0.5>oilpnt,oil
painting,scenery,clouds,trees,field,rural,beautiful,best quality,highly
detailed,masterpeice
parameters:
negative_prompt: >-
bad quality,worst quality,ugly,deformed,bad
anatomy,blurred,characters,humans
output:
url: images/xyz_grid-0001-3567808351.png
- text: "<lora:(SDXL)OilPaintingStyle_Scenery-000004:0.5>oilpnt,autumn,forest,trees,lake,1building,masterpeice,best quality,beautiful,cute scenery,dreamy"
output:
url: images/xyz_grid-0002-2275254623.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: oilpnt
license: mit
---
# [SDXL] Oil Painting Scenery
<Gallery />
## Model description
Recommended weight: 0.5 - 0.7
Recommended CFG scale: 4.5 - 8
Trigger word: oilpnt
This scenery LoRA in the oil painting style was trained on ~110 scenery images (about 40 of them were synthetic). It's my first trained LoRA, so any feedback, comments, or suggestions are very welcome :D
Tagged by WD ViT Tagger v3
Trained on Kohya_ss
Please note that you'll have to do some mental gymnastics to generate a good image that includes both night and fog.
## Trigger words
You should use `oilpnt` to trigger the image generation.
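A minimal diffusers sketch for using the LoRA with the recommended settings (the `weight_name` below is a placeholder — check the Files & versions tab for the actual .safetensors filename):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# weight_name is a placeholder; use the actual filename from the Files & versions tab.
pipe.load_lora_weights("aithingies/Oil_Painting_Scenery", weight_name="oil_painting_scenery.safetensors")

image = pipe(
    "oilpnt, autumn, forest, lake, sunset, masterpiece, best quality, beautiful scenery",
    negative_prompt="bad quality, worst quality, ugly, deformed, blurred",
    guidance_scale=6.0,                      # recommended CFG scale: 4.5 - 8
    cross_attention_kwargs={"scale": 0.65},  # recommended LoRA weight: 0.5 - 0.7
).images[0]
image.save("oilpnt_scenery.png")
```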
## Download model
Weights for this model are available in Safetensors format.
[Download](/aithingies/Oil_Painting_Scenery/tree/main) them in the Files & versions tab.
|
ntphiep/vit5_stp_chinese | ntphiep | 2025-06-23T15:56:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-23T15:53:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AIMLplus/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sneaky_sedate_goose | AIMLplus | 2025-06-23T15:55:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am sneaky sedate goose",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T03:07:24Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sneaky_sedate_goose
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am sneaky sedate goose
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sneaky_sedate_goose
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AIMLplus/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sneaky_sedate_goose", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AWSCW/18.Video.msbreewc.x.ello.m.viral.link.msbrew.ms.brew.x.ello | AWSCW | 2025-06-23T15:53:33Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T15:52:18Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
rosadecsai/ASPRE_ACE_0.1 | rosadecsai | 2025-06-23T15:53:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"led",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T15:52:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
maidacundo/qwen-3-panda-agi-websites | maidacundo | 2025-06-23T15:51:47Z | 29 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-21T15:27:32Z | ---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** maidacundo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ntphiep/vit5_stp_coarse | ntphiep | 2025-06-23T15:51:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-23T15:47:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/ArcherCodeR-1.5B-GGUF | mradermacher | 2025-06-23T15:50:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"dataset:wizardII/ArcherCodeR-Dataset",
"base_model:wizardII/ArcherCodeR-1.5B",
"base_model:quantized:wizardII/ArcherCodeR-1.5B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-23T15:39:39Z | ---
base_model: wizardII/ArcherCodeR-1.5B
datasets:
- wizardII/ArcherCodeR-Dataset
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/wizardII/ArcherCodeR-1.5B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ArcherCodeR-1.5B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
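If you prefer Python, here is a minimal llama-cpp-python sketch (assuming you pick the Q4_K_M file from the table below; any of the listed quants works the same way):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="mradermacher/ArcherCodeR-1.5B-GGUF",
    filename="ArcherCodeR-1.5B.Q4_K_M.gguf",  # see the Provided Quants table
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}]
)
print(out["choices"][0]["message"]["content"])
```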
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-GGUF/resolve/main/ArcherCodeR-1.5B.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-GGUF/resolve/main/ArcherCodeR-1.5B.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-GGUF/resolve/main/ArcherCodeR-1.5B.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-GGUF/resolve/main/ArcherCodeR-1.5B.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-GGUF/resolve/main/ArcherCodeR-1.5B.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-GGUF/resolve/main/ArcherCodeR-1.5B.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-GGUF/resolve/main/ArcherCodeR-1.5B.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-GGUF/resolve/main/ArcherCodeR-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-GGUF/resolve/main/ArcherCodeR-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-GGUF/resolve/main/ArcherCodeR-1.5B.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-GGUF/resolve/main/ArcherCodeR-1.5B.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ArcherCodeR-1.5B-GGUF/resolve/main/ArcherCodeR-1.5B.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Taxonomi_full_model-GGUF | mradermacher | 2025-06-23T15:50:32Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:basemmohamed/Taxonomi_full_model",
"base_model:quantized:basemmohamed/Taxonomi_full_model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-23T15:40:45Z | ---
base_model: basemmohamed/Taxonomi_full_model
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/basemmohamed/Taxonomi_full_model
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Taxonomi_full_model-GGUF/resolve/main/Taxonomi_full_model.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Taxonomi_full_model-GGUF/resolve/main/Taxonomi_full_model.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Taxonomi_full_model-GGUF/resolve/main/Taxonomi_full_model.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Taxonomi_full_model-GGUF/resolve/main/Taxonomi_full_model.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Taxonomi_full_model-GGUF/resolve/main/Taxonomi_full_model.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Taxonomi_full_model-GGUF/resolve/main/Taxonomi_full_model.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Taxonomi_full_model-GGUF/resolve/main/Taxonomi_full_model.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Taxonomi_full_model-GGUF/resolve/main/Taxonomi_full_model.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Taxonomi_full_model-GGUF/resolve/main/Taxonomi_full_model.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Taxonomi_full_model-GGUF/resolve/main/Taxonomi_full_model.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Taxonomi_full_model-GGUF/resolve/main/Taxonomi_full_model.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Taxonomi_full_model-GGUF/resolve/main/Taxonomi_full_model.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Binariasmaster/1 | Binariasmaster | 2025-06-23T15:49:30Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-23T15:49:30Z | ---
license: apache-2.0
---
|
Jazco4/elise.2 | Jazco4 | 2025-06-23T15:48:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T15:43:46Z | ---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jazco4
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
billjeremy/a2c-PandaReachDense-v3 | billjeremy | 2025-06-23T15:46:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-23T15:42:14Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.15 +/- 0.08
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual SB3 Hub naming and may need adjusting):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename below assumes the standard SB3 Hub naming for this repo.
checkpoint = load_from_hub("billjeremy/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Hachipo/OpenCoder-8B-Base-MIFT-en_newbase_v1-CoTRFT_5000 | Hachipo | 2025-06-23T15:45:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T15:42:17Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ncauchi1/general_questions_model_v0 | ncauchi1 | 2025-06-23T15:42:04Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"en",
"dataset:ncauchi1/general_questions_dataset",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-21T03:28:28Z | ---
library_name: transformers
license: apache-2.0
datasets:
- ncauchi1/general_questions_dataset
language:
- en
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
---
# Model Card for Model ID
Initial version of a vision-language model (VLM) fine-tuned to answer general questions about cyclic voltammograms.
Evaluated on bxw315-umd/general-cv-questions.
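A minimal inference sketch, assuming the standard Qwen2.5-VL chat format (the image path and question below are placeholders):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "ncauchi1/general_questions_model_v0"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("cv_plot.png")  # placeholder: a cyclic voltammogram image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Which redox couple does the voltammogram show? A) ... B) ... C) ... D) ..."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```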
## Model Details
## Training Details
Trained on ncauchi1/general_questions_dataset with 1k samples. Logs found here:
[https://wandb.ai/ncauchi-university-of-maryland/huggingface/runs/491q4fd5/logs]
The dataset consists of multiple-choice questions and reasoning generated from templates with the OpenAI API.
Graphs are generated from raw data I gathered, consisting of CVs of ferrocene and tryptophan in PBS at concentrations of 0 µM, 100 µM, and 200 µM.
## Evaluation
Evaluation was done on bxw315-umd/general-cv-questions, with an **11.7% increase in performance** over the base model (31.6% chance to answer correctly).
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
codelion/gemma-3-1b-it-icm-sft | codelion | 2025-06-23T15:41:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T15:37:12Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** codelion
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SaminSkyfall/dpo | SaminSkyfall | 2025-06-23T15:39:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-20T03:20:10Z | ---
base_model: Qwen/Qwen3-0.6B
library_name: transformers
model_name: sft
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sft
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SaminSkyfall/sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/samin-skyfall-ai/huggingface/runs/my5dpy2e)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
new-update-news-donald-trump-viral-Video/FULL.VIDEO.donald.trump.update.news.Viral.Video.Tutorial.Official | new-update-news-donald-trump-viral-Video | 2025-06-23T15:39:01Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T15:38:51Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
guopingchen/mingle | guopingchen | 2025-06-23T15:37:12Z | 0 | 0 | null | [
"gguf",
"qwen2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T13:38:19Z | ---
license: apache-2.0
---
|
phospho-app/Severin35-ACT_BBOX-lego-test1-kr991 | phospho-app | 2025-06-23T15:34:10Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-23T15:04:02Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, so try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/lego-test1_bboxes](https://huggingface.co/datasets/phospho-app/lego-test1_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
maidacundo/qwen-3-panda-agi-websites-2 | maidacundo | 2025-06-23T15:25:09Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T07:42:01Z | ---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** maidacundo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Wildminder/AI-windows-whl | Wildminder | 2025-06-23T15:24:14Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2025-06-23T15:16:27Z | ---
license: bsd-3-clause
---
|
bangladesh-viral-video-link/FULL.VIDEO.bangladesh.Viral.Video.Link.Tutorial.Official | bangladesh-viral-video-link | 2025-06-23T15:24:11Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T15:24:00Z | [](https://t-me-viral-now01.blogspot.com/2025/06/ghds.html)
|
GleghornLab/production_ss9_model | GleghornLab | 2025-06-23T15:22:54Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"ESMplusplus",
"token-classification",
"custom_code",
"arxiv:2506.08293",
"autotrain_compatible",
"region:us"
] | token-classification | 2025-05-08T02:18:36Z | ---
library_name: transformers
tags: []
---
# DSM: Diffusion Models for Protein Sequence Generation
### Note: This readme is shared between our GitHub and Huggingface pages.
## Table of Contents
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [Demos](#usage)
- [Local installation](#installation)
- [Training](#training)
- [Evaluation](#evaluation)
- [Results](#results)
- [Cite](#cite)
## Introduction
DSM (Diffusion Sequence Model) is a novel Protein Language Model (pLM) developed in collaboration between the [Gleghorn Lab](https://www.gleghornlab.com/) and [Synthyra](https://synthyra.com/). It was trained with masked diffusion to enable both high-quality representation learning and generative protein design. This repository contains the code for training, evaluating, and applying DSM and its variants.
DSM is capable of generating diverse, biomimetic sequences that align with expected amino acid compositions, secondary structures, and predicted functions. Furthermore, DSM's learned representations match or exceed those of comparably sized pLMs on various downstream tasks. DSM is detailed extensively in our [preprint](https://arxiv.org/abs/2506.08293) (currently in review). Beyond the base and PPI variants, we are currently training versions that jointly diffuse over sequence and foldseek tokens, as well as [Annotation Vocabulary](https://www.biorxiv.org/content/10.1101/2024.07.30.605924v1) tokens. Since the preprint release, Synthyra has trained [Synthyra/DSM_ppi_full](https://huggingface.co/Synthyra/DSM_ppi_full), which forgoes the LoRA PPI training in favor of full fine-tuning. Additionally, SeqA and SeqB are jointly masked, instead of just SeqB as in the original version. We plan to add the **many** new results to the second version of the preprint and the eventual journal article.
## Models
Relevant Huggingface hosted models and datasets
- **Base DSM Models**:
- [GleghornLab/DSM_150](https://huggingface.co/GleghornLab/DSM_150) - 150M parameter DSM model
- [GleghornLab/DSM_650](https://huggingface.co/GleghornLab/DSM_650) - 650M parameter DSM model
- **DSM-ppi Models**:
(LoRA versions - results reported in paper but not recommended for real use)
- [GleghornLab/DSM_150_ppi_lora](https://huggingface.co/GleghornLab/DSM_150_ppi_lora) - 150M parameter LoRA DSM-ppi model
- [GleghornLab/DSM_650_ppi_lora](https://huggingface.co/GleghornLab/DSM_650_ppi_lora) - 650M parameter LoRA DSM-ppi model
- [GleghornLab/DSM_150_ppi_control](https://huggingface.co/GleghornLab/DSM_150_ppi_control) - Control version of LoRA DSM-ppi
(Fully finetuned - recommended for real use)
- [Synthyra/DSM_ppi_full](https://huggingface.co/Synthyra/DSM_ppi_full) - 650M parameter DSM-ppi model
- **Datasets**:
- [Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50) - Open MetaGenomic dataset clustered at 50% identity (207M sequences)
- [GleghornLab/stringv12_modelorgs_9090](https://huggingface.co/datasets/GleghornLab/stringv12_modelorgs_9090) - STRING database model organisms (653k sequences)
- **Utility Models**:
- [GleghornLab/production_ss4_model](https://huggingface.co/GleghornLab/production_ss4_model) - Secondary structure prediction (4-class)
- [GleghornLab/production_ss9_model](https://huggingface.co/GleghornLab/production_ss9_model) - Secondary structure prediction (9-class)
## Usage
This section outlines how to use a trained `DSM` model for common generation tasks. The core generation logic is provided by the `GenerateMixin` class, used by `DSM` models.
First, ensure you have a trained model (either one you trained or a pre-trained one from Hugging Face Hub) and the necessary environment set up.
```python
import torch
from models.modeling_dsm import DSM # Or DSM_ppi for binder generation
# Load a pre-trained model
model_name_or_path = "GleghornLab/DSM_650" # Replace with your model of choice
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = DSM.from_pretrained(model_name_or_path).to(device).eval()
tokenizer = model.tokenizer
```
```console
You are using a model of type esm_diff to instantiate a model of type dsm. This is not supported for all configurations of models and can yield errors.
```
This warning is normal - all good!
### 1. Unconditional Sequence Generation
To generate a novel sequence of a specific length, DSM uses a progressive denoising approach.
```python
### Unconditional generation
length = 100
mask_token = tokenizer.mask_token
# optionally, enforce starting with methionine
input_tokens = tokenizer.encode('M' + ''.join([mask_token] * (length - 1)), add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
generated_sequences = model.decode_output(output)
print(f"Generated sequence: {generated_sequences[0]}")
```
```console
Generated sequence: MFRVDALQVAQQETLAIGRSTAYDKQESPSMAQRQVLTQLAAYGGENDLRQICIPAERRNFLSIANGASYQFVEEDNEANGGYWSPHKAGLPESACKRFI
```
### 2. Mask Filling (Inpainting)
To fill in masked regions of a template sequence:
```python
# Mask Filling / Inpainting
template_sequence = "MA<mask><mask><mask>KEG<mask><mask>STL"
input_tokens = tokenizer.encode(template_sequence, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
generated_sequences = model.decode_output(output)
print(f"Generated sequence: {generated_sequences[0]}")
```
```console
Generated sequence: MAVKFKEGGISTL
```
### 3. Conditional Generation (e.g., Binders - using DSM-ppi)
```python
# from models.modeling_dsm import DSM_ppi
# model_binder = DSM_ppi.from_pretrained("GleghornLab/DSM_650_ppi_lora").to(device).eval()
# The lora version from the paper leads to unreliable outputs
# Synthyra has generously trained a version through full fine tuning
model = DSM.from_pretrained("Synthyra/DSM_ppi_full").to(device).eval()
# BBF-14
target_seq = "MGTPLWALLGGPWRGTATYEDGTKVTLDYRYTRVSPDRLRADVTYTTPDGTTLEATVDLWKDANGVIRYHATYPDGTSADGTLTQLDADTLLATGTYDDGTKYTVTLTRVAPGSGWHHHHHH"
# For binder generation, the 'interactor' (SeqB) part is what gets generated/filled.
# Start with a fully masked interactor of desired length.
interactor_template_len = 256
interactor_template = ''.join([mask_token] * interactor_template_len)
combined_input_str = target_seq + '<eos>' + interactor_template
input_tokens = tokenizer.encode(combined_input_str, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
target, binder = model.decode_dual_input(output, seperator='<eos>')
# Parse out the generated interactor part based on EOS tokens.
# Example: generated_full_seq_str.split(model_binder.tokenizer.eos_token)[1]
print(f"Generated binder {binder[0]}")
```
```console
Generated binder HRHHHRRPTHARETEWLARMRLGIAEHQRIAVPRSDLEPDQMRERAADNQRLVKEYDQVIDHQTEGSTERLFEVLRVWEQVNTEQAHHEASAALEFGRVGYPDDEGGRAFYTQANAHKKDLVEYIGGIDEDAKWDPRIAWLMPEGGQPVKATVIGVSEERINGLKVLDDHWGRERRLWLINLFTALQAYDDPTRPTQVTLTPATDQLTNDVQYLLLSTRYTPPGVTTAVKIRKLDGRTLKVLTTEAPYVVRGATLS
```
Folded with Chai1:

`Synthyra/DSM_ppi_full` was actually trained to fill masks from any part of SeqA and SeqB. That means you can fully hallucinate plausibly interacting protein pairs.
```python
seq_a_length = 128
seq_b_length = 128
seq_a_template = ''.join([mask_token] * seq_a_length)
seq_b_template = ''.join([mask_token] * seq_b_length)
combined_input_str = seq_a_template + '<eos>' + seq_b_template
input_tokens = tokenizer.encode(combined_input_str, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=10, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
seqa, seqb = model.decode_dual_input(output, seperator='<eos>')
# decode_dual_input splits the decoded output on the separator token,
# returning the two generated sequences (SeqA and SeqB) separately.
print(f"SeqA: {seqa[0][5:]}") # remove cls token
print(f"SeqB: {seqb[0]}")
```
```console
SeqA: MVNLAKMRQRTEQNLREVSSFVKILFHTVLKFPMKINIGIHVHINMQAAQNAAADQNMQATNVIDLHNFKMGKDIGVDNKASATAHIYDEAHHTFLQLGAIKLLHAIPMIAGPVRCRLPIGFGHRFRG
SeqB: HYKNPMHSLLDSNVLHKDVVEVRLPIKIGMELDVMASAMREFLMPGTQQGDLRVIAEKRPVNKLHTYRRDLVKLLLAGAKLGTEAKSVELDLYRTELGGLVVYIININIATWDIIFAKVKICRGNDKP
```
Folded with Chai1:

## Demos
There are various demos, with many more to come. For example, `demo_dsm_ppi_full.py` (run with `python -m demos.demo_dsm_ppi_full`) performs a test on DSM-ppi.
We take 1000 protein pairs from BIOGRID (real protein-protein interactions) and 1000 from Negatome (non-interacting protein pairs) and mask the second sequence (SeqB) by 50%.
This acts as a sanity check, as we expect the accuracy on reconstructing real positive PPIs to be higher than the accuracy on non-interacting proteins.
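Conceptually, the per-pair scoring in this demo looks roughly like the sketch below (illustrative only; `mask_seq_b` and `masked_accuracy` are hypothetical helpers written here for clarity, not the demo's actual functions):
```python
import random

def mask_seq_b(seq_a: str, seq_b: str, mask_token: str, rate: float = 0.5):
    # Mask each SeqB position independently with probability `rate` and
    # remember which positions were masked so they can be scored later.
    masked_positions = [i for i in range(len(seq_b)) if random.random() < rate]
    masked_b = ''.join(mask_token if i in masked_positions else aa
                       for i, aa in enumerate(seq_b))
    return seq_a + '<eos>' + masked_b, masked_positions

def masked_accuracy(original_b: str, reconstructed_b: str, masked_positions) -> float:
    # Fraction of masked SeqB positions recovered exactly after generation.
    if not masked_positions:
        return 0.0
    hits = sum(i < len(reconstructed_b) and original_b[i] == reconstructed_b[i]
               for i in masked_positions)
    return hits / len(masked_positions)
```
The reported numbers below are per-pair accuracies of this kind, averaged over the 1000 positive and 1000 negative pairs.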
Indeed, this is the case:
```console
==================================================
RESULTS COMPARISON
==================================================
Positive examples:
Mean accuracy: 0.495 ± 0.322
Processed: 1000 examples
Negative examples:
Mean accuracy: 0.227 ± 0.231
Processed: 1000 examples
Difference (Positive - Negative): 0.267
T-test: t=21.331, p=0.000
Difference is statistically significant (p < 0.05)
```
## Installation
1. **Clone the repository:**
```bash
git clone <repository-url>
cd <repository-name>
```
2. **Initialize the submodules:**
```bash
git submodule update --init --remote --recursive
```
3. **Set up the Python virtual environment:**
The `setup_bioenv.sh` script creates a virtual environment named `bioenv` in your home directory (`~/bioenv`), installs PyTorch with CUDA 12.6 support, and then installs all other dependencies from `requirements.txt`.
Make the script executable:
```bash
chmod +x setup_bioenv.sh
```
Run the script:
```bash
./setup_bioenv.sh
```
If you are not on a Linux machine, you can install the requirements directly:
```bash
python -m pip install -r requirements.txt
```
4. **Activate the environment:**
Each time you want to work on this project, activate the virtual environment:
```bash
source ~/bioenv/bin/activate
```
5. **To deactivate the environment:**
```bash
deactivate
```
## Training
The primary script for training models is `training/train_dsm.py`. This script further pretrains an ESM2 checkpoint using the DSM objective (masked diffusion based on LLaDA) on a large protein sequence dataset like [OMG-prot50](https://huggingface.co/datasets/Synthyra/omg_prot50).
### Main Training Script: `train_dsm.py`
- **Base Model**: DSM models are extended from pre-trained ESM2 checkpoints (e.g., ESM2-150M, ESM2-650M).
- **Training Objective**: Masked diffusion loss, where the model predicts masked tokens. The loss is scaled by `1/(t + epsilon)` where `t` is the corruption level, penalizing errors more at low mask rates.
- **Language Modeling Head**: Uses a modified head with a soft-logit cap (`tau=30`) and output projection weights tied to the token embeddings (see the sketch after this list).
- **Data Handling**:
- Training data can be streamed from datasets like [Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50) (a version of Open MetaGenomic dataset clustered at 50% identity).
- Uses `data.dataset_classes.SequenceDatasetFromList` for validation/test sets and `data.dataset_classes.IterableDatasetFromHF` for streaming training.
- `data.data_collators.SequenceCollator` is used for batching.
- **Training Process**:
- Utilizes Hugging Face `TrainingArguments`.
- A custom `IterableTrainer` (from `training.iterable_trainer.py`) handles iterable datasets.
- Uses AdamW optimizer and a cosine learning rate scheduler with linear warmup.
- Supports logging to Weights & Biases (wandb).
- The trained model can be pushed to Hugging Face Hub.
- Example checkpoints mentioned in the paper: [DSM-150](https://huggingface.co/GleghornLab/DSM_150) (from ESM2-150M, 100k steps, batch 32, seqlen 512, LR 1e-4) and [DSM-650](https://huggingface.co/GleghornLab/DSM_650) (from ESM2-650M, 100k steps, global batch 128, seqlen 2048, LR 1e-4).
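As a concrete illustration of the loss scaling and soft-logit cap described above, here is a minimal sketch (variable names, shapes, and the `epsilon` value are assumptions; `training/train_dsm.py` is the authoritative implementation):
```python
import torch
import torch.nn.functional as F

def soft_cap(logits: torch.Tensor, tau: float = 30.0) -> torch.Tensor:
    # Soft-logit cap: smoothly squashes logits into the range (-tau, tau).
    return tau * torch.tanh(logits / tau)

def masked_diffusion_loss(logits, labels, t, epsilon=1e-3, ignore_index=-100):
    # Cross-entropy over masked positions only (labels == ignore_index elsewhere),
    # scaled by 1/(t + epsilon) so errors at low corruption levels t are penalized more.
    ce = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        ignore_index=ignore_index,
    )
    return ce / (t + epsilon)
```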
**Usage Example:**
```bash
python -m training.train_dsm \
--model_path facebook/esm2_t33_650M_UR50D \
--save_path GleghornLab/DSM_650 \
--lr 1e-4 \
--batch_size 8 \
--grad_accum 16 \
--max_steps 100000 \
--save_every 1000 \
--fp16 \
--wandb_project "DSM_Training" \
--token <your_hf_token_if_needed_for_private_repo_or_saving>
```
**Key Command-Line Arguments for `train_dsm.py`:**
* `--token`: Hugging Face token.
* `--model_path`: Path to the base ESM2 model to start from.
* `--save_path`: Path to save the trained DSM model on Hugging Face Hub.
* `--lr`: Learning rate.
* `--batch_size`: Batch size per device.
* `--grad_accum`: Gradient accumulation steps.
* `--max_steps`: Maximum training steps.
* `--wandb_project`: Wandb project name (default: `DSM`).
* `--max_length`: Maximum sequence length.
* `--save_every`: Save model and evaluate every N steps.
* `--fp16`: Enable mixed-precision training.
* `--bugfix`: Use small batch size and max length for debugging.
### Other Training Scripts (e.g., for DSM-ppi)
The `training/` directory may also contain scripts like `train_dsm_bind.py`.
- DSM-ppi (e.g., [DSM-150-ppi](https://huggingface.co/GleghornLab/DSM_150_ppi_lora), [DSM-650-ppi](https://huggingface.co/GleghornLab/DSM_650_ppi_lora)) is fine-tuned on PPI datasets.
- Training involves conditioning on a target sequence (SeqA) to generate an interactor (SeqB) using the format `[CLS]--SeqA--[EOS]--[MASKED~SeqB]--[EOS]`.
- LoRA (Low-Rank Adaptation) can be applied to attention layers for efficient fine-tuning.
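A hedged sketch of assembling one DSM-ppi training example in that format (the corruption-level sampling, helper name, and variable names are assumptions for illustration, not the repository's exact code):
```python
import random

def build_ppi_training_example(tokenizer, seq_a: str, seq_b: str, device="cpu"):
    # Sample a corruption level t, then mask each SeqB residue with probability t.
    t = random.uniform(0.05, 1.0)
    masked_b = ''.join(
        tokenizer.mask_token if random.random() < t else aa for aa in seq_b
    )
    # [CLS]--SeqA--[EOS]--[MASKED~SeqB]--[EOS]; the outer CLS/EOS come from the tokenizer.
    text = seq_a + tokenizer.eos_token + masked_b
    input_ids = tokenizer.encode(text, add_special_tokens=True, return_tensors="pt").to(device)
    return input_ids, t
```
The masked SeqB positions are the prediction targets, scored with the scaled cross-entropy sketched in the Training section above.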
`training/iterable_trainer.py` provides the `get_iterable_trainer` function used by `train_dsm.py` to enable training with iterable datasets.
## Evaluation
The repository includes a comprehensive suite for evaluating model performance, focusing on:
1. **Sequence Reconstruction (Mask Filling):**
* Evaluated by masking validation/test sets at various corruption rates (5% to 90%) and measuring cross-entropy loss, weighted F1 score, and Alignment Score (ASc) for the masked positions.
* The script `evaluation/mask_filling.py` is central to this.
2. **Unconditional Generation Quality:**
* Generate a corpus of sequences based on lengths from a reference set (e.g., validation data).
* Compare distributions (1-mers, 2-mers, 3-mers) of amino acids and predicted secondary structures between generated and natural sequences using χ² test and Jensen-Shannon (JS) divergence.
* Compare distributions of predicted functional annotations (e.g., using Annotation Vocabulary - AV terms).
* Scripts involved: `evaluation/unconditional_generation_tuning.py` (to find optimal generation parameters like temperature and step divisor `s`), `evaluation/unconditional_generation.py`, `evaluation/ss_pred.py` (using [production_ss4_model](https://huggingface.co/GleghornLab/production_ss4_model) or [production_ss9_model](https://huggingface.co/GleghornLab/production_ss9_model)), `evaluation/annotate_comparisons.py`, `evaluation/compare_distributions.py`, `evaluation/plot_distribution_comparisons.py`.
* The `run_eval_pipeline.py` script automates this workflow.
3. **Representation Quality (Model Probing):**
* Evaluate learned embeddings by training linear probes (or simple transformer blocks) on various downstream tasks (e.g., secondary structure prediction, localization prediction, etc.).
* Performance is compared against random vectors, randomized transformers, and other established pLMs.
* The assessment was done with [Protify](https://github.com/Synthyra/Protify), an open-source framework that can be used for pLM training and evaluation.
4. **Conditional Generation (Binder Design for DSM-ppi):**
* Evaluate DSM-ppi on benchmarks like BenchBB.
* Generate binders for target proteins using template-based masking strategies.
* Assess generated binders using *in-silico* tools like Synteract2 for predicted binding affinity (ppKd).
The `evaluation/` directory also contains a `readme.md` which provides further details on some evaluation workflows. Key metrics used include:
- **Alignment Score (ASc):** A normalized Needleman-Wunsch global alignment score (using BLOSUM62) to measure sequence similarity, robust to length variations. ASc(a, b) = l/(f(a, a) - f(a, b) + l).
- **Jensen-Shannon (JS) Divergence:** To compare distributions of k-mers and functional terms.
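Minimal sketches of these two metrics (illustrative; the Biopython gap penalties and the choice of `l = len(a)` for the normalization length are assumptions about details not pinned down above):
```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from Bio import Align
from Bio.Align import substitution_matrices

# Alignment Score: ASc(a, b) = l / (f(a, a) - f(a, b) + l)
aligner = Align.PairwiseAligner()
aligner.mode = "global"  # Needleman-Wunsch
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10   # assumed gap penalties
aligner.extend_gap_score = -1

def alignment_score(a: str, b: str) -> float:
    l = len(a)
    return l / (aligner.score(a, a) - aligner.score(a, b) + l)

# Jensen-Shannon divergence between two k-mer (or annotation-term) distributions.
def js_divergence(p, q) -> float:
    # scipy's jensenshannon returns the JS *distance* (square root of the divergence).
    return jensenshannon(np.asarray(p, dtype=float), np.asarray(q, dtype=float), base=2) ** 2
```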
**Running the Full Unconditional Evaluation Pipeline:**
```bash
python run_eval_pipeline.py --token YOUR_HF_TOKEN --data_dir ./evaluation_results
```
Refer to `run_eval_pipeline.py --help` for more options, such as `--skip_tuning`.
### Mask Filling Evaluation
The script `evaluation/mask_filling.py` is used to evaluate models on their ability to predict masked tokens in a sequence across various masking rates.
- **Functionality:**
- Evaluates different models (DSM, DPLM, standard ESM models).
- Tests across multiple datasets ([Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50), [GleghornLab/stringv12_modelorgs_9090](https://huggingface.co/datasets/GleghornLab/stringv12_modelorgs_9090)).
- Calculates metrics: loss, perplexity, precision, recall, F1, accuracy, MCC, and alignment score.
- Saves detailed results to CSV files.
- Can generate a summary plot comparing model performance across different mask rates using `evaluation/plot_mask_fill_results.py`.
- **Usage Example:**
```bash
python -m evaluation.mask_filling \
--token YOUR_HF_TOKEN \
--batch_size 4 \
--mask_rates 0.15 0.30 0.50 \
--data_splits valid test \
--results_dir ./results/mask_fill_custom
```
To generate a comparison plot from existing results:
```bash
python -m evaluation.mask_filling --generate_comparison_plot --results_dir ./results/mask_fill_custom --plot_output ./results/mask_fill_custom/comparison.png
```
### Other Evaluation Scripts
The `evaluation/` directory contains additional scripts for more specific analyses. These are typically run independently:
- `evaluation/all_targets_uncond.py` and `evaluation/all_targets_cond.py`: Likely for evaluating generation towards specific targets, unconditionally and conditionally.
- `evaluation/conditional_binder.py` and `evaluation/unconditional_binder.py`: Suggest evaluation focused on generating protein binders.
- `evaluation/unconditional_by_length.py`: May evaluate unconditional generation focusing on sequence length distributions.
- `evaluation/utils.py`: Utility functions for evaluation scripts.
Users should refer to individual scripts (e.g., using `python -m evaluation.<script_name> --help`) for their specific usage and arguments.
The `evaluation/` directory also contains a `readme.md` which provides further details on the unconditional generation evaluation workflow.
## Results
DSM demonstrates strong performance in both protein sequence generation and representation learning, establishing masked diffusion as a powerful paradigm.
- **Biomimetic Sequence Generation**: Unconditionally generated DSM sequences closely mimic natural protein distributions in terms of amino acid k-mers, predicted secondary structures (JS divergence < 0.01 for AA k-mers), and predicted functional annotations (AV terms, JS divergence ~0.1). This suggests DSM captures underlying biological principles.
- **Superior Sequence Reconstruction**: DSM models significantly outperform MLM-based ESM2 models in reconstructing sequences from highly corrupted inputs (up to 90% masking).
- At 90% masking, DSM achieves an Alignment Score (ASc) of ~0.27, considerably higher than random.
- DSM models show higher F1 scores in reconstruction tasks compared to DPLM models, especially at high mask rates.
- **High-Quality Embeddings**: DSM embeddings match or exceed the quality of those from comparably sized pLMs (ESM2, DPLM) and even larger autoregressive models (ProtCLM 1B) on various downstream tasks evaluated by linear probing. [DSM-650](https://huggingface.co/GleghornLab/DSM_650) generally provides the best representations among tested models of similar size.
- **Effective Binder Design (DSM-ppi):**
- DSM-ppi, fine-tuned on protein-protein interaction data, demonstrates the ability to generate protein binders conditioned on target sequences.
- On the BenchBB benchmark, DSM-generated binders (both unconditional DSM and conditional DSM-ppi) show promising predicted binding affinities, in some cases superior to known binders. For example, designs for EGFR showed high predicted pKd and good structural metrics (ipTM, pTM with AlphaFold3).
- **Efficiency**: DSM can generate realistic protein sequences from a single forward pass during reconstruction tasks at high mask rates, offering potential efficiency advantages over iterative AR or some discrete diffusion models.
These results highlight DSM's capability to unify high-quality protein representation learning and biologically coherent generative modeling within a single framework.
## Cite
```
@misc{hallee2025diffusionsequencemodelsenhanced,
title={Diffusion Sequence Models for Enhanced Protein Representation and Generation},
author={Logan Hallee and Nikolaos Rafailidis and David B. Bichara and Jason P. Gleghorn},
year={2025},
eprint={2506.08293},
archivePrefix={arXiv},
primaryClass={q-bio.BM},
url={https://arxiv.org/abs/2506.08293},
}
```
|
Synthyra/DSM_ppi_full | Synthyra | 2025-06-23T15:22:52Z | 202 | 0 | transformers | [
"transformers",
"pytorch",
"dsm",
"custom_code",
"arxiv:2506.08293",
"endpoints_compatible",
"region:us"
] | null | 2025-06-11T15:02:19Z | ---
library_name: transformers
tags: []
---
# DSM: Diffusion Models for Protein Sequence Generation
### Note: This readme is shared between our GitHub and Huggingface pages.
## Table of Contents
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [Demos](#demos)
- [Local installation](#installation)
- [Training](#training)
- [Evaluation](#evaluation)
- [Results](#results)
- [Cite](#cite)
## Introduction
DSM (Diffusion Sequence Model) is a novel Protein Language Model (pLM) developed in collaboration between the [Gleghorn Lab](https://www.gleghornlab.com/) and [Synthyra](https://synthyra.com/). It was trained with masked diffusion to enable both high-quality representation learning and generative protein design. This repository contains the code for training, evaluating, and applying DSM and its variants.
DSM is capable of generating diverse, biomimetic sequences that align with expected amino acid compositions, secondary structures, and predicted functions. Furthermore, DSM's learned representations match or exceed those of comparably sized pLMs on various downstream tasks. DSM is detailed extensively in our [preprint](https://arxiv.org/abs/2506.08293) (which is currently in review). Beyond the base and PPI variants, we are currently training versions to jointly diffuse over sequence and foldseek tokens, as well as [Annotation Vocabulary](https://www.biorxiv.org/content/10.1101/2024.07.30.605924v1) tokens. Since the preprint release, Synthyra has trained [Synthyra/DSM_ppi_full](https://huggingface.co/Synthyra/DSM_ppi_full), which forgoes the LoRA PPI training in favor of full fine-tuning. Additionally, SeqA and SeqB are jointly masked, rather than just SeqB as in the original version. We plan to add the **many** new results to the second version of the preprint and the eventual journal article.
## Models
Relevant Huggingface hosted models and datasets
- **Base DSM Models**:
- [GleghornLab/DSM_150](https://huggingface.co/GleghornLab/DSM_150) - 150M parameter DSM model
- [GleghornLab/DSM_650](https://huggingface.co/GleghornLab/DSM_650) - 650M parameter DSM model
- **DSM-ppi Models**:
(LoRA versions - results reported in paper but not recommended for real use)
- [GleghornLab/DSM_150_ppi_lora](https://huggingface.co/GleghornLab/DSM_150_ppi_lora) - 150M parameter LoRA DSM-ppi model
- [GleghornLab/DSM_650_ppi_lora](https://huggingface.co/GleghornLab/DSM_650_ppi_lora) - 650M parameter LoRA DSM-ppi model
- [GleghornLab/DSM_150_ppi_control](https://huggingface.co/GleghornLab/DSM_150_ppi_control) - Control version of LoRA DSM-ppi
(Fully finetuned - recommended for real use)
- [Synthyra/DSM_ppi_full](https://huggingface.co/Synthyra/DSM_ppi_full) - 650M parameter DSM-ppi model
- **Datasets**:
- [Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50) - Open MetaGenomic dataset clustered at 50% identity (207M sequences)
- [GleghornLab/stringv12_modelorgs_9090](https://huggingface.co/datasets/GleghornLab/stringv12_modelorgs_9090) - STRING database model organisms (653k sequences)
- **Utility Models**:
- [GleghornLab/production_ss4_model](https://huggingface.co/GleghornLab/production_ss4_model) - Secondary structure prediction (4-class)
- [GleghornLab/production_ss9_model](https://huggingface.co/GleghornLab/production_ss9_model) - Secondary structure prediction (9-class)
## Usage
This section outlines how to use a trained `DSM` model for common generation tasks. The core generation logic is provided by the `GenerateMixin` class, used by `DSM` models.
First, ensure you have a trained model (either one you trained or a pre-trained one from Hugging Face Hub) and the necessary environment set up.
```python
import torch
from models.modeling_dsm import DSM # Or DSM_ppi for binder generation
# Load a pre-trained model
model_name_or_path = "GleghornLab/DSM_650" # Replace with your model of choice
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = DSM.from_pretrained(model_name_or_path).to(device).eval()
tokenizer = model.tokenizer
```
```console
You are using a model of type esm_diff to instantiate a model of type dsm. This is not supported for all configurations of models and can yield errors.
```
This warning is normal - all good!
### 1. Unconditional Sequence Generation
To generate a novel sequence of a specific length, DSM uses a progressive denoising approach:
```python
### Unconditional generation
length = 100
mask_token = tokenizer.mask_token
# optionally, enforce starting with methionine
input_tokens = tokenizer.encode('M' + ''.join([mask_token] * (length - 1)), add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
generated_sequences = model.decode_output(output)
print(f"Generated sequence: {generated_sequences[0]}")
```
```console
Generated sequence: MFRVDALQVAQQETLAIGRSTAYDKQESPSMAQRQVLTQLAAYGGENDLRQICIPAERRNFLSIANGASYQFVEEDNEANGGYWSPHKAGLPESACKRFI
```
### 2. Mask Filling (Inpainting)
To fill in masked regions of a template sequence:
```python
# Mask Filling / Inpainting
template_sequence = "MA<mask><mask><mask>KEG<mask><mask>STL"
input_tokens = tokenizer.encode(template_sequence, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
generated_sequences = model.decode_output(output)
print(f"Generated sequence: {generated_sequences[0]}")
```
```console
Generated sequence: MAVKFKEGGISTL
```
### 3. Conditional Generation (e.g., Binders - using DSM-ppi)
```python
# from models.modeling_dsm import DSM_ppi
# model_binder = DSM_ppi.from_pretrained("GleghornLab/DSM_650_ppi_lora").to(device).eval()
# The LoRA version from the paper leads to unreliable outputs.
# Synthyra has generously trained a version through full fine-tuning.
model = DSM.from_pretrained("Synthyra/DSM_ppi_full").to(device).eval()
# BBF-14
target_seq = "MGTPLWALLGGPWRGTATYEDGTKVTLDYRYTRVSPDRLRADVTYTTPDGTTLEATVDLWKDANGVIRYHATYPDGTSADGTLTQLDADTLLATGTYDDGTKYTVTLTRVAPGSGWHHHHHH"
# For binder generation, the 'interactor' (SeqB) part is what gets generated/filled.
# Start with a fully masked interactor of desired length.
interactor_template_len = 256
interactor_template = ''.join([mask_token] * interactor_template_len)
combined_input_str = target_seq + '<eos>' + interactor_template
input_tokens = tokenizer.encode(combined_input_str, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get rilled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
target, binder = model.decode_dual_input(output, seperator='<eos>')
# decode_dual_input splits the decoded output on the separator token,
# returning the target (SeqA) and the generated binder (SeqB) separately.
print(f"Generated binder {binder[0]}")
```
```console
Generated binder HRHHHRRPTHARETEWLARMRLGIAEHQRIAVPRSDLEPDQMRERAADNQRLVKEYDQVIDHQTEGSTERLFEVLRVWEQVNTEQAHHEASAALEFGRVGYPDDEGGRAFYTQANAHKKDLVEYIGGIDEDAKWDPRIAWLMPEGGQPVKATVIGVSEERINGLKVLDDHWGRERRLWLINLFTALQAYDDPTRPTQVTLTPATDQLTNDVQYLLLSTRYTPPGVTTAVKIRKLDGRTLKVLTTEAPYVVRGATLS
```
Folded with Chai1:

`Synthyra/DSM_ppi_full` was actually trained to fill masks from any part of SeqA and SeqB. That means you can fully hallucinate plausibly interacting protein pairs.
```python
seq_a_length = 128
seq_b_length = 128
seq_a_template = ''.join([mask_token] * seq_a_length)
seq_b_template = ''.join([mask_token] * seq_b_length)
combined_input_str = seq_a_template + '<eos>' + seq_b_template
input_tokens = tokenizer.encode(combined_input_str, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=10, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
seqa, seqb = model.decode_dual_input(output, seperator='<eos>')
# decode_dual_input splits the decoded output on the separator token,
# returning the two generated sequences (SeqA and SeqB) separately.
print(f"SeqA: {seqa[0][5:]}") # remove cls token
print(f"SeqB: {seqb[0]}")
```
```console
SeqA: MVNLAKMRQRTEQNLREVSSFVKILFHTVLKFPMKINIGIHVHINMQAAQNAAADQNMQATNVIDLHNFKMGKDIGVDNKASATAHIYDEAHHTFLQLGAIKLLHAIPMIAGPVRCRLPIGFGHRFRG
SeqB: HYKNPMHSLLDSNVLHKDVVEVRLPIKIGMELDVMASAMREFLMPGTQQGDLRVIAEKRPVNKLHTYRRDLVKLLLAGAKLGTEAKSVELDLYRTELGGLVVYIININIATWDIIFAKVKICRGNDKP
```
Folded with Chai1:

## Demos
There are various demos, with many more to come. For example, `demo_dsm_ppi_full.py` (run with `python -m demos.demo_dsm_ppi_full`) performs a test on DSM-ppi.
We take 1000 protein pairs from BIOGRID (real protein-protein interactions) and 1000 from Negatome (non-interacting protein pairs) and mask the second sequence (SeqB) by 50%.
This acts as a sanity check, as we expect the accuracy on reconstructing real positive PPIs to be higher than the accuracy on non-interacting proteins.
Indeed, this is the case:
```console
==================================================
RESULTS COMPARISON
==================================================
Positive examples:
Mean accuracy: 0.495 ± 0.322
Processed: 1000 examples
Negative examples:
Mean accuracy: 0.227 ± 0.231
Processed: 1000 examples
Difference (Positive - Negative): 0.267
T-test: t=21.331, p=0.000
Difference is statistically significant (p < 0.05)
```
## Installation
1. **Clone the repository:**
```bash
git clone <repository-url>
cd <repository-name>
```
2. **Initialize the submodules:**
```bash
git submodule update --init --remote --recursive
```
3. **Set up the Python virtual environment:**
The `setup_bioenv.sh` script creates a virtual environment named `bioenv` in your home directory (`~/bioenv`), installs PyTorch with CUDA 12.6 support, and then installs all other dependencies from `requirements.txt`.
Make the script executable:
```bash
chmod +x setup_bioenv.sh
```
Run the script:
```bash
./setup_bioenv.sh
```
If you are not on a Linux machine, you can install the requirements directly:
```bash
python -m pip install -r requirements.txt
```
4. **Activate the environment:**
Each time you want to work on this project, activate the virtual environment:
```bash
source ~/bioenv/bin/activate
```
5. **To deactivate the environment:**
```bash
deactivate
```
## Training
The primary script for training models is `training/train_dsm.py`. This script further pretrains an ESM2 checkpoint using the DSM objective (masked diffusion based on LLaDA) on a large protein sequence dataset like [OMG-prot50](https://huggingface.co/datasets/Synthyra/omg_prot50).
### Main Training Script: `train_dsm.py`
- **Base Model**: DSM models are extended from pre-trained ESM2 checkpoints (e.g., ESM2-150M, ESM2-650M).
- **Training Objective**: Masked diffusion loss, where the model predicts masked tokens. The loss is scaled by `1/(t + epsilon)` where `t` is the corruption level, penalizing errors more at low mask rates.
- **Language Modeling Head**: Uses a modified head with a soft-logit cap (`tau=30`) and tied output projection weights to the token embeddings.
- **Data Handling**:
- Training data can be streamed from datasets like [Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50) (a version of Open MetaGenomic dataset clustered at 50% identity).
- Uses `data.dataset_classes.SequenceDatasetFromList` for validation/test sets and `data.dataset_classes.IterableDatasetFromHF` for streaming training.
- `data.data_collators.SequenceCollator` is used for batching.
- **Training Process**:
- Utilizes Hugging Face `TrainingArguments`.
- A custom `IterableTrainer` (from `training.iterable_trainer.py`) handles iterable datasets.
- Uses AdamW optimizer and a cosine learning rate scheduler with linear warmup.
- Supports logging to Weights & Biases (wandb).
- The trained model can be pushed to Hugging Face Hub.
- Example checkpoints mentioned in the paper: [DSM-150](https://huggingface.co/GleghornLab/DSM_150) (from ESM2-150M, 100k steps, batch 32, seqlen 512, LR 1e-4) and [DSM-650](https://huggingface.co/GleghornLab/DSM_650) (from ESM2-650M, 100k steps, global batch 128, seqlen 2048, LR 1e-4).
**Usage Example:**
```bash
python -m training.train_dsm \
--model_path facebook/esm2_t33_650M_UR50D \
--save_path GleghornLab/DSM_650 \
--lr 1e-4 \
--batch_size 8 \
--grad_accum 16 \
--max_steps 100000 \
--save_every 1000 \
--fp16 \
--wandb_project "DSM_Training" \
--token <your_hf_token_if_needed_for_private_repo_or_saving>
```
**Key Command-Line Arguments for `train_dsm.py`:**
* `--token`: Hugging Face token.
* `--model_path`: Path to the base ESM2 model to start from.
* `--save_path`: Path to save the trained DSM model on Hugging Face Hub.
* `--lr`: Learning rate.
* `--batch_size`: Batch size per device.
* `--grad_accum`: Gradient accumulation steps.
* `--max_steps`: Maximum training steps.
* `--wandb_project`: Wandb project name (default: `DSM`).
* `--max_length`: Maximum sequence length.
* `--save_every`: Save model and evaluate every N steps.
* `--fp16`: Enable mixed-precision training.
* `--bugfix`: Use small batch size and max length for debugging.
### Other Training Scripts (e.g., for DSM-ppi)
The `training/` directory may also contain scripts like `train_dsm_bind.py`.
- DSM-ppi (e.g., [DSM-150-ppi](https://huggingface.co/GleghornLab/DSM_150_ppi_lora), [DSM-650-ppi](https://huggingface.co/GleghornLab/DSM_650_ppi_lora)) is fine-tuned on PPI datasets.
- Training involves conditioning on a target sequence (SeqA) to generate an interactor (SeqB) using the format `[CLS]--SeqA--[EOS]--[MASKED~SeqB]--[EOS]`.
- LoRA (Low-Rank Adaptation) can be applied to attention layers for efficient fine-tuning.
And `training/iterable_trainer.py` provides the `get_iterable_trainer` function used by `train_dsm.py` to enable training with iterable datasets.
## Evaluation
The repository includes a comprehensive suite for evaluating model performance, focusing on:
1. **Sequence Reconstruction (Mask Filling):**
* Evaluated by masking validation/test sets at various corruption rates (5% to 90%) and measuring cross-entropy loss, weighted F1 score, and Alignment Score (ASc) for the masked positions.
* The script `evaluation/mask_filling.py` is central to this.
2. **Unconditional Generation Quality:**
* Generate a corpus of sequences based on lengths from a reference set (e.g., validation data).
* Compare distributions (1-mers, 2-mers, 3-mers) of amino acids and predicted secondary structures between generated and natural sequences using χ² test and Jensen-Shannon (JS) divergence.
* Compare distributions of predicted functional annotations (e.g., using Annotation Vocabulary - AV terms).
* Scripts involved: `evaluation/unconditional_generation_tuning.py` (to find optimal generation parameters like temperature and step divisor `s`), `evaluation/unconditional_generation.py`, `evaluation/ss_pred.py` (using [production_ss4_model](https://huggingface.co/GleghornLab/production_ss4_model) or [production_ss9_model](https://huggingface.co/GleghornLab/production_ss9_model)), `evaluation/annotate_comparisons.py`, `evaluation/compare_distributions.py`, `evaluation/plot_distribution_comparisons.py`.
* The `run_eval_pipeline.py` script automates this workflow.
3. **Representation Quality (Model Probing):**
* Evaluate learned embeddings by training linear probes (or simple transformer blocks) on various downstream tasks (e.g., secondary structure prediction, localization prediction, etc.).
* Performance is compared against random vectors, randomized transformers, and other established pLMs.
* The assessment was done with [Protify](https://github.com/Synthyra/Protify), an open-source framework that can be used for pLM training and evaluation.
4. **Conditional Generation (Binder Design for DSM-ppi):**
* Evaluate DSM-ppi on benchmarks like BenchBB.
* Generate binders for target proteins using template-based masking strategies.
* Assess generated binders using *in-silico* tools like Synteract2 for predicted binding affinity (ppKd).
The `evaluation/` directory also contains a `readme.md` which provides further details on some evaluation workflows. Key metrics used include:
- **Alignment Score (ASc):** A normalized Needleman-Wunsch global alignment score (using BLOSUM62) to measure sequence similarity, robust to length variations. ASc(a, b) = l/(f(a, a) - f(a, b) + l).
- **Jensen-Shannon (JS) Divergence:** To compare distributions of k-mers and functional terms.
**Running the Full Unconditional Evaluation Pipeline:**
```bash
python run_eval_pipeline.py --token YOUR_HF_TOKEN --data_dir ./evaluation_results
```
Refer to `run_eval_pipeline.py --help` for more options, such as `--skip_tuning`.
### Mask Filling Evaluation
The script `evaluation/mask_filling.py` is used to evaluate models on their ability to predict masked tokens in a sequence across various masking rates.
- **Functionality:**
- Evaluates different models (DSM, DPLM, standard ESM models).
- Tests across multiple datasets ([Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50), [GleghornLab/stringv12_modelorgs_9090](https://huggingface.co/datasets/GleghornLab/stringv12_modelorgs_9090)).
- Calculates metrics: loss, perplexity, precision, recall, F1, accuracy, MCC, and alignment score.
- Saves detailed results to CSV files.
- Can generate a summary plot comparing model performance across different mask rates using `evaluation/plot_mask_fill_results.py`.
- **Usage Example:**
```bash
python -m evaluation.mask_filling \
--token YOUR_HF_TOKEN \
--batch_size 4 \
--mask_rates 0.15 0.30 0.50 \
--data_splits valid test \
--results_dir ./results/mask_fill_custom
```
To generate a comparison plot from existing results:
```bash
python -m evaluation.mask_filling --generate_comparison_plot --results_dir ./results/mask_fill_custom --plot_output ./results/mask_fill_custom/comparison.png
```
### Other Evaluation Scripts
The `evaluation/` directory contains additional scripts for more specific analyses. These are typically run independently:
- `evaluation/all_targets_uncond.py` and `evaluation/all_targets_cond.py`: Likely for evaluating generation towards specific targets, unconditionally and conditionally.
- `evaluation/conditional_binder.py` and `evaluation/unconditional_binder.py`: Suggest evaluation focused on generating protein binders.
- `evaluation/unconditional_by_length.py`: May evaluate unconditional generation focusing on sequence length distributions.
- `evaluation/utils.py`: Utility functions for evaluation scripts.
Users should refer to individual scripts (e.g., using `python -m evaluation.<script_name> --help`) for their specific usage and arguments.
The `evaluation/` directory also contains a `readme.md` which provides further details on the unconditional generation evaluation workflow.
## Results
DSM demonstrates strong performance in both protein sequence generation and representation learning, establishing masked diffusion as a powerful paradigm.
- **Biomimetic Sequence Generation**: Unconditionally generated DSM sequences closely mimic natural protein distributions in terms of amino acid k-mers, predicted secondary structures (JS divergence < 0.01 for AA k-mers), and predicted functional annotations (AV terms, JS divergence ~0.1). This suggests DSM captures underlying biological principles.
- **Superior Sequence Reconstruction**: DSM models significantly outperform MLM-based ESM2 models in reconstructing sequences from highly corrupted inputs (up to 90% masking).
- At 90% masking, DSM achieves an Alignment Score (ASc) of ~0.27, considerably higher than random.
- DSM models show higher F1 scores in reconstruction tasks compared to DPLM models, especially at high mask rates.
- **High-Quality Embeddings**: DSM embeddings match or exceed the quality of those from comparably sized pLMs (ESM2, DPLM) and even larger autoregressive models (ProtCLM 1B) on various downstream tasks evaluated by linear probing. [DSM-650](https://huggingface.co/GleghornLab/DSM_650) generally provides the best representations among tested models of similar size.
- **Effective Binder Design (DSM-ppi):**
- DSM-ppi, fine-tuned on protein-protein interaction data, demonstrates the ability to generate protein binders conditioned on target sequences.
- On the BenchBB benchmark, DSM-generated binders (both unconditional DSM and conditional DSM-ppi) show promising predicted binding affinities, in some cases superior to known binders. For example, designs for EGFR showed high predicted pKd and good structural metrics (ipTM, pTM with AlphaFold3).
- **Efficiency**: DSM can generate realistic protein sequences from a single forward pass during reconstruction tasks at high mask rates, offering potential efficiency advantages over iterative AR or some discrete diffusion models.
These results highlight DSM's capability to unify high-quality protein representation learning and biologically coherent generative modeling within a single framework.
## Cite
```
@misc{hallee2025diffusionsequencemodelsenhanced,
title={Diffusion Sequence Models for Enhanced Protein Representation and Generation},
author={Logan Hallee and Nikolaos Rafailidis and David B. Bichara and Jason P. Gleghorn},
year={2025},
eprint={2506.08293},
archivePrefix={arXiv},
primaryClass={q-bio.BM},
url={https://arxiv.org/abs/2506.08293},
}
```
|
GleghornLab/DSM_650_ppi_lora | GleghornLab | 2025-06-23T15:22:51Z | 40 | 0 | transformers | [
"transformers",
"safetensors",
"esm_diff",
"custom_code",
"arxiv:2506.08293",
"endpoints_compatible",
"region:us"
] | null | 2025-05-08T19:10:47Z | ---
library_name: transformers
tags: []
---
# DSM: Diffusion Models for Protein Sequence Generation
### Note: This readme is shared between our GitHub and Huggingface pages.
## Table of Contents
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [Demos](#demos)
- [Local installation](#installation)
- [Training](#training)
- [Evaluation](#evaluation)
- [Results](#results)
- [Cite](#cite)
## Introduction
DSM (Diffusion Sequence Model) is a novel Protein Language Model (pLM) developed in collaboration between the [Gleghorn Lab](https://www.gleghornlab.com/) and [Synthyra](https://synthyra.com/). It was trained with masked diffusion to enable both high-quality representation learning and generative protein design. This repository contains the code for training, evaluating, and applying DSM and its variants.
DSM is capable of generating diverse, biomimetic sequences that align with expected amino acid compositions, secondary structures, and predicted functions. Furthermore, DSM's learned representations match or exceed those of comparably sized pLMs on various downstream tasks. DSM is detailed extensively in our [preprint](https://arxiv.org/abs/2506.08293) (which is currently in review). Beyond the base and PPI variants, we are currently training versions to jointly diffuse over sequence and foldseek tokens, as well as [Annotation Vocabulary](https://www.biorxiv.org/content/10.1101/2024.07.30.605924v1) tokens. Since the preprint release, Synthyra has trained [Synthyra/DSM_ppi_full](https://huggingface.co/Synthyra/DSM_ppi_full), which forgoes the LoRA PPI training in favor of full fine-tuning. Additionally, SeqA and SeqB are jointly masked, rather than just SeqB as in the original version. We plan to add the **many** new results to the second version of the preprint and the eventual journal article.
## Models
Relevant Huggingface hosted models and datasets
- **Base DSM Models**:
- [GleghornLab/DSM_150](https://huggingface.co/GleghornLab/DSM_150) - 150M parameter DSM model
- [GleghornLab/DSM_650](https://huggingface.co/GleghornLab/DSM_650) - 650M parameter DSM model
- **DSM-ppi Models**:
(LoRA versions - results reported in paper but not recommended for real use)
- [GleghornLab/DSM_150_ppi_lora](https://huggingface.co/GleghornLab/DSM_150_ppi_lora) - 150M parameter LoRA DSM-ppi model
- [GleghornLab/DSM_650_ppi_lora](https://huggingface.co/GleghornLab/DSM_650_ppi_lora) - 650M parameter LoRA DSM-ppi model
- [GleghornLab/DSM_150_ppi_control](https://huggingface.co/GleghornLab/DSM_150_ppi_control) - Control version of LoRA DSM-ppi
(Fully finetuned - recommended for real use)
- [Synthyra/DSM_ppi_full](https://huggingface.co/Synthyra/DSM_ppi_full) - 650M parameter DSM-ppi model
- **Datasets**:
- [Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50) - Open MetaGenomic dataset clustered at 50% identity (207M sequences)
- [GleghornLab/stringv12_modelorgs_9090](https://huggingface.co/datasets/GleghornLab/stringv12_modelorgs_9090) - STRING database model organisms (653k sequences)
- **Utility Models**:
- [GleghornLab/production_ss4_model](https://huggingface.co/GleghornLab/production_ss4_model) - Secondary structure prediction (4-class)
- [GleghornLab/production_ss9_model](https://huggingface.co/GleghornLab/production_ss9_model) - Secondary structure prediction (9-class)
## Usage
This section outlines how to use a trained `DSM` model for common generation tasks. The core generation logic is provided by the `GenerateMixin` class, used by `DSM` models.
First, ensure you have a trained model (either one you trained or a pre-trained one from Hugging Face Hub) and the necessary environment set up.
```python
import torch
from models.modeling_dsm import DSM # Or DSM_ppi for binder generation
# Load a pre-trained model
model_name_or_path = "GleghornLab/DSM_650" # Replace with your model of choice
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = DSM.from_pretrained(model_name_or_path).to(device).eval()
tokenizer = model.tokenizer
```
```console
You are using a model of type esm_diff to instantiate a model of type dsm. This is not supported for all configurations of models and can yield errors.
```
This warning is normal - all good!
### 1. Unconditional Sequence Generation
To generate a novel sequence of a specific length, DSM uses a progressive denoising approach:
```python
### Unconditional generation
length = 100
mask_token = tokenizer.mask_token
# optionally, enforce starting with methionine
input_tokens = tokenizer.encode('M' + ''.join([mask_token] * (length - 1)), add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
generated_sequences = model.decode_output(output)
print(f"Generated sequence: {generated_sequences[0]}")
```
```console
Generated sequence: MFRVDALQVAQQETLAIGRSTAYDKQESPSMAQRQVLTQLAAYGGENDLRQICIPAERRNFLSIANGASYQFVEEDNEANGGYWSPHKAGLPESACKRFI
```
### 2. Mask Filling (Inpainting)
To fill in masked regions of a template sequence:
```python
# Mask Filling / Inpainting
template_sequence = "MA<mask><mask><mask>KEG<mask><mask>STL"
input_tokens = tokenizer.encode(template_sequence, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
generated_sequences = model.decode_output(output)
print(f"Generated sequence: {generated_sequences[0]}")
```
```console
Generated sequence: MAVKFKEGGISTL
```
### 3. Conditional Generation (e.g., Binders - using DSM-ppi)
```python
# from models.modeling_dsm import DSM_ppi
# model_binder = DSM_ppi.from_pretrained("GleghornLab/DSM_650_ppi_lora").to(device).eval()
# The LoRA version from the paper leads to unreliable outputs.
# Synthyra has generously trained a version through full fine-tuning.
model = DSM.from_pretrained("Synthyra/DSM_ppi_full").to(device).eval()
# BBF-14
target_seq = "MGTPLWALLGGPWRGTATYEDGTKVTLDYRYTRVSPDRLRADVTYTTPDGTTLEATVDLWKDANGVIRYHATYPDGTSADGTLTQLDADTLLATGTYDDGTKYTVTLTRVAPGSGWHHHHHH"
# For binder generation, the 'interactor' (SeqB) part is what gets generated/filled.
# Start with a fully masked interactor of desired length.
interactor_template_len = 256
interactor_template = ''.join([mask_token] * interactor_template_len)
combined_input_str = target_seq + '<eos>' + interactor_template
input_tokens = tokenizer.encode(combined_input_str, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get rilled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
target, binder = model.decode_dual_input(output, seperator='<eos>')
# decode_dual_input splits the decoded output on the separator token,
# returning the target (SeqA) and the generated binder (SeqB) separately.
print(f"Generated binder {binder[0]}")
```
```console
Generated binder HRHHHRRPTHARETEWLARMRLGIAEHQRIAVPRSDLEPDQMRERAADNQRLVKEYDQVIDHQTEGSTERLFEVLRVWEQVNTEQAHHEASAALEFGRVGYPDDEGGRAFYTQANAHKKDLVEYIGGIDEDAKWDPRIAWLMPEGGQPVKATVIGVSEERINGLKVLDDHWGRERRLWLINLFTALQAYDDPTRPTQVTLTPATDQLTNDVQYLLLSTRYTPPGVTTAVKIRKLDGRTLKVLTTEAPYVVRGATLS
```
Folded with Chai1:

`Synthyra/DSM_ppi_full` was actually trained to fill masks from any part of SeqA and SeqB. That means you can fully hallucinate plausibly interacting protein pairs.
```python
seq_a_length = 128
seq_b_length = 128
seq_a_template = ''.join([mask_token] * seq_a_length)
seq_b_template = ''.join([mask_token] * seq_b_length)
combined_input_str = seq_a_template + '<eos>' + seq_b_template
input_tokens = tokenizer.encode(combined_input_str, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=10, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
seqa, seqb = model.decode_dual_input(output, seperator='<eos>')
# decode_dual_input splits the decoded output on the separator token,
# returning the two generated sequences (SeqA and SeqB) separately.
print(f"SeqA: {seqa[0][5:]}") # remove cls token
print(f"SeqB: {seqb[0]}")
```
```console
SeqA: MVNLAKMRQRTEQNLREVSSFVKILFHTVLKFPMKINIGIHVHINMQAAQNAAADQNMQATNVIDLHNFKMGKDIGVDNKASATAHIYDEAHHTFLQLGAIKLLHAIPMIAGPVRCRLPIGFGHRFRG
SeqB: HYKNPMHSLLDSNVLHKDVVEVRLPIKIGMELDVMASAMREFLMPGTQQGDLRVIAEKRPVNKLHTYRRDLVKLLLAGAKLGTEAKSVELDLYRTELGGLVVYIININIATWDIIFAKVKICRGNDKP
```
Folded with Chai1:

## Demos
There are various demos, with many more to come. For example, `demo_dsm_ppi_full.py` (run with `python -m demos.demo_dsm_ppi_full`) performs a test on DSM-ppi.
We take 1000 protein pairs from BIOGRID (real protein-protein interactions) and 1000 from Negatome (non-interacting protein pairs) and mask the second sequence (SeqB) by 50%.
This acts as a sanity check, as we expect the accuracy on reconstructing real positive PPIs to be higher than the accuracy on non-interacting proteins.
Indeed, this is the case:
```console
==================================================
RESULTS COMPARISON
==================================================
Positive examples:
Mean accuracy: 0.495 ± 0.322
Processed: 1000 examples
Negative examples:
Mean accuracy: 0.227 ± 0.231
Processed: 1000 examples
Difference (Positive - Negative): 0.267
T-test: t=21.331, p=0.000
Difference is statistically significant (p < 0.05)
```
## Installation
1. **Clone the repository:**
```bash
git clone <repository-url>
cd <repository-name>
```
2. **Initialize the submodules:**
```bash
git submodule update --init --remote --recursive
```
3. **Set up the Python virtual environment:**
The `setup_bioenv.sh` script creates a virtual environment named `bioenv` in your home directory (`~/bioenv`), installs PyTorch with CUDA 12.6 support, and then installs all other dependencies from `requirements.txt`.
Make the script executable:
```bash
chmod +x setup_bioenv.sh
```
Run the script:
```bash
./setup_bioenv.sh
```
If you are not on a Linux machine, you can install the requirements directly:
```bash
python -m pip install -r requirements.txt
```
4. **Activate the environment:**
Each time you want to work on this project, activate the virtual environment:
```bash
source ~/bioenv/bin/activate
```
5. **To deactivate the environment:**
```bash
deactivate
```
## Training
The primary script for training models is `training/train_dsm.py`. This script further pretrains an ESM2 checkpoint using the DSM objective (masked diffusion based on LLaDA) on a large protein sequence dataset like [OMG-prot50](https://huggingface.co/datasets/Synthyra/omg_prot50).
### Main Training Script: `train_dsm.py`
- **Base Model**: DSM models are extended from pre-trained ESM2 checkpoints (e.g., ESM2-150M, ESM2-650M).
- **Training Objective**: Masked diffusion loss, where the model predicts masked tokens. The loss is scaled by `1/(t + epsilon)` where `t` is the corruption level, penalizing errors more at low mask rates.
- **Language Modeling Head**: Uses a modified head with a soft-logit cap (`tau=30`) and tied output projection weights to the token embeddings.
- **Data Handling**:
- Training data can be streamed from datasets like [Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50) (a version of Open MetaGenomic dataset clustered at 50% identity).
- Uses `data.dataset_classes.SequenceDatasetFromList` for validation/test sets and `data.dataset_classes.IterableDatasetFromHF` for streaming training.
- `data.data_collators.SequenceCollator` is used for batching.
- **Training Process**:
- Utilizes Hugging Face `TrainingArguments`.
- A custom `IterableTrainer` (from `training.iterable_trainer.py`) handles iterable datasets.
- Uses AdamW optimizer and a cosine learning rate scheduler with linear warmup.
- Supports logging to Weights & Biases (wandb).
- The trained model can be pushed to Hugging Face Hub.
- Example checkpoints mentioned in the paper: [DSM-150](https://huggingface.co/GleghornLab/DSM_150) (from ESM2-150M, 100k steps, batch 32, seqlen 512, LR 1e-4) and [DSM-650](https://huggingface.co/GleghornLab/DSM_650) (from ESM2-650M, 100k steps, global batch 128, seqlen 2048, LR 1e-4).
**Usage Example:**
```bash
python -m training.train_dsm \
--model_path facebook/esm2_t33_650M_UR50D \
--save_path GleghornLab/DSM_650 \
--lr 1e-4 \
--batch_size 8 \
--grad_accum 16 \
--max_steps 100000 \
--save_every 1000 \
--fp16 \
--wandb_project "DSM_Training" \
--token <your_hf_token_if_needed_for_private_repo_or_saving>
```
**Key Command-Line Arguments for `train_dsm.py`:**
* `--token`: Hugging Face token.
* `--model_path`: Path to the base ESM2 model to start from.
* `--save_path`: Path to save the trained DSM model on Hugging Face Hub.
* `--lr`: Learning rate.
* `--batch_size`: Batch size per device.
* `--grad_accum`: Gradient accumulation steps.
* `--max_steps`: Maximum training steps.
* `--wandb_project`: Wandb project name (default: `DSM`).
* `--max_length`: Maximum sequence length.
* `--save_every`: Save model and evaluate every N steps.
* `--fp16`: Enable mixed-precision training.
* `--bugfix`: Use small batch size and max length for debugging.
### Other Training Scripts (e.g., for DSM-ppi)
The `training/` directory may also contain scripts like `train_dsm_bind.py`.
- DSM-ppi (e.g., [DSM-150-ppi](https://huggingface.co/GleghornLab/DSM_150_ppi_lora), [DSM-650-ppi](https://huggingface.co/GleghornLab/DSM_650_ppi_lora)) is fine-tuned on PPI datasets.
- Training involves conditioning on a target sequence (SeqA) to generate an interactor (SeqB) using the format `[CLS]--SeqA--[EOS]--[MASKED~SeqB]--[EOS]`.
- LoRA (Low-Rank Adaptation) can be applied to attention layers for efficient fine-tuning.
And `training/iterable_trainer.py` provides the `get_iterable_trainer` function used by `train_dsm.py` to enable training with iterable datasets.
## Evaluation
The repository includes a comprehensive suite for evaluating model performance, focusing on:
1. **Sequence Reconstruction (Mask Filling):**
* Evaluated by masking validation/test sets at various corruption rates (5% to 90%) and measuring cross-entropy loss, weighted F1 score, and Alignment Score (ASc) for the masked positions.
* The script `evaluation/mask_filling.py` is central to this.
2. **Unconditional Generation Quality:**
* Generate a corpus of sequences based on lengths from a reference set (e.g., validation data).
* Compare distributions (1-mers, 2-mers, 3-mers) of amino acids and predicted secondary structures between generated and natural sequences using χ² test and Jensen-Shannon (JS) divergence.
* Compare distributions of predicted functional annotations (e.g., using Annotation Vocabulary - AV terms).
* Scripts involved: `evaluation/unconditional_generation_tuning.py` (to find optimal generation parameters like temperature and step divisor `s`), `evaluation/unconditional_generation.py`, `evaluation/ss_pred.py` (using [production_ss4_model](https://huggingface.co/GleghornLab/production_ss4_model) or [production_ss9_model](https://huggingface.co/GleghornLab/production_ss9_model)), `evaluation/annotate_comparisons.py`, `evaluation/compare_distributions.py`, `evaluation/plot_distribution_comparisons.py`.
* The `run_eval_pipeline.py` script automates this workflow.
3. **Representation Quality (Model Probing):**
* Evaluate learned embeddings by training linear probes (or simple transformer blocks) on various downstream tasks (e.g., secondary structure prediction, localization prediction, etc.).
* Performance is compared against random vectors, randomized transformers, and other established pLMs.
* The assessment was done with [Protify](https://github.com/Synthyra/Protify), an open-source framework that can be used for pLM training and evaluation.
4. **Conditional Generation (Binder Design for DSM-ppi):**
* Evaluate DSM-ppi on benchmarks like BenchBB.
* Generate binders for target proteins using template-based masking strategies.
* Assess generated binders using *in-silico* tools like Synteract2 for predicted binding affinity (ppKd).
The `evaluation/` directory also contains a `readme.md` which provides further details on some evaluation workflows. Key metrics used include:
- **Alignment Score (ASc):** A normalized Needleman-Wunsch global alignment score (using BLOSUM62) to measure sequence similarity, robust to length variations. ASc(a, b) = l/(f(a, a) - f(a, b) + l).
- **Jensen-Shannon (JS) Divergence:** To compare distributions of k-mers and functional terms.
**Running the Full Unconditional Evaluation Pipeline:**
```bash
python run_eval_pipeline.py --token YOUR_HF_TOKEN --data_dir ./evaluation_results
```
Refer to `run_eval_pipeline.py --help` for more options, such as `--skip_tuning`.
### Mask Filling Evaluation
The script `evaluation/mask_filling.py` is used to evaluate models on their ability to predict masked tokens in a sequence across various masking rates.
- **Functionality:**
- Evaluates different models (DSM, DPLM, standard ESM models).
- Tests across multiple datasets ([Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50), [GleghornLab/stringv12_modelorgs_9090](https://huggingface.co/datasets/GleghornLab/stringv12_modelorgs_9090)).
- Calculates metrics: loss, perplexity, precision, recall, F1, accuracy, MCC, and alignment score.
- Saves detailed results to CSV files.
- Can generate a summary plot comparing model performance across different mask rates using `evaluation/plot_mask_fill_results.py`.
- **Usage Example:**
```bash
python -m evaluation.mask_filling \
--token YOUR_HF_TOKEN \
--batch_size 4 \
--mask_rates 0.15 0.30 0.50 \
--data_splits valid test \
--results_dir ./results/mask_fill_custom
```
To generate a comparison plot from existing results:
```bash
python -m evaluation.mask_filling --generate_comparison_plot --results_dir ./results/mask_fill_custom --plot_output ./results/mask_fill_custom/comparison.png
```
### Other Evaluation Scripts
The `evaluation/` directory contains additional scripts for more specific analyses. These are typically run independently:
- `evaluation/all_targets_uncond.py` and `evaluation/all_targets_cond.py`: Likely for evaluating generation towards specific targets, unconditionally and conditionally.
- `evaluation/conditional_binder.py` and `evaluation/unconditional_binder.py`: Suggest evaluation focused on generating protein binders.
- `evaluation/unconditional_by_length.py`: May evaluate unconditional generation focusing on sequence length distributions.
- `evaluation/utils.py`: Utility functions for evaluation scripts.
Users should refer to individual scripts (e.g., using `python -m evaluation.<script_name> --help`) for their specific usage and arguments.
The `evaluation/` directory also contains a `readme.md` which provides further details on the unconditional generation evaluation workflow.
## Results
DSM demonstrates strong performance in both protein sequence generation and representation learning, establishing masked diffusion as a powerful paradigm.
- **Biomimetic Sequence Generation**: Unconditionally generated DSM sequences closely mimic natural protein distributions in terms of amino acid k-mers, predicted secondary structures (JS divergence < 0.01 for AA k-mers), and predicted functional annotations (AV terms, JS divergence ~0.1). This suggests DSM captures underlying biological principles.
- **Superior Sequence Reconstruction**: DSM models significantly outperform MLM-based ESM2 models in reconstructing sequences from highly corrupted inputs (up to 90% masking).
- At 90% masking, DSM achieves an Alignment Score (ASc) of ~0.27, considerably higher than random.
- DSM models show higher F1 scores in reconstruction tasks compared to DPLM models, especially at high mask rates.
- **High-Quality Embeddings**: DSM embeddings match or exceed the quality of those from comparably sized pLMs (ESM2, DPLM) and even larger autoregressive models (ProtCLM 1B) on various downstream tasks evaluated by linear probing. [DSM-650](https://huggingface.co/GleghornLab/DSM_650) generally provides the best representations among tested models of similar size.
- **Effective Binder Design (DSM-ppi):**
  - DSM-ppi, fine-tuned on protein-protein interaction data, demonstrates the ability to generate protein binders conditioned on target sequences.
- On the BenchBB benchmark, DSM-generated binders (both unconditional DSM and conditional DSM-ppi) show promising predicted binding affinities, in some cases superior to known binders. For example, designs for EGFR showed high predicted pKd and good structural metrics (ipTM, pTM with AlphaFold3).
- **Efficiency**: DSM can generate realistic protein sequences from a single forward pass during reconstruction tasks at high mask rates, offering potential efficiency advantages over iterative AR or some discrete diffusion models.
These results highlight DSM's capability to unify high-quality protein representation learning and biologically coherent generative modeling within a single framework.
## Cite
```
@misc{hallee2025diffusionsequencemodelsenhanced,
title={Diffusion Sequence Models for Enhanced Protein Representation and Generation},
author={Logan Hallee and Nikolaos Rafailidis and David B. Bichara and Jason P. Gleghorn},
year={2025},
eprint={2506.08293},
archivePrefix={arXiv},
primaryClass={q-bio.BM},
url={https://arxiv.org/abs/2506.08293},
}
```
|
GleghornLab/DSM_150_ppi_lora | GleghornLab | 2025-06-23T15:22:50Z | 29 | 0 | transformers | [
"transformers",
"safetensors",
"esm_diff",
"custom_code",
"arxiv:2506.08293",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T06:55:32Z | ---
library_name: transformers
tags: []
---
# DSM: Diffusion Models for Protein Sequence Generation
### Note: This readme is shared between our GitHub and Huggingface pages.
## Table of Contents
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [Demos](#demos)
- [Local installation](#installation)
- [Training](#training)
- [Evaluation](#evaluation)
- [Results](#results)
- [Cite](#cite)
## Introduction
DSM (Diffusion Sequence Model) is a novel Protein Language Model (pLM) developed in collaboration between the [Gleghorn Lab](https://www.gleghornlab.com/) and [Synthyra](https://synthyra.com/). It was trained with masked diffusion to enable both high-quality representation learning and generative protein design. This repository contains the code for training, evaluating, and applying DSM and its variants.
DSM is capable of generating diverse, biomimetic sequences that align with expected amino acid compositions, secondary structures, and predicted functions. Furthermore, DSM's learned representations match or exceed those of comparably sized pLMs on various downstream tasks. DSM is detailed extensively in our [preprint](https://arxiv.org/abs/2506.08293) (which is currently in review). Beyond the base and PPI variants, we are currently training versions to jointly diffuse over sequence and foldseek tokens, as well as [Annotation Vocabulary](https://www.biorxiv.org/content/10.1101/2024.07.30.605924v1) tokens. Since the preprint release, Synthyra has trained [Synthyra/DSM_ppi_full](https://huggingface.co/Synthyra/DSM_ppi_full), which forgoes the LoRA PPI training in favor of full fine-tuning. Additionally, the sequences SeqA and SeqB are jointly masked, instead of just SeqB as in the original version. We plan to add the **many** new results to the second version of the preprint and eventual journal article.
## Models
Relevant Huggingface hosted models and datasets
- **Base DSM Models**:
- [GleghornLab/DSM_150](https://huggingface.co/GleghornLab/DSM_150) - 150M parameter DSM model
- [GleghornLab/DSM_650](https://huggingface.co/GleghornLab/DSM_650) - 650M parameter DSM model
- **DSM-ppi Models**:
(LoRA versions - results reported in paper but not recommended for real use)
- [GleghornLab/DSM_150_ppi_lora](https://huggingface.co/GleghornLab/DSM_150_ppi_lora) - 150M parameter LoRA DSM-ppi model
- [GleghornLab/DSM_650_ppi_lora](https://huggingface.co/GleghornLab/DSM_650_ppi_lora) - 650M parameter LoRA DSM-ppi model
- [GleghornLab/DSM_150_ppi_control](https://huggingface.co/GleghornLab/DSM_150_ppi_control) - Control version of LoRA DSM-ppi
(Fully finetuned - recommended for real use)
- [Synthyra/DSM_ppi_full](https://huggingface.co/Synthyra/DSM_ppi_full) - 650M parameter DSM-ppi model
- **Datasets**:
- [Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50) - Open MetaGenomic dataset clustered at 50% identity (207M sequences)
- [GleghornLab/stringv12_modelorgs_9090](https://huggingface.co/datasets/GleghornLab/stringv12_modelorgs_9090) - STRING database model organisms (653k sequences)
- **Utility Models**:
- [GleghornLab/production_ss4_model](https://huggingface.co/GleghornLab/production_ss4_model) - Secondary structure prediction (4-class)
- [GleghornLab/production_ss9_model](https://huggingface.co/GleghornLab/production_ss9_model) - Secondary structure prediction (9-class)
## Usage
This section outlines how to use a trained `DSM` model for common generation tasks. The core generation logic is provided by the `GenerateMixin` class, used by `DSM` models.
First, ensure you have a trained model (either one you trained or a pre-trained one from Hugging Face Hub) and the necessary environment set up.
```python
import torch
from models.modeling_dsm import DSM # Or DSM_ppi for binder generation
# Load a pre-trained model
model_name_or_path = "GleghornLab/DSM_650" # Replace with your model of choice
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = DSM.from_pretrained(model_name_or_path).to(device).eval()
tokenizer = model.tokenizer
```
```console
You are using a model of type esm_diff to instantiate a model of type dsm. This is not supported for all configurations of models and can yield errors.
```
This warning is normal - all good!
### 1. Unconditional Sequence Generation
To generate a novel sequence of a specific length, DSM uses a progressive denoising approach.
```python
### Unconditional generation
length = 100
mask_token = tokenizer.mask_token
# optionally, enforce starting with methionine
input_tokens = tokenizer.encode('M' + ''.join([mask_token] * (length - 1)), add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
generated_sequences = model.decode_output(output)
print(f"Generated sequence: {generated_sequences[0]}")
```
```console
Generated sequence: MFRVDALQVAQQETLAIGRSTAYDKQESPSMAQRQVLTQLAAYGGENDLRQICIPAERRNFLSIANGASYQFVEEDNEANGGYWSPHKAGLPESACKRFI
```
### 2. Mask Filling (Inpainting)
To fill in masked regions of a template sequence:
```python
# Mask Filling / Inpainting
template_sequence = "MA<mask><mask><mask>KEG<mask><mask>STL"
input_tokens = tokenizer.encode(template_sequence, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
generated_sequences = model.decode_output(output)
print(f"Generated sequence: {generated_sequences[0]}")
```
```console
Generated sequence: MAVKFKEGGISTL
```
### 3. Conditional Generation (e.g., Binders - using DSM-ppi)
```python
# from models.modeling_dsm import DSM_ppi
# model_binder = DSM_ppi.from_pretrained("GleghornLab/DSM_650_ppi_lora").to(device).eval()
# The lora version from the paper leads to unreliable outputs
# Synthyra has generously trained a version through full fine tuning
model = DSM.from_pretrained("Synthyra/DSM_ppi_full").to(device).eval()
# BBF-14
target_seq = "MGTPLWALLGGPWRGTATYEDGTKVTLDYRYTRVSPDRLRADVTYTTPDGTTLEATVDLWKDANGVIRYHATYPDGTSADGTLTQLDADTLLATGTYDDGTKYTVTLTRVAPGSGWHHHHHH"
# For binder generation, the 'interactor' (SeqB) part is what gets generated/filled.
# Start with a fully masked interactor of desired length.
interactor_template_len = 256
interactor_template = ''.join([mask_token] * interactor_template_len)
combined_input_str = target_seq + '<eos>' + interactor_template
input_tokens = tokenizer.encode(combined_input_str, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
target, binder = model.decode_dual_input(output, seperator='<eos>')
# Parse out the generated interactor part based on EOS tokens.
# Example: generated_full_seq_str.split(model_binder.tokenizer.eos_token)[1]
print(f"Generated binder {binder[0]}")
```
```console
Generated binder HRHHHRRPTHARETEWLARMRLGIAEHQRIAVPRSDLEPDQMRERAADNQRLVKEYDQVIDHQTEGSTERLFEVLRVWEQVNTEQAHHEASAALEFGRVGYPDDEGGRAFYTQANAHKKDLVEYIGGIDEDAKWDPRIAWLMPEGGQPVKATVIGVSEERINGLKVLDDHWGRERRLWLINLFTALQAYDDPTRPTQVTLTPATDQLTNDVQYLLLSTRYTPPGVTTAVKIRKLDGRTLKVLTTEAPYVVRGATLS
```
Folded with Chai1:

`Synthyra/DSM_ppi_full` was actually trained to fill masks from any part of SeqA and SeqB. That means you can fully hallucinate plausibly interacting protein pairs.
```python
seq_a_length = 128
seq_b_length = 128
seq_a_template = ''.join([mask_token] * seq_a_length)
seq_b_template = ''.join([mask_token] * seq_b_length)
combined_input_str = seq_a_template + '<eos>' + seq_b_template
input_tokens = tokenizer.encode(combined_input_str, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=10, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
seqa, seqb = model.decode_dual_input(output, seperator='<eos>')
# Parse out the generated interactor part based on EOS tokens.
# Example: generated_full_seq_str.split(model_binder.tokenizer.eos_token)[1]
print(f"SeqA: {seqa[0][5:]}") # remove cls token
print(f"SeqB: {seqb[0]}")
```
```console
SeqA: MVNLAKMRQRTEQNLREVSSFVKILFHTVLKFPMKINIGIHVHINMQAAQNAAADQNMQATNVIDLHNFKMGKDIGVDNKASATAHIYDEAHHTFLQLGAIKLLHAIPMIAGPVRCRLPIGFGHRFRG
SeqB: HYKNPMHSLLDSNVLHKDVVEVRLPIKIGMELDVMASAMREFLMPGTQQGDLRVIAEKRPVNKLHTYRRDLVKLLLAGAKLGTEAKSVELDLYRTELGGLVVYIININIATWDIIFAKVKICRGNDKP
```
Folded with Chai1:

## Demos
There are various demos with many more to come. For example, in `demo_dsm_ppi_full.py` (run by `python -m demos.demo_dsm_ppi_full`) we perform a test on DSM-ppi.
We take 1000 protein pairs from BIOGRID (real protein-protein interactions) and 1000 from Negatome (non-interacting protein pairs) and mask the second sequence (SeqB) by 50%.
This acts as a sanity check, as we expect the accuracy on reconstructing real positive PPIs to be higher than the accuracy on non-interacting proteins.
Indeed, this is the case:
```console
==================================================
RESULTS COMPARISON
==================================================
Positive examples:
Mean accuracy: 0.495 ± 0.322
Processed: 1000 examples
Negative examples:
Mean accuracy: 0.227 ± 0.231
Processed: 1000 examples
Difference (Positive - Negative): 0.267
T-test: t=21.331, p=0.000
Difference is statistically significant (p < 0.05)
```
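The summary statistics above can be reproduced along these lines. This is a sketch that assumes per-example reconstruction accuracies have already been collected into two lists; the actual demo script may differ.
```python
import numpy as np
from scipy.stats import ttest_ind

def compare_groups(pos_acc, neg_acc):
    # pos_acc / neg_acc: per-example reconstruction accuracies for positive and negative pairs.
    pos, neg = np.asarray(pos_acc), np.asarray(neg_acc)
    t, p = ttest_ind(pos, neg)
    print(f"Positive: {pos.mean():.3f} ± {pos.std():.3f} (n={len(pos)})")
    print(f"Negative: {neg.mean():.3f} ± {neg.std():.3f} (n={len(neg)})")
    print(f"Difference: {pos.mean() - neg.mean():.3f}, t={t:.3f}, p={p:.3f}")
```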
## Installation
1. **Clone the repository:**
```bash
git clone <repository-url>
cd <repository-name>
```
2. **Initialize the submodules:**
```bash
git submodule update --init --remote --recursive
```
3. **Set up the Python virtual environment:**
The `setup_bioenv.sh` script creates a virtual environment named `bioenv` in your home directory (`~/bioenv`), installs PyTorch with CUDA 12.6 support, and then installs all other dependencies from `requirements.txt`.
Make the script executable:
```bash
chmod +x setup_bioenv.sh
```
Run the script:
```bash
./setup_bioenv.sh
```
If you are not on a Linux machine, you can install the requirements directly:
```console
python -m pip install -r requirements.txt
```
4. **Activate the environment:**
Each time you want to work on this project, activate the virtual environment:
```bash
source ~/bioenv/bin/activate
```
5. **To deactivate the environment:**
```bash
deactivate
```
## Training
The primary script for training models is `training/train_dsm.py`. This script further pretrains an ESM2 checkpoint using the DSM objective (masked diffusion based on LLaDA) on a large protein sequence dataset like [OMG-prot50](https://huggingface.co/datasets/Synthyra/omg_prot50).
### Main Training Script: `train_dsm.py`
- **Base Model**: DSM models are extended from pre-trained ESM2 checkpoints (e.g., ESM2-150M, ESM2-650M).
- **Training Objective**: Masked diffusion loss, where the model predicts masked tokens. The loss is scaled by `1/(t + epsilon)`, where `t` is the corruption level, penalizing errors more at low mask rates (see the sketch after this list).
- **Language Modeling Head**: Uses a modified head with a soft-logit cap (`tau=30`) and tied output projection weights to the token embeddings.
- **Data Handling**:
- Training data can be streamed from datasets like [Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50) (a version of Open MetaGenomic dataset clustered at 50% identity).
- Uses `data.dataset_classes.SequenceDatasetFromList` for validation/test sets and `data.dataset_classes.IterableDatasetFromHF` for streaming training.
- `data.data_collators.SequenceCollator` is used for batching.
- **Training Process**:
- Utilizes Hugging Face `TrainingArguments`.
- A custom `IterableTrainer` (from `training.iterable_trainer.py`) handles iterable datasets.
- Uses AdamW optimizer and a cosine learning rate scheduler with linear warmup.
- Supports logging to Weights & Biases (wandb).
- The trained model can be pushed to Hugging Face Hub.
- Example checkpoints mentioned in the paper: [DSM-150](https://huggingface.co/GleghornLab/DSM_150) (from ESM2-150M, 100k steps, batch 32, seqlen 512, LR 1e-4) and [DSM-650](https://huggingface.co/GleghornLab/DSM_650) (from ESM2-650M, 100k steps, global batch 128, seqlen 2048, LR 1e-4).
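A minimal sketch of the loss scaling and soft-logit cap described above is shown below. The exact epsilon value and whether the scaling is applied per position or per batch are assumptions here, not details taken from the repository.
```python
import torch
import torch.nn.functional as F

TAU = 30.0   # soft-logit cap reported for DSM
EPS = 1e-3   # assumed epsilon; the exact value is not specified in this readme

def soft_cap(logits: torch.Tensor, tau: float = TAU) -> torch.Tensor:
    # Smoothly bounds logits to the range (-tau, tau).
    return tau * torch.tanh(logits / tau)

def dsm_loss(logits: torch.Tensor, labels: torch.Tensor, t: float) -> torch.Tensor:
    # Cross-entropy over masked positions only (labels = -100 elsewhere),
    # up-weighted at low corruption levels t by the 1/(t + eps) factor.
    ce = F.cross_entropy(soft_cap(logits).transpose(1, 2), labels, ignore_index=-100)
    return ce / (t + EPS)
```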
**Usage Example:**
```bash
python -m training.train_dsm \
--model_path facebook/esm2_t33_650M_UR50D \
--save_path GleghornLab/DSM_650 \
--lr 1e-4 \
--batch_size 8 \
--grad_accum 16 \
--max_steps 100000 \
--save_every 1000 \
--fp16 \
--wandb_project "DSM_Training" \
--token <your_hf_token_if_needed_for_private_repo_or_saving>
```
**Key Command-Line Arguments for `train_dsm.py`:**
* `--token`: Hugging Face token.
* `--model_path`: Path to the base ESM2 model to start from.
* `--save_path`: Path to save the trained DSM model on Hugging Face Hub.
* `--lr`: Learning rate.
* `--batch_size`: Batch size per device.
* `--grad_accum`: Gradient accumulation steps.
* `--max_steps`: Maximum training steps.
* `--wandb_project`: Wandb project name (default: `DSM`).
* `--max_length`: Maximum sequence length.
* `--save_every`: Save model and evaluate every N steps.
* `--fp16`: Enable mixed-precision training.
* `--bugfix`: Use small batch size and max length for debugging.
### Other Training Scripts (e.g., for DSM-ppi)
The `training/` directory may also contain scripts like `train_dsm_bind.py`.
- DSM-ppi (e.g., [DSM-150-ppi](https://huggingface.co/GleghornLab/DSM_150_ppi_lora), [DSM-650-ppi](https://huggingface.co/GleghornLab/DSM_650_ppi_lora)) is fine-tuned on PPI datasets.
- Training involves conditioning on a target sequence (SeqA) to generate an interactor (SeqB) using the format `[CLS]--SeqA--[EOS]--[MASKED~SeqB]--[EOS]`.
- LoRA (Low-Rank Adaptation) can be applied to attention layers for efficient fine-tuning.
`training/iterable_trainer.py` provides the `get_iterable_trainer` function used by `train_dsm.py` to enable training with iterable datasets.
## Evaluation
The repository includes a comprehensive suite for evaluating model performance, focusing on:
1. **Sequence Reconstruction (Mask Filling):**
* Evaluated by masking validation/test sets at various corruption rates (5% to 90%) and measuring cross-entropy loss, weighted F1 score, and Alignment Score (ASc) for the masked positions.
* The script `evaluation/mask_filling.py` is central to this.
2. **Unconditional Generation Quality:**
* Generate a corpus of sequences based on lengths from a reference set (e.g., validation data).
* Compare distributions (1-mers, 2-mers, 3-mers) of amino acids and predicted secondary structures between generated and natural sequences using χ² test and Jensen-Shannon (JS) divergence.
* Compare distributions of predicted functional annotations (e.g., using Annotation Vocabulary - AV terms).
* Scripts involved: `evaluation/unconditional_generation_tuning.py` (to find optimal generation parameters like temperature and step divisor `s`), `evaluation/unconditional_generation.py`, `evaluation/ss_pred.py` (using [production_ss4_model](https://huggingface.co/GleghornLab/production_ss4_model) or [production_ss9_model](https://huggingface.co/GleghornLab/production_ss9_model)), `evaluation/annotate_comparisons.py`, `evaluation/compare_distributions.py`, `evaluation/plot_distribution_comparisons.py`.
* The `run_eval_pipeline.py` script automates this workflow.
3. **Representation Quality (Model Probing):**
* Evaluate learned embeddings by training linear probes (or simple transformer blocks) on various downstream tasks (e.g., secondary structure prediction, localization prediction, etc.).
* Performance is compared against random vectors, randomized transformers, and other established pLMs.
* The assessment was done with [Protify](https://github.com/Synthyra/Protify), an open-source framework that can be used for pLM training and evaluation.
4. **Conditional Generation (Binder Design for DSM-ppi):**
* Evaluate DSM-ppi on benchmarks like BenchBB.
* Generate binders for target proteins using template-based masking strategies.
* Assess generated binders using *in-silico* tools like Synteract2 for predicted binding affinity (ppKd).
The `evaluation/` directory also contains a `readme.md` which provides further details on some evaluation workflows. Key metrics used include:
- **Alignment Score (ASc):** A normalized Needleman-Wunsch global alignment score (using BLOSUM62) to measure sequence similarity, robust to length variations. ASc(a, b) = l/(f(a, a) - f(a, b) + l).
- **Jensen-Shannon (JS) Divergence:** To compare distributions of k-mers and functional terms.
**Running the Full Unconditional Evaluation Pipeline:**
```bash
python run_eval_pipeline.py --token YOUR_HF_TOKEN --data_dir ./evaluation_results
```
Refer to `run_eval_pipeline.py --help` for more options, such as `--skip_tuning`.
### Mask Filling Evaluation
The script `evaluation/mask_filling.py` is used to evaluate models on their ability to predict masked tokens in a sequence across various masking rates.
- **Functionality:**
- Evaluates different models (DSM, DPLM, standard ESM models).
- Tests across multiple datasets ([Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50), [GleghornLab/stringv12_modelorgs_9090](https://huggingface.co/datasets/GleghornLab/stringv12_modelorgs_9090)).
- Calculates metrics: loss, perplexity, precision, recall, F1, accuracy, MCC, and alignment score.
- Saves detailed results to CSV files.
- Can generate a summary plot comparing model performance across different mask rates using `evaluation/plot_mask_fill_results.py`.
- **Usage Example:**
```bash
python -m evaluation.mask_filling \
--token YOUR_HF_TOKEN \
--batch_size 4 \
--mask_rates 0.15 0.30 0.50 \
--data_splits valid test \
--results_dir ./results/mask_fill_custom
```
To generate a comparison plot from existing results:
```bash
python -m evaluation.mask_filling --generate_comparison_plot --results_dir ./results/mask_fill_custom --plot_output ./results/mask_fill_custom/comparison.png
```
### Other Evaluation Scripts
The `evaluation/` directory contains additional scripts for more specific analyses. These are typically run independently:
- `evaluation/all_targets_uncond.py` and `evaluation/all_targets_cond.py`: Likely for evaluating generation towards specific targets, unconditionally and conditionally.
- `evaluation/conditional_binder.py` and `evaluation/unconditional_binder.py`: Suggest evaluation focused on generating protein binders.
- `evaluation/unconditional_by_length.py`: May evaluate unconditional generation focusing on sequence length distributions.
- `evaluation/utils.py`: Utility functions for evaluation scripts.
Users should refer to individual scripts (e.g., using `python -m evaluation.<script_name> --help`) for their specific usage and arguments.
The `evaluation/` directory also contains a `readme.md` which provides further details on the unconditional generation evaluation workflow.
## Results
DSM demonstrates strong performance in both protein sequence generation and representation learning, establishing masked diffusion as a powerful paradigm.
- **Biomimetic Sequence Generation**: Unconditionally generated DSM sequences closely mimic natural protein distributions in terms of amino acid k-mers, predicted secondary structures (JS divergence < 0.01 for AA k-mers), and predicted functional annotations (AV terms, JS divergence ~0.1). This suggests DSM captures underlying biological principles.
- **Superior Sequence Reconstruction**: DSM models significantly outperform MLM-based ESM2 models in reconstructing sequences from highly corrupted inputs (up to 90% masking).
- At 90% masking, DSM achieves an Alignment Score (ASc) of ~0.27, considerably higher than random.
- DSM models show higher F1 scores in reconstruction tasks compared to DPLM models, especially at high mask rates.
- **High-Quality Embeddings**: DSM embeddings match or exceed the quality of those from comparably sized pLMs (ESM2, DPLM) and even larger autoregressive models (ProtCLM 1B) on various downstream tasks evaluated by linear probing. [DSM-650](https://huggingface.co/GleghornLab/DSM_650) generally provides the best representations among tested models of similar size.
- **Effective Binder Design (DSM-ppi):**
  - DSM-ppi, fine-tuned on protein-protein interaction data, demonstrates the ability to generate protein binders conditioned on target sequences.
- On the BenchBB benchmark, DSM-generated binders (both unconditional DSM and conditional DSM-ppi) show promising predicted binding affinities, in some cases superior to known binders. For example, designs for EGFR showed high predicted pKd and good structural metrics (ipTM, pTM with AlphaFold3).
- **Efficiency**: DSM can generate realistic protein sequences from a single forward pass during reconstruction tasks at high mask rates, offering potential efficiency advantages over iterative AR or some discrete diffusion models.
These results highlight DSM's capability to unify high-quality protein representation learning and biologically coherent generative modeling within a single framework.
## Cite
```
@misc{hallee2025diffusionsequencemodelsenhanced,
title={Diffusion Sequence Models for Enhanced Protein Representation and Generation},
author={Logan Hallee and Nikolaos Rafailidis and David B. Bichara and Jason P. Gleghorn},
year={2025},
eprint={2506.08293},
archivePrefix={arXiv},
primaryClass={q-bio.BM},
url={https://arxiv.org/abs/2506.08293},
}
```
|
GleghornLab/DSM_150 | GleghornLab | 2025-06-23T15:22:48Z | 125 | 0 | transformers | [
"transformers",
"safetensors",
"esm_diff",
"custom_code",
"arxiv:2506.08293",
"endpoints_compatible",
"region:us"
] | null | 2025-04-18T19:06:19Z | ---
library_name: transformers
tags: []
---
# DSM: Diffusion Models for Protein Sequence Generation
### Note: This readme is shared between our GitHub and Huggingface pages.
## Table of Contents
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [Demos](#demos)
- [Local installation](#installation)
- [Training](#training)
- [Evaluation](#evaluation)
- [Results](#results)
- [Cite](#cite)
## Introduction
DSM (Diffusion Sequence Model) is a novel Protein Language Model (pLM) developed in collaboration between the [Gleghorn Lab](https://www.gleghornlab.com/) and [Synthyra](https://synthyra.com/). It was trained with masked diffusion to enable both high-quality representation learning and generative protein design. This repository contains the code for training, evaluating, and applying DSM and its variants.
DSM is capable of generating diverse, biomimetic sequences that align with expected amino acid compositions, secondary structures, and predicted functions. Furthermore, DSM's learned representations match or exceed those of comparably sized pLMs on various downstream tasks. DSM is detailed extensively in our [preprint](https://arxiv.org/abs/2506.08293) (which is currently in review). Beyond the base and PPI variants, we are currently training versions to jointly diffuse over sequence and foldseek tokens, as well as [Annotation Vocabulary](https://www.biorxiv.org/content/10.1101/2024.07.30.605924v1) tokens. Since the preprint release, Synthyra has trained [Synthyra/DSM_ppi_full](https://huggingface.co/Synthyra/DSM_ppi_full), which forgoes the LoRA PPI training in favor of full fine-tuning. Additionally, the sequences SeqA and SeqB are jointly masked, instead of just SeqB as in the original version. We plan to add the **many** new results to the second version of the preprint and eventual journal article.
## Models
Relevant Huggingface hosted models and datasets
- **Base DSM Models**:
- [GleghornLab/DSM_150](https://huggingface.co/GleghornLab/DSM_150) - 150M parameter DSM model
- [GleghornLab/DSM_650](https://huggingface.co/GleghornLab/DSM_650) - 650M parameter DSM model
- **DSM-ppi Models**:
(LoRA versions - results reported in paper but not recommended for real use)
- [GleghornLab/DSM_150_ppi_lora](https://huggingface.co/GleghornLab/DSM_150_ppi_lora) - 150M parameter LoRA DSM-ppi model
- [GleghornLab/DSM_650_ppi_lora](https://huggingface.co/GleghornLab/DSM_650_ppi_lora) - 650M parameter LoRA DSM-ppi model
- [GleghornLab/DSM_150_ppi_control](https://huggingface.co/GleghornLab/DSM_150_ppi_control) - Control version of LoRA DSM-ppi
(Fully finetuned - recommended for real use)
- [Synthyra/DSM_ppi_full](https://huggingface.co/Synthyra/DSM_ppi_full) - 650M parameter DSM-ppi model
- **Datasets**:
- [Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50) - Open MetaGenomic dataset clustered at 50% identity (207M sequences)
- [GleghornLab/stringv12_modelorgs_9090](https://huggingface.co/datasets/GleghornLab/stringv12_modelorgs_9090) - STRING database model organisms (653k sequences)
- **Utility Models**:
- [GleghornLab/production_ss4_model](https://huggingface.co/GleghornLab/production_ss4_model) - Secondary structure prediction (4-class)
- [GleghornLab/production_ss9_model](https://huggingface.co/GleghornLab/production_ss9_model) - Secondary structure prediction (9-class)
## Usage
This section outlines how to use a trained `DSM` model for common generation tasks. The core generation logic is provided by the `GenerateMixin` class, used by `DSM` models.
First, ensure you have a trained model (either one you trained or a pre-trained one from Hugging Face Hub) and the necessary environment set up.
```python
import torch
from models.modeling_dsm import DSM # Or DSM_ppi for binder generation
# Load a pre-trained model
model_name_or_path = "GleghornLab/DSM_650" # Replace with your model of choice
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = DSM.from_pretrained(model_name_or_path).to(device).eval()
tokenizer = model.tokenizer
```
```console
You are using a model of type esm_diff to instantiate a model of type dsm. This is not supported for all configurations of models and can yield errors.
```
This warning is normal - all good!
### 1. Unconditional Sequence Generation
To generate a novel sequence of a specific length, DSM uses a progressive denoising approach.
```python
### Unconditional generation
length = 100
mask_token = tokenizer.mask_token
# optionally, enforce starting with methionine
input_tokens = tokenizer.encode('M' + ''.join([mask_token] * (length - 1)), add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
generated_sequences = model.decode_output(output)
print(f"Generated sequence: {generated_sequences[0]}")
```
```console
Generated sequence: MFRVDALQVAQQETLAIGRSTAYDKQESPSMAQRQVLTQLAAYGGENDLRQICIPAERRNFLSIANGASYQFVEEDNEANGGYWSPHKAGLPESACKRFI
```
### 2. Mask Filling (Inpainting)
To fill in masked regions of a template sequence:
```python
# Mask Filling / Inpainting
template_sequence = "MA<mask><mask><mask>KEG<mask><mask>STL"
input_tokens = tokenizer.encode(template_sequence, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
generated_sequences = model.decode_output(output)
print(f"Generated sequence: {generated_sequences[0]}")
```
```console
Generated sequence: MAVKFKEGGISTL
```
### 3. Conditional Generation (e.g., Binders - using DSM-ppi)
```python
# from models.modeling_dsm import DSM_ppi
# model_binder = DSM_ppi.from_pretrained("GleghornLab/DSM_650_ppi_lora").to(device).eval()
# The lora version from the paper leads to unreliable outputs
# Synthyra has generously trained a version through full fine tuning
model = DSM.from_pretrained("Synthyra/DSM_ppi_full").to(device).eval()
# BBF-14
target_seq = "MGTPLWALLGGPWRGTATYEDGTKVTLDYRYTRVSPDRLRADVTYTTPDGTTLEATVDLWKDANGVIRYHATYPDGTSADGTLTQLDADTLLATGTYDDGTKYTVTLTRVAPGSGWHHHHHH"
# For binder generation, the 'interactor' (SeqB) part is what gets generated/filled.
# Start with a fully masked interactor of desired length.
interactor_template_len = 256
interactor_template = ''.join([mask_token] * interactor_template_len)
combined_input_str = target_seq + '<eos>' + interactor_template
input_tokens = tokenizer.encode(combined_input_str, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=100, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
target, binder = model.decode_dual_input(output, seperator='<eos>')
# Parse out the generated interactor part based on EOS tokens.
# Example: generated_full_seq_str.split(model_binder.tokenizer.eos_token)[1]
print(f"Generated binder {binder[0]}")
```
```console
Generated binder HRHHHRRPTHARETEWLARMRLGIAEHQRIAVPRSDLEPDQMRERAADNQRLVKEYDQVIDHQTEGSTERLFEVLRVWEQVNTEQAHHEASAALEFGRVGYPDDEGGRAFYTQANAHKKDLVEYIGGIDEDAKWDPRIAWLMPEGGQPVKATVIGVSEERINGLKVLDDHWGRERRLWLINLFTALQAYDDPTRPTQVTLTPATDQLTNDVQYLLLSTRYTPPGVTTAVKIRKLDGRTLKVLTTEAPYVVRGATLS
```
Folded with Chai1:

`Synthyra/DSM_ppi_full` was actually trained to fill masks from any part of SeqA and SeqB. That means you can fully hallucinate plausibly interacting protein pairs.
```python
seq_a_length = 128
seq_b_length = 128
seq_a_template = ''.join([mask_token] * seq_a_length)
seq_b_template = ''.join([mask_token] * seq_b_length)
combined_input_str = seq_a_template + '<eos>' + seq_b_template
input_tokens = tokenizer.encode(combined_input_str, add_special_tokens=True, return_tensors='pt').to(device)
output = model.mask_diffusion_generate(
tokenizer=tokenizer,
input_tokens=input_tokens,
step_divisor=10, # lower is slower but better
temperature=1.0, # sampling temperature
remasking="random", # strategy for remasking tokens not kept
preview=False, # set this to True to watch the mask tokens get filled in real time
slow=False, # adds a small delay to the real time filling (because it is usually very fast and watching carefully is hard!)
return_trajectory=False # set this to True to return the trajectory of the generation (what you watch in the preview)
) # Note: output will be a tuple if return_trajectory is True
seqa, seqb = model.decode_dual_input(output, seperator='<eos>')
# Parse out the generated interactor part based on EOS tokens.
# Example: generated_full_seq_str.split(model_binder.tokenizer.eos_token)[1]
print(f"SeqA: {seqa[0][5:]}") # remove cls token
print(f"SeqB: {seqb[0]}")
```
```console
SeqA: MVNLAKMRQRTEQNLREVSSFVKILFHTVLKFPMKINIGIHVHINMQAAQNAAADQNMQATNVIDLHNFKMGKDIGVDNKASATAHIYDEAHHTFLQLGAIKLLHAIPMIAGPVRCRLPIGFGHRFRG
SeqB: HYKNPMHSLLDSNVLHKDVVEVRLPIKIGMELDVMASAMREFLMPGTQQGDLRVIAEKRPVNKLHTYRRDLVKLLLAGAKLGTEAKSVELDLYRTELGGLVVYIININIATWDIIFAKVKICRGNDKP
```
Folded with Chai1:

## Demos
There are various demos with many more to come. For example, in `demo_dsm_ppi_full.py` (run by `python -m demos.demo_dsm_ppi_full`) we perform a test on DSM-ppi.
We take 1000 protein pairs from BIOGRID (real protein-protein interactions) and 1000 from Negatome (non-interacting protein pairs) and mask the second sequence (SeqB) by 50%.
This acts as a sanity check, as we expect the accuracy on reconstructing real positive PPIs to be higher than the accuracy on non-interacting proteins.
Indeed, this is the case:
```console
==================================================
RESULTS COMPARISON
==================================================
Positive examples:
Mean accuracy: 0.495 ± 0.322
Processed: 1000 examples
Negative examples:
Mean accuracy: 0.227 ± 0.231
Processed: 1000 examples
Difference (Positive - Negative): 0.267
T-test: t=21.331, p=0.000
Difference is statistically significant (p < 0.05)
```
## Installation
1. **Clone the repository:**
```bash
git clone <repository-url>
cd <repository-name>
```
2. **Initialize the submodules:**
```bash
git submodule update --init --remote --recursive
```
3. **Set up the Python virtual environment:**
The `setup_bioenv.sh` script creates a virtual environment named `bioenv` in your home directory (`~/bioenv`), installs PyTorch with CUDA 12.6 support, and then installs all other dependencies from `requirements.txt`.
Make the script executable:
```bash
chmod +x setup_bioenv.sh
```
Run the script:
```bash
./setup_bioenv.sh
```
If you are not on a Linux machine, you can install the requirements directly:
```console
python -m pip install -r requirements.txt
```
4. **Activate the environment:**
Each time you want to work on this project, activate the virtual environment:
```bash
source ~/bioenv/bin/activate
```
5. **To deactivate the environment:**
```bash
deactivate
```
## Training
The primary script for training models is `training/train_dsm.py`. This script further pretrains an ESM2 checkpoint using the DSM objective (masked diffusion based on LLaDA) on a large protein sequence dataset like [OMG-prot50](https://huggingface.co/datasets/Synthyra/omg_prot50).
### Main Training Script: `train_dsm.py`
- **Base Model**: DSM models are extended from pre-trained ESM2 checkpoints (e.g., ESM2-150M, ESM2-650M).
- **Training Objective**: Masked diffusion loss, where the model predicts masked tokens. The loss is scaled by `1/(t + epsilon)` where `t` is the corruption level, penalizing errors more at low mask rates.
- **Language Modeling Head**: Uses a modified head with a soft-logit cap (`tau=30`) and tied output projection weights to the token embeddings.
- **Data Handling**:
- Training data can be streamed from datasets like [Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50) (a version of Open MetaGenomic dataset clustered at 50% identity).
- Uses `data.dataset_classes.SequenceDatasetFromList` for validation/test sets and `data.dataset_classes.IterableDatasetFromHF` for streaming training.
- `data.data_collators.SequenceCollator` is used for batching.
- **Training Process**:
- Utilizes Hugging Face `TrainingArguments`.
- A custom `IterableTrainer` (from `training.iterable_trainer.py`) handles iterable datasets.
- Uses AdamW optimizer and a cosine learning rate scheduler with linear warmup.
- Supports logging to Weights & Biases (wandb).
- The trained model can be pushed to Hugging Face Hub.
- Example checkpoints mentioned in the paper: [DSM-150](https://huggingface.co/GleghornLab/DSM_150) (from ESM2-150M, 100k steps, batch 32, seqlen 512, LR 1e-4) and [DSM-650](https://huggingface.co/GleghornLab/DSM_650) (from ESM2-650M, 100k steps, global batch 128, seqlen 2048, LR 1e-4).
**Usage Example:**
```bash
python -m training.train_dsm \
--model_path facebook/esm2_t33_650M_UR50D \
--save_path GleghornLab/DSM_650 \
--lr 1e-4 \
--batch_size 8 \
--grad_accum 16 \
--max_steps 100000 \
--save_every 1000 \
--fp16 \
--wandb_project "DSM_Training" \
--token <your_hf_token_if_needed_for_private_repo_or_saving>
```
**Key Command-Line Arguments for `train_dsm.py`:**
* `--token`: Hugging Face token.
* `--model_path`: Path to the base ESM2 model to start from.
* `--save_path`: Path to save the trained DSM model on Hugging Face Hub.
* `--lr`: Learning rate.
* `--batch_size`: Batch size per device.
* `--grad_accum`: Gradient accumulation steps.
* `--max_steps`: Maximum training steps.
* `--wandb_project`: Wandb project name (default: `DSM`).
* `--max_length`: Maximum sequence length.
* `--save_every`: Save model and evaluate every N steps.
* `--fp16`: Enable mixed-precision training.
* `--bugfix`: Use small batch size and max length for debugging.
### Other Training Scripts (e.g., for DSM-ppi)
The `training/` directory may also contain scripts like `train_dsm_bind.py`.
- DSM-ppi (e.g., [DSM-150-ppi](https://huggingface.co/GleghornLab/DSM_150_ppi_lora), [DSM-650-ppi](https://huggingface.co/GleghornLab/DSM_650_ppi_lora)) is fine-tuned on PPI datasets.
- Training involves conditioning on a target sequence (SeqA) to generate an interactor (SeqB) using the format `[CLS]--SeqA--[EOS]--[MASKED~SeqB]--[EOS]`.
- LoRA (Low-Rank Adaptation) can be applied to attention layers for efficient fine-tuning.
`training/iterable_trainer.py` provides the `get_iterable_trainer` function used by `train_dsm.py` to enable training with iterable datasets.
## Evaluation
The repository includes a comprehensive suite for evaluating model performance, focusing on:
1. **Sequence Reconstruction (Mask Filling):**
* Evaluated by masking validation/test sets at various corruption rates (5% to 90%) and measuring cross-entropy loss, weighted F1 score, and Alignment Score (ASc) for the masked positions.
* The script `evaluation/mask_filling.py` is central to this.
2. **Unconditional Generation Quality:**
* Generate a corpus of sequences based on lengths from a reference set (e.g., validation data).
* Compare distributions (1-mers, 2-mers, 3-mers) of amino acids and predicted secondary structures between generated and natural sequences using χ² test and Jensen-Shannon (JS) divergence.
* Compare distributions of predicted functional annotations (e.g., using Annotation Vocabulary - AV terms).
* Scripts involved: `evaluation/unconditional_generation_tuning.py` (to find optimal generation parameters like temperature and step divisor `s`), `evaluation/unconditional_generation.py`, `evaluation/ss_pred.py` (using [production_ss4_model](https://huggingface.co/GleghornLab/production_ss4_model) or [production_ss9_model](https://huggingface.co/GleghornLab/production_ss9_model)), `evaluation/annotate_comparisons.py`, `evaluation/compare_distributions.py`, `evaluation/plot_distribution_comparisons.py`.
* The `run_eval_pipeline.py` script automates this workflow.
3. **Representation Quality (Model Probing):**
* Evaluate learned embeddings by training linear probes (or simple transformer blocks) on various downstream tasks (e.g., secondary structure prediction, localization prediction, etc.).
* Performance is compared against random vectors, randomized transformers, and other established pLMs.
* The assessment was done with [Protify](https://github.com/Synthyra/Protify), an open-source framework that can be used for pLM training and evaluation.
4. **Conditional Generation (Binder Design for DSM-ppi):**
* Evaluate DSM-ppi on benchmarks like BenchBB.
* Generate binders for target proteins using template-based masking strategies.
* Assess generated binders using *in-silico* tools like Synteract2 for predicted binding affinity (ppKd).
The `evaluation/` directory also contains a `readme.md` which provides further details on some evaluation workflows. Key metrics used include:
- **Alignment Score (ASc):** A normalized Needleman-Wunsch global alignment score (using BLOSUM62) to measure sequence similarity, robust to length variations. ASc(a, b) = l/(f(a, a) - f(a, b) + l).
- **Jensen-Shannon (JS) Divergence:** To compare distributions of k-mers and functional terms.
**Running the Full Unconditional Evaluation Pipeline:**
```bash
python run_eval_pipeline.py --token YOUR_HF_TOKEN --data_dir ./evaluation_results
```
Refer to `run_eval_pipeline.py --help` for more options, such as `--skip_tuning`.
### Mask Filling Evaluation
The script `evaluation/mask_filling.py` is used to evaluate models on their ability to predict masked tokens in a sequence across various masking rates.
- **Functionality:**
- Evaluates different models (DSM, DPLM, standard ESM models).
- Tests across multiple datasets ([Synthyra/omg_prot50](https://huggingface.co/datasets/Synthyra/omg_prot50), [GleghornLab/stringv12_modelorgs_9090](https://huggingface.co/datasets/GleghornLab/stringv12_modelorgs_9090)).
- Calculates metrics: loss, perplexity, precision, recall, F1, accuracy, MCC, and alignment score.
- Saves detailed results to CSV files.
- Can generate a summary plot comparing model performance across different mask rates using `evaluation/plot_mask_fill_results.py`.
- **Usage Example:**
```bash
python -m evaluation.mask_filling \
--token YOUR_HF_TOKEN \
--batch_size 4 \
--mask_rates 0.15 0.30 0.50 \
--data_splits valid test \
--results_dir ./results/mask_fill_custom
```
To generate a comparison plot from existing results:
```bash
python -m evaluation.mask_filling --generate_comparison_plot --results_dir ./results/mask_fill_custom --plot_output ./results/mask_fill_custom/comparison.png
```
### Other Evaluation Scripts
The `evaluation/` directory contains additional scripts for more specific analyses. These are typically run independently:
- `evaluation/all_targets_uncond.py` and `evaluation/all_targets_cond.py`: Likely for evaluating generation towards specific targets, unconditionally and conditionally.
- `evaluation/conditional_binder.py` and `evaluation/unconditional_binder.py`: Suggest evaluation focused on generating protein binders.
- `evaluation/unconditional_by_length.py`: May evaluate unconditional generation focusing on sequence length distributions.
- `evaluation/utils.py`: Utility functions for evaluation scripts.
Users should refer to individual scripts (e.g., using `python -m evaluation.<script_name> --help`) for their specific usage and arguments.
The `evaluation/` directory also contains a `readme.md` which provides further details on the unconditional generation evaluation workflow.
## Results
DSM demonstrates strong performance in both protein sequence generation and representation learning, establishing masked diffusion as a powerful paradigm.
- **Biomimetic Sequence Generation**: Unconditionally generated DSM sequences closely mimic natural protein distributions in terms of amino acid k-mers, predicted secondary structures (JS divergence < 0.01 for AA k-mers), and predicted functional annotations (AV terms, JS divergence ~0.1). This suggests DSM captures underlying biological principles.
- **Superior Sequence Reconstruction**: DSM models significantly outperform MLM-based ESM2 models in reconstructing sequences from highly corrupted inputs (up to 90% masking).
- At 90% masking, DSM achieves an Alignment Score (ASc) of ~0.27, considerably higher than random.
- DSM models show higher F1 scores in reconstruction tasks compared to DPLM models, especially at high mask rates.
- **High-Quality Embeddings**: DSM embeddings match or exceed the quality of those from comparably sized pLMs (ESM2, DPLM) and even larger autoregressive models (ProtCLM 1B) on various downstream tasks evaluated by linear probing. [DSM-650](https://huggingface.co/GleghornLab/DSM_650) generally provides the best representations among tested models of similar size.
- **Effective Binder Design (DSM-ppi):**
  - DSM-ppi, fine-tuned on protein-protein interaction data, demonstrates the ability to generate protein binders conditioned on target sequences.
- On the BenchBB benchmark, DSM-generated binders (both unconditional DSM and conditional DSM-ppi) show promising predicted binding affinities, in some cases superior to known binders. For example, designs for EGFR showed high predicted pKd and good structural metrics (ipTM, pTM with AlphaFold3).
- **Efficiency**: DSM can generate realistic protein sequences from a single forward pass during reconstruction tasks at high mask rates, offering potential efficiency advantages over iterative AR or some discrete diffusion models.
These results highlight DSM's capability to unify high-quality protein representation learning and biologically coherent generative modeling within a single framework.
## Cite
```
@misc{hallee2025diffusionsequencemodelsenhanced,
title={Diffusion Sequence Models for Enhanced Protein Representation and Generation},
author={Logan Hallee and Nikolaos Rafailidis and David B. Bichara and Jason P. Gleghorn},
year={2025},
eprint={2506.08293},
archivePrefix={arXiv},
primaryClass={q-bio.BM},
url={https://arxiv.org/abs/2506.08293},
}
```
|
blazarev/roberta-structure-hub | blazarev | 2025-06-23T15:21:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-23T15:21:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AylaEmeryIris/Jaipur.hotel.couple.viral.video | AylaEmeryIris | 2025-06-23T15:21:34Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T15:18:50Z | [](https://t-me-viral-now01.blogspot.com/2025/06/ghds.html) |
dengcao/Qwen3-Reranker-8B-GGUF | dengcao | 2025-06-23T15:20:21Z | 0 | 0 | transformers | [
"transformers",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T01:46:18Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-8B-Base
library_name: transformers
---
# <span style="color: #7FFF7F;">Qwen3-Reranker-8B GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`1caae7fc`](https://github.com/ggerganov/llama.cpp/commit/1caae7fc6c77551cb1066515e0f414713eebb367).
## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span>
Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.
### **Benchmark Context**
All tests conducted on **Llama-3-8B-Instruct** using:
- Standard perplexity evaluation pipeline
- 2048-token context window
- Same prompt set across all quantizations
### **Method**
- **Dynamic Precision Allocation**:
- First/Last 25% of layers → IQ4_XS (selected layers)
- Middle 50% → IQ2_XXS/IQ3_S (increase efficiency)
- **Critical Component Protection**:
- Embeddings/output layers use Q5_K
- Reduces error propagation by 38% vs standard 1-2bit
### **Quantization Performance Comparison (Llama-3-8B)**
| Quantization | Standard PPL | DynamicGate PPL | Δ PPL | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
|--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------|
| IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
| IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
| IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
| IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
| IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |
**Key**:
- PPL = Perplexity (lower is better)
- Δ PPL = Percentage change from standard to DynamicGate
- Speed = Inference time (CPU avx2, 2048 token context)
- Size differences reflect mixed quantization overhead
**Key Improvements:**
- 🔥 **IQ1_M** shows massive 43.9% perplexity reduction (27.46 → 15.41)
- 🚀 **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB
- ⚡ **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization
**Tradeoffs:**
- All variants have modest size increases (0.1-0.3GB)
- Inference speeds remain comparable (<5% difference)
### **When to Use These Models**
📌 **Fitting models into GPU VRAM**
✔ **Memory-constrained deployments**
✔ **CPU and edge devices** where 1-2 bit errors can be tolerated
✔ **Research** into ultra-low-bit quantization
## **Choosing the Right Model Format**
Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.
### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**
- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides **similar dynamic range** as FP32 but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device's specs).
- Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32.
📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.
📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
---
### **F16 (Float 16) – More widely supported than BF16**
- A 16-bit floating-point format with **high precision** but a smaller range of values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computations.
📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You have memory limitations.
---
### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, requires more memory.
📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce **memory footprint** while keeping reasonable accuracy.
📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
---
### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**
These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.
- **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**.
- **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
- **Trade-off**: Lower accuracy compared to higher-bit quantizations.
- **IQ3_S**: Small block size for **maximum memory efficiency**.
- **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.
- **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
- **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.
- **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
- **Use case**: Best for **low-memory devices** where **Q6_K** is too large.
- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
- **Use case**: Best for **ARM-based devices** or **low-memory environments**.
---
### **Summary Table: Model Format Selection**
| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------|------------|---------------|----------------------|---------------|
| **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments |
| **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy |
| **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
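As a quick orientation on how one of these GGUF files might be used, here is a minimal sketch with `llama-cpp-python`. This is an assumption on my part: the filename below is a placeholder for whichever quantization variant you download, and reranking still requires the prompt format shown in the usage section further down this card.

```python
# Minimal sketch, not a verified recipe: load a quantized GGUF file with llama-cpp-python.
# "Qwen3-Reranker-8B-Q4_K_M.gguf" is a hypothetical local filename; substitute the
# quantization variant (Q4_K, Q6_K, IQ3_XS, ...) that fits your hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-Reranker-8B-Q4_K_M.gguf",
    n_ctx=2048,       # context window; raise it for long documents
    n_threads=8,      # CPU threads to use
    n_gpu_layers=0,   # set > 0 to offload layers when built with GPU support
)
```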
---
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
❤ **Please click "Like" if you find this useful!**
Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Free Network Monitor](https://readyforquantum.com/dashboard/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4o-mini)
- `HugLLM` (Hugging Face open-source)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**)
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4o-mini** for:
- **Create custom cmd processors to run .net code on Free Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on Hugging Face Inference API
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encyption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Free Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Free Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Free Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊
# Qwen3-Reranker-8B
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/>
</p>
## Highlights
The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks. Building upon the dense foundational models of the Qwen3 series, it provides a comprehensive range of text embeddings and reranking models in various sizes (0.6B, 4B, and 8B). This series inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of its foundational model. The Qwen3 Embedding series represents significant advancements in multiple text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.
**Exceptional Versatility**: The embedding model has achieved state-of-the-art performance across a wide range of downstream application evaluations. The 8B size embedding model ranks No.1 in the MTEB multilingual leaderboard (as of June 5, 2025, score 70.58), while the reranking model excels in various text retrieval scenarios.
**Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios.
**Multilingual Capability**: The Qwen3 Embedding series offers support for over 100 languages, thanks to the multilingual capabilities of Qwen3 models. This includes various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities.
## Model Overview
**Qwen3-Reranker-8B** has the following features:
- Model Type: Text Reranking
- Supported Languages: 100+ Languages
- Number of Parameters: 8B
- Context Length: 32k
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-embedding/), [GitHub](https://github.com/QwenLM/Qwen3-Embedding).
## Qwen3 Embedding Series Model list
| Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware |
|------------------|----------------------|------|--------|-----------------|---------------------|-------------|----------------|
| Text Embedding | [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) | 0.6B | 28 | 32K | 1024 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B) | 4B | 36 | 32K | 2560 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-8B](https://huggingface.co/Qwen/Qwen3-Embedding-8B) | 8B | 36 | 32K | 4096 | Yes | Yes |
| Text Reranking | [Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) | 0.6B | 28 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B) | 4B | 36 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-8B](https://huggingface.co/Qwen/Qwen3-Reranker-8B) | 8B | 36 | 32K | - | - | Yes |
> **Note**:
> - `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding.
> - `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction according to different tasks.
> - Our evaluation indicates that, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. Therefore, we recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions utilized during the model training process were originally written in English.
## Usage
With Transformers versions earlier than 4.51.0, you may encounter the following error (upgrading to `transformers>=4.51.0` resolves it):
```
KeyError: 'qwen3'
```
### Transformers Usage
```python
# Requires transformers>=4.51.0
import torch
from transformers import AutoModel, AutoTokenizer, AutoModelForCausalLM
def format_instruction(instruction, query, doc):
if instruction is None:
instruction = 'Given a web search query, retrieve relevant passages that answer the query'
output = "<Instruct>: {instruction}\n<Query>: {query}\n<Document>: {doc}".format(instruction=instruction,query=query, doc=doc)
return output
def process_inputs(pairs):
inputs = tokenizer(
pairs, padding=False, truncation='longest_first',
return_attention_mask=False, max_length=max_length - len(prefix_tokens) - len(suffix_tokens)
)
for i, ele in enumerate(inputs['input_ids']):
inputs['input_ids'][i] = prefix_tokens + ele + suffix_tokens
inputs = tokenizer.pad(inputs, padding=True, return_tensors="pt", max_length=max_length)
for key in inputs:
inputs[key] = inputs[key].to(model.device)
return inputs
@torch.no_grad()
def compute_logits(inputs, **kwargs):
batch_scores = model(**inputs).logits[:, -1, :]
true_vector = batch_scores[:, token_true_id]
false_vector = batch_scores[:, token_false_id]
batch_scores = torch.stack([false_vector, true_vector], dim=1)
batch_scores = torch.nn.functional.log_softmax(batch_scores, dim=1)
scores = batch_scores[:, 1].exp().tolist()
return scores
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Reranker-8B", padding_side='left')
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-8B").eval()
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-8B", torch_dtype=torch.float16, attn_implementation="flash_attention_2").cuda().eval()
token_false_id = tokenizer.convert_tokens_to_ids("no")
token_true_id = tokenizer.convert_tokens_to_ids("yes")
max_length = 8192
prefix = "<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\".<|im_end|>\n<|im_start|>user\n"
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"
prefix_tokens = tokenizer.encode(prefix, add_special_tokens=False)
suffix_tokens = tokenizer.encode(suffix, add_special_tokens=False)
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = ["What is the capital of China?",
"Explain gravity",
]
documents = [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]
pairs = [format_instruction(task, query, doc) for query, doc in zip(queries, documents)]
# Tokenize the input texts
inputs = process_inputs(pairs)
scores = compute_logits(inputs)
print("scores: ", scores)
```
📌 **Tip**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, not using an `instruct` on the query side can lead to a drop in retrieval performance by approximately 1% to 5%.
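As an illustration of such a customized instruction, here is a small sketch that reuses the `format_instruction`, `process_inputs`, and `compute_logits` helpers defined in the usage snippet above; the task text, query, and document are made-up examples, not part of the original card.

```python
# Illustrative only: a domain-specific instruction passed through the same pipeline.
custom_task = "Given a customer support question, retrieve help-center articles that resolve it"
pairs = [
    format_instruction(
        custom_task,
        "How do I reset my password?",
        "To reset your password, open Settings > Account and choose 'Reset password'.",
    )
]
scores = compute_logits(process_inputs(pairs))
print(scores)
```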
## Evaluation
| Model | Param | MTEB-R | CMTEB-R | MMTEB-R | MLDR | MTEB-Code | FollowIR |
|------------------------------------|--------|---------|---------|---------|--------|-----------|----------|
| **Qwen3-Embedding-0.6B** | 0.6B | 61.82 | 71.02 | 64.64 | 50.26 | 75.41 | 5.09 |
| Jina-multilingual-reranker-v2-base | 0.3B | 58.22 | 63.37 | 63.73 | 39.66 | 58.98 | -0.68 |
| gte-multilingual-reranker-base | 0.3B | 59.51 | 74.08 | 59.44 | 66.33 | 54.18 | -1.64 |
| BGE-reranker-v2-m3 | 0.6B | 57.03 | 72.16 | 58.36 | 59.51 | 41.38 | -0.01 |
| **Qwen3-Reranker-0.6B** | 0.6B | 65.80 | 71.31 | 66.36 | 67.28 | 73.42 | 5.41 |
| **Qwen3-Reranker-4B** | 4B | **69.76** | 75.94 | 72.74 | 69.97 | 81.20 | **14.84** |
| **Qwen3-Reranker-8B** | 8B | 69.02 | **77.45** | **72.94** | **70.19** | **81.22** | 8.05 |
> **Note**:
> - Evaluation results for reranking models. We use the retrieval subsets of MTEB(eng, v2), MTEB(cmn, v1), MMTEB and MTEB (Code), which are MTEB-R, CMTEB-R, MMTEB-R and MTEB-Code.
> - All scores are our runs based on the top-100 candidates retrieved by dense embedding model [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3-embedding,
title = {Qwen3-Embedding},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {May},
year = {2025}
}
``` |
Alecardo/testlast-68596ed5b43fcca98eed7d70 | Alecardo | 2025-06-23T15:20:14Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-23T15:12:21Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Testlast 68596Ed5B43Fcca98Eed7D70
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Alecardo/testlast-68596ed5b43fcca98eed7d70/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Alecardo/testlast-68596ed5b43fcca98eed7d70', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Alecardo/testlast-68596ed5b43fcca98eed7d70/discussions) to add images that show off what you’ve made with this LoRA.
|
Alibaba-NLP/WebDancer-32B | Alibaba-NLP | 2025-06-23T15:15:00Z | 0 | 4 | null | [
"safetensors",
"qwen2",
"base_model:Qwen/QwQ-32B",
"base_model:finetune:Qwen/QwQ-32B",
"license:mit",
"region:us"
] | null | 2025-06-23T06:45:44Z | ---
license: mit
base_model:
- Qwen/QwQ-32B
---
- A native agentic search-and-reasoning model built on the ReAct framework, aimed at autonomous information-seeking agency and Deep Research-style behavior.
- We introduce a four-stage training paradigm comprising browsing-data construction, trajectory sampling, supervised fine-tuning for an effective cold start, and reinforcement learning for improved generalization, enabling the agent to acquire autonomous search and reasoning skills.
- Our data-centric approach integrates trajectory-level supervised fine-tuning and reinforcement learning (DAPO) to build a scalable pipeline for training agentic systems via SFT or RL.
- WebDancer achieves a Pass@3 score of 61.1% on GAIA and 54.6% on WebWalkerQA. |
leobianco/npov_RM_google_S130104_LLM_false_STRUCT_false_epo3_lr1e-3_r8_2506231509 | leobianco | 2025-06-23T15:13:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T15:09:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
IntelliGrow/ppo-SnowballTarget | IntelliGrow | 2025-06-23T15:11:52Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-06-23T15:11:48Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: IntelliGrow/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
king-001/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rangy_eager_cobra | king-001 | 2025-06-23T15:11:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am rangy eager cobra",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T18:20:53Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rangy_eager_cobra
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am rangy eager cobra
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rangy_eager_cobra
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="king-001/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rangy_eager_cobra", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
zliang1233214/xiaoyi | zliang1233214 | 2025-06-23T15:10:22Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-06-23T15:04:12Z | ---
license: apache-2.0
---
|
SPBstrike/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_hunting_toucan | SPBstrike | 2025-06-23T15:03:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pensive hunting toucan",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T11:53:17Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_hunting_toucan
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pensive hunting toucan
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_hunting_toucan
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SPBstrike/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_hunting_toucan", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
pico-lm/pico-decoder-medium | pico-lm | 2025-06-23T15:02:24Z | 1,042 | 0 | null | [
"safetensors",
"pico_decoder",
"text-generation",
"custom_code",
"en",
"dataset:pico-lm/pretokenized-dolma",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-02-20T06:45:02Z | ---
datasets:
- pico-lm/pretokenized-dolma
language:
- en
license: apache-2.0
metrics:
- pico-lm/perplexity
pipeline_tag: text-generation
---
# Pico Decoder Medium
**pico-decoder-medium** is a 181M parameter model in the `pico-decoder` suite, balancing scale and analyzability. Built with [`pico-train`](https://github.com/pico-lm) and instrumented with [`pico-analyze`](https://github.com/pico-lm), it enables detailed studies of layer-wise learning behavior during language model pretraining.
> NOTE: The `pico-decoder-medium-1` branch contains the full commit history for the training run.
## 🔧 Model Details
| Field | Value |
|---------------------|------------------------------------|
| **Architecture** | Decoder-only transformer (LLaMA-style) |
| **Parameters** | 181M |
| **Layers** | 12 |
| **Hidden Size** | 768 |
| **Feed Forward Size**| 3072 |
| **Attention Heads** | 12 |
| **Key/Value Heads** | 4 |
## 📚 Training
- **Dataset**: [`pretokenized-dolma`](https://github.com/pico-lm)
- **Training steps**: 200,000
- **Batch size**: 1024
- **Sequence length**: 2048
- **Optimizer**: AdamW
- **Learning rate schedule**: Linear decay with warmup
- **Compute**: 16 A100-SXM4-80GB GPUs
## 📈 Evaluation and Analysis
This model supports fine-grained analysis using [pico-analyze](https://github.com/pico-lm). This tool enables researchers to understand how learning unfolds over training, even at very small scales.
We also evaluate the model's perplexity on the [pico-paloma-tinsy](https://huggingface.co/datasets/pico-lm/pretokenized-paloma-tinsy) dataset.
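Because the repository is tagged `custom_code`, loading it through `transformers` presumably needs `trust_remote_code=True`. The sketch below is an assumption on that basis rather than a documented recipe; check the pico-train documentation if it fails.

```python
# Hypothetical loading sketch; verify against the pico-train documentation.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "pico-lm/pico-decoder-medium",
    trust_remote_code=True,  # the repo ships a custom pico_decoder implementation
)
print(sum(p.numel() for p in model.parameters()))  # should be roughly 181M
```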
## 📄 Citation
```bibtex
@software{pico2025,
author = {Diehl Martinez, Richard},
title = {Pico: A Lightweight Framework for Studying Language Model Learning Dynamics},
year = {2025},
url = {https://github.com/pico-lm}
}
```
|
Bedru/test_whisper_v3_finetuning_mozilla | Bedru | 2025-06-23T15:00:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-23T15:00:47Z | ---
base_model: OpenAI/whisper-large-v3
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
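Since the card leaves this section open, here is only a hedged sketch: the front matter names `OpenAI/whisper-large-v3` as the base model and PEFT as the library, so the adapter can presumably be attached as shown below. This is an unverified assumption, and the processor setup may differ for your data.

```python
# Hypothetical sketch: attach this PEFT adapter to the base Whisper model.
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "Bedru/test_whisper_v3_finetuning_mozilla")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
```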
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
xsvmmy/finetune_qwen | xsvmmy | 2025-06-23T14:59:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-23T14:54:51Z | ---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** xsvmmy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
En1gma02/Parler-TTS-Mini-v0.1-Indian-Male-Accent-Hindi-Kaggle | En1gma02 | 2025-06-23T14:58:08Z | 12 | 1 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"Text-to-Speech",
"arxiv:2506.16310",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-21T21:52:34Z | ---
library_name: transformers
tags:
- Text-to-Speech
- arxiv:2506.16310
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
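In the absence of card-specific instructions, a usage sketch based on the upstream Parler-TTS Mini API may help. It assumes the fine-tune keeps the standard interface; the Hindi prompt and the speaker description below are placeholders.

```python
# Hypothetical sketch following the standard Parler-TTS usage pattern.
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer

repo = "En1gma02/Parler-TTS-Mini-v0.1-Indian-Male-Accent-Hindi-Kaggle"
model = ParlerTTSForConditionalGeneration.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

prompt = "नमस्ते, आज मौसम बहुत अच्छा है।"  # placeholder text to synthesize
description = "A male speaker with an Indian accent speaks at a moderate pace."  # placeholder

input_ids = tokenizer(description, return_tensors="pt").input_ids
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids

audio = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
sf.write("sample.wav", audio.cpu().numpy().squeeze(), model.config.sampling_rate)
```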
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
En1gma02/Parler-TTS-Mini-v0.1-Indian-Accent-Kaggle | En1gma02 | 2025-06-23T14:57:47Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"Text-to-Speech",
"arxiv:2506.16310",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-21T21:02:48Z | ---
library_name: transformers
tags:
- Text-to-Speech
- arxiv:2506.16310
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zveryonak/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fleecy_agile_gecko | zveryonak | 2025-06-23T14:57:22Z | 40 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am fleecy agile gecko",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T01:29:46Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fleecy_agile_gecko
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am fleecy agile gecko
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fleecy_agile_gecko
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zveryonak/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fleecy_agile_gecko", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
En1gma02/Parler-TTS-Mini-v1-English-Emotions | En1gma02 | 2025-06-23T14:56:27Z | 53 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"Text-to-Speech",
"arxiv:2506.16310",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-24T18:21:47Z | ---
library_name: transformers
tags:
- Text-to-Speech
- arxiv:2506.16310
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-3-seed-7-2025-06-23 | morturr | 2025-06-23T14:54:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-23T14:54:04Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-3-seed-7-2025-06-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-dadjokes-comb-3-seed-7-2025-06-23
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
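As a hedged illustration only, the hyperparameters above would map onto a TRL SFT run with a PEFT LoRA adapter roughly as follows; the LoRA settings and the dataset are assumptions, since neither is documented in this card:
```python
# Hedged sketch: wiring the hyperparameters above into TRL's SFTTrainer.
# The LoRA rank/alpha and the training file are assumptions.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder data

peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")  # assumed values

args = SFTConfig(
    output_dir="llama2-7b-dadjokes-lora",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,   # 16 * 4 = 64 effective batch size
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=7,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```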
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
kunal-kk/ift-llama32_1b-maha-defParams | kunal-kk | 2025-06-23T14:53:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T14:53:30Z | ---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kunal-kk
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
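A minimal inference sketch with Unsloth is shown below; the sequence length, 4-bit loading, and example prompt are assumptions, and the repo id is used as-is:
```python
# Hedged sketch: loading this checkpoint for inference with Unsloth.
# max_seq_length, load_in_4bit, and the prompt are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="kunal-kk/ift-llama32_1b-maha-defParams",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable faster inference kernels

messages = [{"role": "user", "content": "Summarise the key clauses of a rental agreement."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids=inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```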
|
johngreendr1/3a3b5849-36b2-4f81-9d35-6962a5a92f76 | johngreendr1 | 2025-06-23T14:52:59Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2025-06-23T14:52:46Z | ---
base_model: mistralai/Mistral-7B-v0.1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
Alkimia/anything_v3_inpainting | Alkimia | 2025-06-23T14:52:04Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-06-23T14:39:51Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
danelcsb/sam2.1_hiera_large | danelcsb | 2025-06-23T14:48:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"sam2",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T14:48:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Savyasaachin/deepseek-coder-6.7b-instruct-Q5_K_M-GGUF | Savyasaachin | 2025-06-23T14:48:10Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:quantized:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-23T14:47:50Z | ---
license: other
license_name: deepseek
license_link: LICENSE
tags:
- llama-cpp
- gguf-my-repo
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
---
# Savyasaachin/deepseek-coder-6.7b-instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`deepseek-ai/deepseek-coder-6.7b-instruct`](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Savyasaachin/deepseek-coder-6.7b-instruct-Q5_K_M-GGUF --hf-file deepseek-coder-6.7b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Savyasaachin/deepseek-coder-6.7b-instruct-Q5_K_M-GGUF --hf-file deepseek-coder-6.7b-instruct-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Savyasaachin/deepseek-coder-6.7b-instruct-Q5_K_M-GGUF --hf-file deepseek-coder-6.7b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Savyasaachin/deepseek-coder-6.7b-instruct-Q5_K_M-GGUF --hf-file deepseek-coder-6.7b-instruct-q5_k_m.gguf -c 2048
```
|
gsarch/ViGoRL-Multiturn-MCTS-SFT-7b-Visual-Search | gsarch | 2025-06-23T14:47:24Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:2505.23678",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-13T21:26:27Z | ---
library_name: transformers
pipeline_tag: image-text-to-text
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
---
# ViGoRL: Visually Grounded Reinforcement Learning for Visual Reasoning
This model card describes the ViGoRL (**Vi**sually **G**r**o**unded **R**einforcement **L**earning) model, introduced in our paper ["Grounded Reinforcement Learning for Visual Reasoning"](https://arxiv.org/abs/2505.23678).
**Authors:** Gabriel Sarch, Snigdha Saha, Naitik Khandelwal, Ayush Jain, Michael J. Tarr, Aviral Kumar, Katerina Fragkiadaki
---
## Model Overview
ViGoRL is a vision-language model fine-tuned using reinforcement learning (RL) to explicitly anchor textual reasoning steps to visual coordinates. Inspired by human visual cognition, ViGoRL employs multi-turn visual grounding, dynamically zooming into image regions to perform fine-grained visual reasoning and grounding.
This model was trained using supervised fine-tuning (SFT) on visually-grounded reasoning traces generated via Monte Carlo Tree Search (MCTS), followed by reinforcement learning with Group Relative Policy Optimization (GRPO).
---
## Model Details
* **Base Architecture:** Qwen2.5-Vision-Language (3B or 7B parameters)
* **Training Paradigm:**
* Supervised Fine-Tuning on MCTS-generated reasoning traces
* Group Relative Policy Optimization (GRPO)
* Multi-turn visual grounding with dynamic zoom-in feedback (if "Multiturn" appears in name)
---
## Use Cases
This model excels in visual reasoning tasks that require precise visual grounding and region-level reasoning. Please see model name for specific domain.
* **Spatial Reasoning:** SAT-2, BLINK, RoboSpatial
* **Visual Search:** V\*Bench
* **Web Interaction and Grounding:** ScreenSpot (Pro and V2), VisualWebArena
---
## Usage
You can load this model easily using Hugging Face's Transformers library:
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
# # default: Load the model on the available device(s)
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
# "gsarch/ViGoRL-Multiturn-MCTS-SFT-7b-Visual-Search", torch_dtype="auto", device_map="auto"
# ) # replace with any of the ViGoRL models
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"gsarch/ViGoRL-Multiturn-MCTS-SFT-7b-Visual-Search",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map="auto",
)
# default processor
processor = AutoProcessor.from_pretrained("gsarch/ViGoRL-Multiturn-MCTS-SFT-7b-Visual-Search")
# The default range for the number of visual tokens per image in the model is 4-16384.
# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("gsarch/ViGoRL-Multiturn-MCTS-SFT-7b-Visual-Search", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "path/to/image.png",
},
{"type": "text", "text": "QUERY HERE"},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text) # this will output a single tool call turn of the model if version is multiturn.
```
**Important**: This model requires a system prompt for proper usage. Please see the model's chat template for details.
---
## Datasets and Training Data
Training datasets and generated reasoning chains are publicly available:
* [Code](https://github.com/Gabesarch/grounded-rl)
* [ViGoRL Datasets on Hugging Face](https://huggingface.co/datasets/gsarch/vigorl_datasets)
---
## Citation
If you use ViGoRL in your research or applications, please cite our paper:
```bibtex
@article{sarch2025vigorl,
title={Grounded Reinforcement Learning for Visual Reasoning},
author={Sarch, Gabriel and Saha, Snigdha and Khandelwal, Naitik and Jain, Ayush and Tarr, Michael J and Kumar, Aviral and Fragkiadaki, Katerina},
year={2025}
}
```
---
## Contact
For questions, feedback, or collaborations, please reach out to Gabriel Sarch or open an issue in our [GitHub repository](https://github.com/Gabesarch/grounded-rl).
--- |
Hectore/tshirt_design | Hectore | 2025-06-23T14:47:13Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] | text-to-image | 2025-06-23T14:47:03Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "UNICODE\0\0f\0l\0o\0a\0t\0i\0n\0g\0 \0s\0e\0r\0e\0n\0e\0l\0y\0 \0a\0m\0o\0n\0g\0s\0t\0 \0s\0w\0i\0r\0l\0i\0n\0g\0 \0g\0a\0l\0a\0x\0i\0e\0s\0 \0a\0n\0d\0 \0s\0h\0i\0m\0m\0e\0r\0i\0n\0g\0 \0c\0o\0n\0s\0t\0e\0l\0l\0a\0t\0i\0o\0n\0s\0.\0 \0T\0h\0e\0 \0a\0s\0t\0r\0o\0n\0a\0u\0t\0 \0i\0s\0 \0h\0o\0l\0d\0i\0n\0g\0 \0a\0 \0s\0m\0a\0l\0l\0 \0t\0e\0l\0e\0s\0c\0o\0p\0e\0 \0p\0o\0i\0n\0t\0e\0d\0 \0t\0o\0w\0a\0r\0d\0 \0a\0 \0v\0i\0b\0r\0a\0n\0t\0 \0n\0e\0b\0u\0l\0a\0,\0 \0a\0n\0d\0 \0f\0o\0l\0i\0a\0g\0e\0 \0p\0r\0o\0v\0i\0d\0i\0n\0g\0 \0a\0 \0s\0t\0r\0i\0k\0i\0n\0g\0 \0c\0o\0n\0t\0r\0a\0s\0t\0,\0 \0t\0h\0e\0 \0f\0r\0a\0c\0t\0u\0r\0e\0d\0 \0f\0o\0n\0t\0 \0s\0p\0e\0l\0l\0s\0 \0o\0u\0t\0 \x1C\0V\0E\0N\0O\0M \x1D\0 \0w\0i\0t\0h\0 \0s\0c\0a\0t\0t\0e\0r\0e\0d\0 \0d\0e\0b\0r\0i\0s\0,\0 \0s\0y\0m\0b\0o\0l\0i\0z\0i\0n\0g\0 \0t\0h\0e\0 \0w\0a\0r\0r\0i\0o\0r\0'\0s\0 \0i\0n\0d\0o\0m\0i\0t\0a\0b\0l\0e\0 \0s\0p\0i\0r\0i\0t\0.\0 \0T\0h\0e\0 \0E\0n\0g\0l\0i\0s\0h\0 \0w\0o\0r\0d\0 \0\"\0S\0A\0M\0U\0R\0A\0I\0\"\0 \0i\0s\0 \0b\0o\0l\0d\0l\0y\0 \0i\0n\0s\0c\0r\0i\0b\0e\0d\0 \0b\0e\0l\0o\0w\0,\0 \0s\0u\0b\0t\0l\0y\0 \0s\0p\0r\0i\0n\0k\0l\0e\0d\0 \0w\0i\0t\0h\0 \0t\0i\0n\0y\0 \0w\0h\0i\0t\0e\0 \0s\0t\0a\0r\0s\0 \0t\0o\0 \0e\0n\0h\0a\0n\0c\0e\0 \0t\0h\0e\0 \0c\0e\0l\0e\0s\0t\0i\0a\0l\0 \0t\0h\0e\0m\0e\0.\0,\0 \0a\0n\0t\0i\0q\0u\0e\0 \0g\0o\0l\0d\0 \0a\0c\0r\0o\0s\0s\0 \0t\0h\0e\0 \0c\0h\0e\0s\0t\0.\0 \0B\0e\0l\0o\0w\0 \0t\0h\0e\0 \0t\0e\0x\0t\0,\0 \0w\0h\0i\0l\0e\0 \0\"\0R\0e\0t\0r\0o\0 \0V\0i\0b\0e\0s\0\"\0 \0i\0s\0 \0d\0i\0s\0p\0l\0a\0y\0e\0d\0 \0b\0e\0l\0o\0w\0 \0i\0n\0 \0a\0 \0s\0m\0a\0l\0l\0e\0r\0,\0 \0w\0i\0t\0h\0 \0a\0 \0t\0r\0a\0i\0l\0 \0o\0f\0 \0t\0w\0i\0n\0k\0l\0i\0n\0g\0 \0s\0t\0a\0r\0s\0 \0e\0m\0a\0n\0a\0t\0i\0n\0g\0 \0f\0r\0o\0m\0 \0t\0h\0e\0 \0l\0e\0n\0s\0.\0 \0T\0h\0e\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0 \0c\0o\0n\0s\0i\0s\0t\0s\0 \0o\0f\0 \0a\0 \0d\0e\0e\0p\0,\0 \0f\0i\0e\0r\0y\0-\0i\0n\0s\0p\0i\0r\0e\0d\0 \0f\0o\0n\0t\0 \0b\0e\0l\0o\0w\0 \0h\0i\0m\0,\0 \0c\0i\0r\0c\0u\0i\0t\0 \0l\0i\0n\0e\0s\0"
output:
url: images/Tshirt_style_e000005_02_20250623140545.png
- text: "UNICODE\0\0t\0h\0e\0 \0t\0i\0l\0t\0e\0d\0 \0U\0F\0O\0 \0g\0l\0o\0w\0s\0 \0w\0i\0t\0h\0 \0a\0 \0v\0i\0b\0r\0a\0n\0t\0 \0n\0e\0o\0n\0 \0g\0r\0e\0e\0n\0 \0l\0i\0g\0h\0t\0,\0 \0d\0r\0a\0m\0a\0t\0i\0c\0 \0s\0h\0a\0d\0o\0w\0s\0 \0a\0n\0d\0 \0e\0n\0h\0a\0n\0c\0i\0n\0g\0 \0t\0h\0e\0 \0v\0i\0v\0i\0d\0,\0 \0a\0l\0l\0 \0s\0e\0t\0 \0a\0g\0a\0i\0n\0s\0t\0 \0a\0 \0s\0t\0a\0r\0k\0 \0b\0l\0a\0c\0k\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0 \0w\0i\0t\0h\0 \0s\0u\0b\0t\0l\0e\0 \0d\0r\0i\0p\0p\0i\0n\0g\0 \0p\0a\0i\0n\0t\0 \0e\0f\0f\0e\0c\0t\0s\0 \0t\0o\0 \0e\0v\0o\0k\0e\0 \0a\0 \0m\0a\0g\0i\0c\0a\0l\0 \0c\0o\0t\0t\0a\0g\0e\0c\0o\0r\0e\0 \0a\0e\0s\0t\0h\0e\0t\0i\0c\0.\0 \0S\0o\0f\0t\0,\0 \0a\0n\0d\0 \0c\0o\0u\0n\0t\0l\0e\0s\0s\0 \0t\0i\0n\0y\0 \0s\0p\0a\0r\0k\0l\0e\0s\0 \0a\0n\0d\0 \0s\0t\0a\0r\0s\0 \0d\0a\0n\0c\0e\0 \0a\0r\0o\0u\0n\0d\0 \0t\0h\0e\0 \0p\0h\0r\0a\0s\0e\0,\0 \0b\0l\0o\0c\0k\0i\0e\0r\0 \0f\0o\0n\0t\0.\0 \0A\0 \0b\0a\0c\0k\0d\0r\0o\0p\0 \0o\0f\0 \0s\0c\0a\0t\0t\0e\0r\0e\0d\0 \0s\0t\0a\0r\0s\0 \0a\0n\0d\0 \0s\0m\0a\0l\0l\0 \0d\0o\0t\0s\0 \0e\0n\0h\0a\0n\0c\0e\0s\0 \0t\0h\0e\0 \0v\0i\0n\0t\0a\0g\0e\0 \0a\0e\0s\0t\0h\0e\0t\0i\0c\0,\0 \0h\0i\0g\0h\0l\0i\0g\0h\0t\0i\0n\0g\0 \0i\0t\0s\0 \0m\0u\0s\0c\0u\0l\0a\0r\0 \0f\0o\0r\0m\0 \0a\0n\0d\0 \0t\0h\0e\0 \0d\0y\0n\0a\0m\0i\0c\0 \0c\0u\0r\0v\0e\0 \0o\0f\0 \0i\0t\0s\0 \0b\0o\0d\0y\0 \0a\0s\0 \0i\0t\0 \0p\0r\0e\0p\0a\0r\0e\0s\0 \0t\0o\0 \0u\0n\0l\0e\0a\0s\0h\0 \0i\0t\0s\0 \0a\0t\0t\0a\0c\0k\0,\0 \0a\0n\0d\0 \0s\0p\0l\0i\0n\0t\0e\0r\0e\0d\0 \0f\0o\0r\0m\0s\0,\0 \0c\0r\0e\0a\0t\0i\0n\0g\0 \0a\0n\0 \0a\0g\0g\0r\0e\0s\0s\0i\0v\0e\0 \0v\0i\0s\0u\0a\0l\0 \0t\0e\0x\0t\0u\0r\0e\0 \0a\0g\0a\0i\0n\0s\0t\0 \0a\0 \0d\0e\0e\0p\0,\0 \0c\0r\0e\0a\0t\0i\0n\0g\0 \0a\0 \0n\0o\0s\0t\0a\0l\0g\0i\0c\0 \0a\0n\0d\0 \0e\0m\0p\0o\0w\0e\0r\0i\0n\0g\0 \0v\0i\0s\0u\0a\0l\0 \0s\0t\0a\0t\0e\0m\0e\0n\0t\0.\0,\0 \0T\0S\0H\0I\0R\0T\0D\0E\0S\0I\0G\0N\0.\0 \0 \0a\0 \0l\0o\0g\0o\0 \0f\0o\0r\0 \0a\0 \0t\0a\0t\0t\0o\0o\0 \0s\0t\0u\0d\0i\0o\0"
output:
url: images/Tshirt_style_e000005_01_20250623140535.png
- text: "UNICODE\0\0c\0r\0e\0a\0t\0i\0n\0g\0 \0a\0 \0d\0r\0a\0m\0a\0t\0i\0c\0 \0a\0n\0d\0 \0e\0y\0e\0-\0c\0a\0t\0c\0h\0i\0n\0g\0 \0e\0f\0f\0e\0c\0t\0.\0 \0B\0e\0h\0i\0n\0d\0 \0t\0h\0e\0 \0l\0e\0t\0t\0e\0r\0i\0n\0g\0 \0s\0t\0a\0n\0d\0s\0 \0a\0 \0f\0i\0e\0r\0c\0e\0,\0 \0i\0s\0o\0l\0a\0t\0e\0d\0 \0o\0n\0 \0a\0 \0b\0l\0a\0c\0k\0 \0b\0a\0c\0k\0d\0r\0o\0p\0.\0,\0 \0d\0r\0e\0a\0m\0l\0i\0k\0e\0 \0g\0l\0o\0w\0 \0a\0g\0a\0i\0n\0s\0t\0 \0a\0 \0b\0a\0c\0k\0d\0r\0o\0p\0 \0o\0f\0 \0s\0w\0i\0r\0l\0i\0n\0g\0 \0g\0a\0l\0a\0c\0t\0i\0c\0 \0d\0u\0s\0t\0 \0a\0n\0d\0 \0s\0u\0b\0t\0l\0e\0 \0s\0t\0a\0r\0f\0i\0e\0l\0d\0s\0.\0,\0 \0c\0r\0e\0a\0t\0i\0n\0g\0 \0a\0 \0s\0t\0r\0i\0k\0i\0n\0g\0 \0a\0n\0d\0 \0i\0r\0o\0n\0i\0c\0 \0i\0m\0a\0g\0e\0 \0i\0d\0e\0a\0l\0 \0f\0o\0r\0 \0a\0 \0t\0-\0s\0h\0i\0r\0t\0 \0d\0e\0s\0i\0g\0n\0,\0 \0c\0a\0s\0t\0i\0n\0g\0 \0a\0 \0v\0i\0b\0r\0a\0n\0t\0 \0g\0l\0o\0w\0 \0o\0v\0e\0r\0 \0a\0 \0w\0i\0n\0d\0i\0n\0g\0 \0s\0t\0r\0e\0t\0c\0h\0 \0o\0f\0 \0a\0s\0p\0h\0a\0l\0t\0 \0h\0i\0g\0h\0w\0a\0y\0 \0c\0u\0t\0t\0i\0n\0g\0 \0t\0h\0r\0o\0u\0g\0h\0 \0t\0h\0e\0 \0M\0i\0c\0h\0i\0g\0a\0n\0 \0w\0i\0l\0d\0e\0r\0n\0e\0s\0s\0.\0 \0L\0u\0s\0h\0,\0 \0e\0v\0o\0k\0i\0n\0g\0 \0a\0 \01\09\07\00\0s\0 \0a\0e\0s\0t\0h\0e\0t\0i\0c\0.\0 \0T\0h\0e\0 \0w\0o\0r\0d\0s\0 \0\"\0S\0t\0a\0y\0 \0G\0r\0o\0o\0v\0y\0\"\0 \0a\0p\0p\0e\0a\0r\0 \0a\0r\0c\0h\0e\0d\0 \0a\0b\0o\0v\0e\0 \0t\0h\0e\0 \0s\0u\0n\0f\0l\0o\0w\0e\0r\0 \0i\0n\0 \0a\0 \0b\0u\0b\0b\0l\0y\0,\0 \0f\0o\0r\0m\0i\0n\0g\0 \0a\0 \0c\0i\0r\0c\0u\0l\0a\0r\0 \0e\0x\0p\0l\0o\0s\0i\0o\0n\0 \0o\0f\0 \0c\0o\0l\0o\0r\0 \0a\0n\0d\0 \0s\0h\0a\0p\0e\0 \0a\0r\0o\0u\0n\0d\0 \0t\0h\0e\0 \0t\0e\0x\0t\0.\0 \0S\0c\0a\0t\0t\0e\0r\0e\0d\0 \0t\0h\0r\0o\0u\0g\0h\0o\0u\0t\0 \0t\0h\0e\0 \0d\0e\0s\0i\0g\0n\0 \0a\0r\0e\0 \0l\0u\0m\0i\0n\0o\0u\0s\0 \0w\0h\0i\0t\0e\0 \0c\0r\0e\0s\0c\0e\0n\0t\0 \0m\0o\0o\0n\0s\0 \0a\0n\0d\0 \0t\0w\0i\0n\0k\0l\0i\0n\0g\0 \0s\0t\0a\0r\0s\0,\0 \0s\0u\0g\0g\0e\0s\0t\0i\0n\0g\0 \0a\0 \0n\0i\0g\0h\0t\0t\0i\0m\0e\0 \0s\0k\0a\0t\0e\0 \0s\0e\0s\0s\0i\0o\0n\0.\0 \0B\0e\0h\0i\0n\0d\0 \0t\0h\0e\0 \0s\0u\0n\0,\0 \0a\0n\0d\0 \0a\0 \0w\0i\0d\0e\0,\0 \0r\0e\0t\0r\0o\0 \0c\0a\0r\0t\0o\0o\0n\0 \0p\0o\0s\0t\0e\0r\0 \0i\0n\0 \0a\0 \0p\0s\0y\0c\0h\0e\0d\0e\0l\0i\0c\0 \0s\0t\0y\0l\0e\0 \0s\0h\0o\0w\0c\0a\0s\0e\0s\0 \0a\0 \0c\0h\0e\0e\0r\0f\0u\0l\0l\0y\0 \0s\0p\0e\0e\0d\0-\0w\0a\0l\0k\0i\0n\0g\0 \0A\0m\0e\0r\0i\0c\0a\0n\0 \0b\0a\0s\0k\0e\0t\0b\0a\0l\0l\0"
output:
url: images/Tshirt_style_e000005_00_20250623140524.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: TSHIRTDESIGN
license: mit
---
# tshirt design
<Gallery />
## Trigger words
You should use `TSHIRTDESIGN` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Hectore/tshirt_design/tree/main) them in the Files & versions tab.
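A minimal diffusers sketch for applying this LoRA on top of FLUX.1-dev is shown below; the sampler settings and prompt are assumptions, and the LoRA filename is resolved from the repo (check the Files & versions tab for the exact safetensors name):
```python
# Hedged sketch: using this LoRA with diffusers on top of FLUX.1-dev.
# Steps, guidance scale, and the prompt are assumptions, not tested settings.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Hectore/tshirt_design")

prompt = "TSHIRTDESIGN, retro sunflower t-shirt graphic with the words 'Stay Groovy', bold outlines"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("tshirt_design.png")
```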
|
gsarch/ViGoRL-Multiturn-MCTS-SFT-3b-Web-Grounding | gsarch | 2025-06-23T14:47:12Z | 157 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:2505.23678",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-13T21:00:39Z | ---
library_name: transformers
pipeline_tag: image-text-to-text
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
---
# ViGoRL: Visually Grounded Reinforcement Learning for Visual Reasoning
This model card describes the ViGoRL (**Vi**sually **G**r**o**unded **R**einforcement **L**earning) model, introduced in our paper ["Grounded Reinforcement Learning for Visual Reasoning"](https://arxiv.org/abs/2505.23678).
**Authors:** Gabriel Sarch, Snigdha Saha, Naitik Khandelwal, Ayush Jain, Michael J. Tarr, Aviral Kumar, Katerina Fragkiadaki
---
## Model Overview
ViGoRL is a vision-language model fine-tuned using reinforcement learning (RL) to explicitly anchor textual reasoning steps to visual coordinates. Inspired by human visual cognition, ViGoRL employs multi-turn visual grounding, dynamically zooming into image regions to perform fine-grained visual reasoning and grounding.
This model was trained using supervised fine-tuning (SFT) on visually-grounded reasoning traces generated via Monte Carlo Tree Search (MCTS), followed by reinforcement learning with Group Relative Policy Optimization (GRPO).
---
## Model Details
* **Base Architecture:** Qwen2.5-Vision-Language (3B or 7B parameters)
* **Training Paradigm:**
* Supervised Fine-Tuning on MCTS-generated reasoning traces
* Group Relative Policy Optimization (GRPO)
* Multi-turn visual grounding with dynamic zoom-in feedback (if "Multiturn" appears in name)
---
## Use Cases
This model excels in visual reasoning tasks that require precise visual grounding and region-level reasoning. Please see model name for specific domain.
* **Spatial Reasoning:** SAT-2, BLINK, RoboSpatial
* **Visual Search:** V\*Bench
* **Web Interaction and Grounding:** ScreenSpot (Pro and V2), VisualWebArena
---
## Usage
You can load this model easily using Hugging Face's Transformers library:
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
# # default: Load the model on the available device(s)
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
# "gsarch/ViGoRL-7b-Web-Grounding", torch_dtype="auto", device_map="auto"
# ) # replace with any of the ViGoRL models
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"gsarch/ViGoRL-7b-Web-Grounding",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map="auto",
)
# default processor
processor = AutoProcessor.from_pretrained("gsarch/ViGoRL-Multiturn-MCTS-SFT-3b-Web-Grounding")
# The default range for the number of visual tokens per image in the model is 4-16384.
# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("gsarch/ViGoRL-7b-Web-Grounding", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "path/to/image.png",
},
{"type": "text", "text": "QUERY HERE"},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text) # this will output a single tool call turn of the model if version is multiturn.
```
**Important**: This model requires a system prompt for proper usage. Please see the model's chat template for details.
---
## Datasets and Training Data
Training datasets and generated reasoning chains are publicly available:
* [Code](https://github.com/Gabesarch/grounded-rl)
* [ViGoRL Datasets on Hugging Face](https://huggingface.co/datasets/gsarch/vigorl_datasets)
---
## Citation
If you use ViGoRL in your research or applications, please cite our paper:
```bibtex
@article{sarch2025vigorl,
title={Grounded Reinforcement Learning for Visual Reasoning},
author={Sarch, Gabriel and Saha, Snigdha and Khandelwal, Naitik and Jain, Ayush and Tarr, Michael J and Kumar, Aviral and Fragkiadaki, Katerina},
year={2025}
}
```
---
## Contact
For questions, feedback, or collaborations, please reach out to Gabriel Sarch or open an issue in our [GitHub repository](https://github.com/Gabesarch/grounded-rl).
--- |
JuyeopDang/Qwen-3-14B-Sentence-Ordering | JuyeopDang | 2025-06-23T14:47:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-27T23:54:38Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gsarch/ViGoRL-MCTS-SFT-3b-Web-Grounding | gsarch | 2025-06-23T14:46:57Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:2505.23678",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-13T20:51:10Z | ---
library_name: transformers
pipeline_tag: image-text-to-text
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
---
# ViGoRL: Visually Grounded Reinforcement Learning for Visual Reasoning
This model card describes the ViGoRL (**Vi**sually **G**r**o**unded **R**einforcement **L**earning) model, introduced in our paper ["Grounded Reinforcement Learning for Visual Reasoning"](https://arxiv.org/abs/2505.23678).
**Authors:** Gabriel Sarch, Snigdha Saha, Naitik Khandelwal, Ayush Jain, Michael J. Tarr, Aviral Kumar, Katerina Fragkiadaki
---
## Model Overview
ViGoRL is a vision-language model fine-tuned using reinforcement learning (RL) to explicitly anchor textual reasoning steps to visual coordinates. Inspired by human visual cognition, ViGoRL employs multi-turn visual grounding, dynamically zooming into image regions to perform fine-grained visual reasoning and grounding.
This model was trained using supervised fine-tuning (SFT) on visually-grounded reasoning traces generated via Monte Carlo Tree Search (MCTS), followed by reinforcement learning with Group Relative Policy Optimization (GRPO).
---
## Model Details
* **Base Architecture:** Qwen2.5-Vision-Language (3B or 7B parameters)
* **Training Paradigm:**
* Supervised Fine-Tuning on MCTS-generated reasoning traces
* Group Relative Policy Optimization (GRPO)
* Multi-turn visual grounding with dynamic zoom-in feedback (if "Multiturn" appears in name)
---
## Use Cases
This model excels in visual reasoning tasks that require precise visual grounding and region-level reasoning. Please see model name for specific domain.
* **Spatial Reasoning:** SAT-2, BLINK, RoboSpatial
* **Visual Search:** V\*Bench
* **Web Interaction and Grounding:** ScreenSpot (Pro and V2), VisualWebArena
---
## Usage
You can load this model easily using Hugging Face's Transformers library:
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
# # default: Load the model on the available device(s)
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
# "gsarch/ViGoRL-7b-Web-Grounding", torch_dtype="auto", device_map="auto"
# ) # replace with any of the ViGoRL models
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"gsarch/ViGoRL-7b-Web-Grounding",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map="auto",
)
# default processor
processor = AutoProcessor.from_pretrained("gsarch/ViGoRL-MCTS-SFT-3b-Web-Grounding")
# The default range for the number of visual tokens per image in the model is 4-16384.
# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("gsarch/ViGoRL-7b-Web-Grounding", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "path/to/image.png",
},
{"type": "text", "text": "QUERY HERE"},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text) # this will output a single tool call turn of the model if version is multiturn.
```
**Important**: This model requires a system prompt for proper usage. Please see the model's chat template for details.
---
## Datasets and Training Data
Training datasets and generated reasoning chains are publicly available:
* [Code](https://github.com/Gabesarch/grounded-rl)
* [ViGoRL Datasets on Hugging Face](https://huggingface.co/datasets/gsarch/vigorl_datasets)
---
## Citation
If you use ViGoRL in your research or applications, please cite our paper:
```bibtex
@article{sarch2025vigorl,
title={Grounded Reinforcement Learning for Visual Reasoning},
author={Sarch, Gabriel and Saha, Snigdha and Khandelwal, Naitik and Jain, Ayush and Tarr, Michael J and Kumar, Aviral and Fragkiadaki, Katerina},
year={2025}
}
```
---
## Contact
For questions, feedback, or collaborations, please reach out to Gabriel Sarch or open an issue in our [GitHub repository](https://github.com/Gabesarch/grounded-rl).
--- |
gsarch/ViGoRL-Multiturn-3b-Web-Grounding | gsarch | 2025-06-23T14:46:42Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:2505.23678",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-13T21:02:32Z | ---
library_name: transformers
pipeline_tag: image-text-to-text
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
---
# ViGoRL: Visually Grounded Reinforcement Learning for Visual Reasoning
This model card describes the ViGoRL (**Vi**sually **G**r**o**unded **R**einforcement **L**earning) model, introduced in our paper ["Grounded Reinforcement Learning for Visual Reasoning"](https://arxiv.org/abs/2505.23678).
**Authors:** Gabriel Sarch, Snigdha Saha, Naitik Khandelwal, Ayush Jain, Michael J. Tarr, Aviral Kumar, Katerina Fragkiadaki
---
## Model Overview
ViGoRL is a vision-language model fine-tuned using reinforcement learning (RL) to explicitly anchor textual reasoning steps to visual coordinates. Inspired by human visual cognition, ViGoRL employs multi-turn visual grounding, dynamically zooming into image regions to perform fine-grained visual reasoning and grounding.
This model was trained using supervised fine-tuning (SFT) on visually-grounded reasoning traces generated via Monte Carlo Tree Search (MCTS), followed by reinforcement learning with Group Relative Policy Optimization (GRPO).
---
## Model Details
* **Base Architecture:** Qwen2.5-Vision-Language (3B or 7B parameters)
* **Training Paradigm:**
* Supervised Fine-Tuning on MCTS-generated reasoning traces
* Group Relative Policy Optimization (GRPO)
* Multi-turn visual grounding with dynamic zoom-in feedback (if "Multiturn" appears in name)
---
## Use Cases
This model excels in visual reasoning tasks that require precise visual grounding and region-level reasoning. Please see model name for specific domain.
* **Spatial Reasoning:** SAT-2, BLINK, RoboSpatial
* **Visual Search:** V\*Bench
* **Web Interaction and Grounding:** ScreenSpot (Pro and V2), VisualWebArena
---
## Usage
You can load this model easily using Hugging Face's Transformers library:
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
# # default: Load the model on the available device(s)
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
# "gsarch/ViGoRL-7b-Web-Grounding", torch_dtype="auto", device_map="auto"
# ) # replace with any of the ViGoRL models
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"gsarch/ViGoRL-7b-Web-Grounding",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map="auto",
)
# default processor
processor = AutoProcessor.from_pretrained("gsarch/ViGoRL-Multiturn-3b-Web-Grounding")
# The default range for the number of visual tokens per image in the model is 4-16384.
# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("gsarch/ViGoRL-7b-Web-Grounding", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "path/to/image.png",
},
{"type": "text", "text": "QUERY HERE"},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text) # this will output a single tool call turn of the model if version is multiturn.
```
**Important**: This model requires a system prompt for proper usage. Please see the model's chat template for details.
---
## Datasets and Training Data
Training datasets and generated reasoning chains are publicly available:
* [Code](https://github.com/Gabesarch/grounded-rl)
* [ViGoRL Datasets on Hugging Face](https://huggingface.co/datasets/gsarch/vigorl_datasets)
---
## Citation
If you use ViGoRL in your research or applications, please cite our paper:
```bibtex
@article{sarch2025vigorl,
title={Grounded Reinforcement Learning for Visual Reasoning},
author={Sarch, Gabriel and Saha, Snigdha and Khandelwal, Naitik and Jain, Ayush and Tarr, Michael J and Kumar, Aviral and Fragkiadaki, Katerina},
year={2025}
}
```
---
## Contact
For questions, feedback, or collaborations, please reach out to Gabriel Sarch or open an issue in our [GitHub repository](https://github.com/Gabesarch/grounded-rl).
--- |