| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| VoilaRaj/81_g_Z2DiLa | VoilaRaj | 2025-09-18T08:53:00Z | 0 | 0 | null | ["safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-09-18T08:52:28Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
| aamijar/ReplaceME-Llama-3.1-8B-Instruct-lora-r8-boolq-epochs1 | aamijar | 2025-09-18T08:52:29Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-18T08:52:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
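Until the authors provide their own example, here is a minimal sketch, assuming (from the repository name and tags) that this repo hosts a PEFT LoRA adapter of rank 8 for `meta-llama/Llama-3.1-8B-Instruct` trained on BoolQ; the base-model ID and the prompt are assumptions, not confirmed by the card.
```python
# Hedged sketch: assumes a PEFT LoRA adapter for meta-llama/Llama-3.1-8B-Instruct.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed base model (not confirmed by the card)
adapter_id = "aamijar/ReplaceME-Llama-3.1-8B-Instruct-lora-r8-boolq-epochs1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights

# BoolQ-style yes/no prompt, matching the "boolq" hint in the adapter name (illustrative only).
prompt = "Passage: The sky appears blue due to Rayleigh scattering.\nQuestion: Is the sky blue?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```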
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| MaestroDev19/MentalGemma3-merged | MaestroDev19 | 2025-09-18T08:51:14Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-09-18T08:50:26Z |
---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** MaestroDev19
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
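As a hedged usage sketch (not from the authors): the `text-generation` and `gemma3_text` tags suggest the merged checkpoint loads as a standard `transformers` causal LM; the prompt below is illustrative only.
```python
from transformers import pipeline

# Assumes the merged weights load as a regular text-generation checkpoint.
generator = pipeline(
    "text-generation",
    model="MaestroDev19/MentalGemma3-merged",
    device_map="auto",
)

# Illustrative chat-style prompt; the pipeline applies the model's chat template.
messages = [{"role": "user", "content": "Give me one tip for managing exam stress."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```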
|
| unsloth/Magistral-Small-2509-unsloth-bnb-4bit | unsloth | 2025-09-18T08:51:02Z | 96 | 2 | vllm | ["vllm", "safetensors", "mistral3", "mistral-common", "unsloth", "en", "fr", "de", "es", "pt", "it", "ja", "ko", "ru", "zh", "ar", "fa", "id", "ms", "ne", "pl", "ro", "sr", "sv", "tr", "uk", "vi", "hi", "bn", "arxiv:2506.10910", "base_model:mistralai/Magistral-Small-2509", "base_model:quantized:mistralai/Magistral-Small-2509", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us"] | null | 2025-09-17T12:17:24Z |
---
base_model:
- mistralai/Magistral-Small-2509
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
library_name: vllm
license: apache-2.0
inference: false
tags:
- vllm
- mistral-common
- unsloth
---
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>Learn to run Magistral 1.2 correctly - <a href="https://docs.unsloth.ai/basics/magistral">Read our Guide</a>.</strong>
</p>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves SOTA performance in model quantization.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/magistral">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">✨ How to Use Magistral 1.2:</h1>
</div>
Run in llama.cpp:
```
./llama.cpp/llama-cli -hf unsloth/Magistral-Small-2509-GGUF:UD-Q4_K_XL --jinja --temp 0.7 --top-k -1 --top-p 0.95 -ngl 99
```
Run in Ollama:
```
ollama run hf.co/unsloth/Magistral-Small-2509-GGUF:UD-Q4_K_XL
```
Read our in-depth guide about Magistral 1.2: [docs.unsloth.ai/basics/magistral](https://docs.unsloth.ai/basics/magistral)
- **Fine-tune Magistral 1.2** for free using our [Kaggle notebook here](https://www.kaggle.com/notebooks/welcome?src=https://github.com/unslothai/notebooks/blob/main/nb/Kaggle-Magistral_(24B)-Reasoning-Conversational.ipynb&accelerator=nvidiaTeslaT4)!
- View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks).
---
# Magistral Small 1.2
Magistral Small 1.2 builds upon [Mistral Small 3.2 (2506)](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506) **with added reasoning capabilities**: it undergoes SFT on Magistral Medium traces followed by RL, yielding a small, efficient reasoning model with 24B parameters.
Magistral Small can be deployed locally, fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.
Learn more about Magistral in our [blog post](https://mistral.ai/news/magistral/).
The model was presented in the paper [Magistral](https://huggingface.co/papers/2506.10910).
## Updates compared with [Magistral Small 1.1](https://huggingface.co/mistralai/Magistral-Small-2507)
- **Multimodality**: The model now has a vision encoder and can take multimodal inputs, extending its reasoning capabilities to vision.
- **Performance upgrade**: Magistral Small 1.2 should give you significantly better performance than Magistral Small 1.1, as seen in the [benchmark results](#benchmark-results).
- **Better tone and persona**: You should experience better LaTeX and Markdown formatting, and shorter answers on easy general prompts.
- **Finite generation**: The model is less likely to enter infinite generation loops.
- **Special think tokens**: [THINK] and [/THINK] special tokens encapsulate the reasoning content in a thinking chunk. This makes it easier to parse the reasoning trace and prevents confusion when the '[THINK]' token is given as a string in the prompt.
- **Reasoning prompt**: The reasoning prompt is given in the system prompt.
## Key Features
- **Reasoning:** Capable of long chains of reasoning traces before providing an answer.
- **Multilingual:** Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, and Farsi.
- **Vision**: Vision capabilities enable the model to analyze images and reason based on visual content in addition to text.
- **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window:** A 128k context window. Performance *might* degrade past **40k**, but Magistral should still give good results. Hence we recommend leaving the maximum model length at 128k and only lowering it if you encounter low performance.
## Benchmark Results
| Model | AIME24 pass@1 | AIME25 pass@1 | GPQA Diamond | Livecodebench (v5) |
|--------------------------|---------------|---------------|--------------|--------------------|
| **Magistral Medium 1.2** | **91.82%** | **83.48%** | **76.26%** | **75.00%** |
| Magistral Medium 1.1 | 72.03% | 60.99% | 71.46% | 59.35% |
| Magistral Medium 1.0 | 73.59% | 64.95% | 70.83% | 59.36% |
| **Magistral Small 1.2** | **86.14%** | **77.34%** | **70.07%** | **70.88%** |
| Magistral Small 1.1 | 70.52% | 62.03% | 65.78% | 59.17% |
| Magistral Small 1.0 | 70.68% | 62.76% | 68.18% | 55.84% |
## Sampling parameters
Please make sure to use:
- `top_p`: 0.95
- `temperature`: 0.7
- `max_tokens`: 131072
## Basic Chat Template
We highly recommend including the following system prompt for the best results; you can edit and customise it as needed for your specific use case.
```py
First draft your thinking process (inner monologue) until you arrive at a response. Format your response using Markdown, and use LaTeX for any mathematical equations. Write both your thoughts and the response in the same language as the input.
Your thinking process must follow the template below:[THINK]Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate the response. Use the same language as the input.[/THINK]Here, provide a self-contained response.
```
The `[THINK]` and `[/THINK]` are special tokens that **must** be encoded as such.
***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth***. Find [below](#usage) examples from libraries supporting `mistral-common`.
We invite you to choose, depending on your use case and requirements, between keeping reasoning traces during multi-turn interactions or keeping only the final assistant response.
Query the model as follows:
<details>
<summary>Python text snippet</summary>
```python
from typing import Any
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
index_begin_think = system_prompt.find("[THINK]")
index_end_think = system_prompt.find("[/THINK]")
return {
"role": "system",
"content": [
{"type": "text", "text": system_prompt[:index_begin_think]},
{
"type": "thinking",
"thinking": system_prompt[
index_begin_think + len("[THINK]") : index_end_think
],
"closed": True,
},
{
"type": "text",
"text": system_prompt[index_end_think + len("[/THINK]") :],
},
],
}
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
query = "Use each number in 2,5,6,3 exactly once, along with any combination of +, -, ×, ÷ (and parentheses for grouping), to make the number 24."
messages = [
SYSTEM_PROMPT,
{"role": "user", "content": query}
]
stream = client.chat.completions.create(
model=model,
messages=messages,
stream=True,
temperature=TEMP,
top_p=TOP_P,
max_tokens=MAX_TOK,
)
print("client: Start streaming chat completions...:\n")
printed_reasoning_content = False
answer = []
for chunk in stream:
reasoning_content = None
content = None
# Check the content is reasoning_content or content
if hasattr(chunk.choices[0].delta, "reasoning_content"):
reasoning_content = chunk.choices[0].delta.reasoning_content
elif hasattr(chunk.choices[0].delta, "content"):
content = chunk.choices[0].delta.content
if reasoning_content is not None:
if not printed_reasoning_content:
printed_reasoning_content = True
print("Start reasoning:\n", end="", flush=True)
print(reasoning_content, end="", flush=True)
elif content is not None:
# Extract and print the content
if not reasoning_content and printed_reasoning_content:
answer.extend(content)
print(content, end="", flush=True)
if answer:
print("\n\n=============\nAnswer\n=============\n")
print("".join(answer))
else:
print("\n\n=============\nNo Answer\n=============\n")
print("No answer was generated by the model, probably because the maximum number of tokens was reached.")
# client: Start streaming chat completions...:
#
# Start reasoning:
# First, I need to ...
# ...
#
#
# =============
# Answer
# =============
#
# Here's one way to use the numbers 2, 5, 6, 3 to make 24:
#
#\[
#(6 \div 2) \times (5 + 3) = 3 \times 8 = 24
#\]
#
#Alternatively, another solution is:
#
#\[
#6 \times (5 - 3 + 2) = 6 \times 4 = 24
#\]
#
#Both expressions use each of the numbers 2, 5, 6, 3 exactly once with the operations given.
```
</details>
<details>
<summary>Python text-image snippet: Pokemon</summary>
```python
from typing import Any
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
index_begin_think = system_prompt.find("[THINK]")
index_end_think = system_prompt.find("[/THINK]")
return {
"role": "system",
"content": [
{"type": "text", "text": system_prompt[:index_begin_think]},
{
"type": "thinking",
"thinking": system_prompt[
index_begin_think + len("[THINK]") : index_end_think
],
"closed": True,
},
{
"type": "text",
"text": system_prompt[index_end_think + len("[/THINK]") :],
},
],
}
model_id = "mistralai/Magistral-Small-2509"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"
messages = [
SYSTEM_PROMPT,
{
"role": "user",
"content": [
{
"type": "text",
"text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
},
{"type": "image_url", "image_url": {"url": image_url}},
],
},
]
stream = client.chat.completions.create(
model=model,
messages=messages,
stream=True,
temperature=TEMP,
top_p=TOP_P,
max_tokens=MAX_TOK,
)
print("client: Start streaming chat completions...:\n")
printed_reasoning_content = False
answer = []
for chunk in stream:
reasoning_content = None
content = None
# Check the content is reasoning_content or content
if hasattr(chunk.choices[0].delta, "reasoning_content"):
reasoning_content = chunk.choices[0].delta.reasoning_content
elif hasattr(chunk.choices[0].delta, "content"):
content = chunk.choices[0].delta.content
if reasoning_content is not None:
if not printed_reasoning_content:
printed_reasoning_content = True
print("Start reasoning:\n", end="", flush=True)
print(reasoning_content, end="", flush=True)
elif content is not None:
# Extract and print the content
if not reasoning_content and printed_reasoning_content:
answer.extend(content)
print(content, end="", flush=True)
if answer:
print("\n\n=============\nAnswer\n=============\n")
print("".join(answer))
else:
print("\n\n=============\nNo Answer\n=============\n")
print(
"No answer was generated by the model, probably because the maximum number of tokens was reached."
)
# client: Start streaming chat completions...:
# Start reasoning:
# In the image, we see a battle scene from a Pokémon game. The player's Pikachu is at full health (83/83 HP), and the opponent's Pidgey is at a lower level (level 17 compared to Pikachu's level 42). The possible actions available to the player are:
# 1. FIGHT: This allows the player to use one of Pikachu's moves to attack Pidgey. Given that Pikachu is at a higher level and has full HP, it is likely that Pikachu would be able to defeat Pidgey easily. This is a good option because it could potentially win the battle quickly and efficiently.
# 2. BAG: This allows the player to use an item from their bag. This could be useful if the player wants to heal Pikachu (though it's not necessary at full health) or use an item to weaken Pidgey. However, since Pikachu is at full health and Pidgey is at a lower level, this might not be necessary. It could be a good option if the player wants to use a special item, but generally, it might not be the best choice in this situation.
# 3. POKÉMON: This allows the player to switch the current Pokémon to another one in their team. Since Pikachu is at full health and at a higher level than Pidgey, switching might not be necessary. It could be useful if the player wants to train a different Pokémon, but it might not be the most efficient choice for winning the battle quickly.
# 4. RUN: This allows the player to flee from the battle. This could be a good option if the player wants to avoid the battle, but since Pikachu is at a clear advantage, running would not be the most efficient choice. It could be useful if the player wants to save time or if they are trying to avoid losing a Pokémon, but in this case, it seems unnecessary.
# Given the circumstances, the best action seems to be to FIGHT, as Pikachu is at a clear advantage in terms of level and health. The other options are not as efficient for winning the battle quickly.In the given scenario, the most appropriate action to take is to FIGHT. Here's why:
# 1. FIGHT: This is the best option because Pikachu is at a higher level and has full health, making it likely to defeat Pidgey quickly and efficiently. Using an attack move would be the most straightforward way to win the battle.
# 2. BAG: While this option could be useful for healing or using special items, it is not necessary since Pikachu is already at full health. This option is less efficient for winning the battle quickly.
# 3. POKÉMON: Switching to another Pokémon might be useful for training a different Pokémon, but it is not necessary since Pikachu is at a clear advantage. This option is not as efficient for winning the current battle.
# 4. RUN: Fleeing from the battle could be useful if the player wants to avoid the battle, but since Pikachu is at a clear advantage, running would not be the most efficient choice. It could be useful if the player wants to save time or avoid losing a Pokémon, but in this case, it seems unnecessary.
# Therefore, the best action to take in this situation is to FIGHT.
# FIGHT
# =============
# Answer
# =============
# In the given scenario, the most appropriate action to take is to FIGHT. Here's why:
# 1. FIGHT: This is the best option because Pikachu is at a higher level and has full health, making it likely to defeat Pidgey quickly and efficiently. Using an attack move would be the most straightforward way to win the battle.
# 2. BAG: While this option could be useful for healing or using special items, it is not necessary since Pikachu is already at full health. This option is less efficient for winning the battle quickly.
# 3. POKÉMON: Switching to another Pokémon might be useful for training a different Pokémon, but it is not necessary since Pikachu is at a clear advantage. This option is not as efficient for winning the current battle.
# 4. RUN: Fleeing from the battle could be useful if the player wants to avoid the battle, but since Pikachu is at a clear advantage, running would not be the most efficient choice. It could be useful if the player wants to save time or avoid losing a Pokémon, but in this case, it seems unnecessary.
# Therefore, the best action to take in this situation is to FIGHT.
# FIGHT
```
</details>
<details>
<summary>Python text-image snippet: Geo trivia</summary>
```python
from typing import Any
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
index_begin_think = system_prompt.find("[THINK]")
index_end_think = system_prompt.find("[/THINK]")
return {
"role": "system",
"content": [
{"type": "text", "text": system_prompt[:index_begin_think]},
{
"type": "thinking",
"thinking": system_prompt[
index_begin_think + len("[THINK]") : index_end_think
],
"closed": True,
},
{
"type": "text",
"text": system_prompt[index_end_think + len("[/THINK]") :],
},
],
}
model_id = "mistralai/Magistral-Small-2509"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/d7/201806_Tianducheng_Bird-eye_View.jpg/1280px-201806_Tianducheng_Bird-eye_View.jpg"
messages = [
SYSTEM_PROMPT,
{
"role": "user",
"content": [
{
"type": "text",
"text": "Where has this picture been taken ?",
},
{"type": "image_url", "image_url": {"url": image_url}},
],
},
]
stream = client.chat.completions.create(
model=model,
messages=messages,
stream=True,
temperature=TEMP,
top_p=TOP_P,
max_tokens=MAX_TOK,
)
print("client: Start streaming chat completions...:\n")
printed_reasoning_content = False
answer = []
for chunk in stream:
reasoning_content = None
content = None
# Check the content is reasoning_content or content
if hasattr(chunk.choices[0].delta, "reasoning_content"):
reasoning_content = chunk.choices[0].delta.reasoning_content
elif hasattr(chunk.choices[0].delta, "content"):
content = chunk.choices[0].delta.content
if reasoning_content is not None:
if not printed_reasoning_content:
printed_reasoning_content = True
print("Start reasoning:\n", end="", flush=True)
print(reasoning_content, end="", flush=True)
elif content is not None:
# Extract and print the content
if not reasoning_content and printed_reasoning_content:
answer.extend(content)
print(content, end="", flush=True)
if answer:
print("\n\n=============\nAnswer\n=============\n")
print("".join(answer))
else:
print("\n\n=============\nNo Answer\n=============\n")
print(
"No answer was generated by the model, probably because the maximum number of tokens was reached."
)
# client: Start streaming chat completions...:
# Start reasoning:
# The image shows a replica of the Eiffel Tower, but it's not in Paris. The background includes mountains, which are not present in Paris. The surrounding architecture appears to be more modern and dense, which is also not typical of Paris. The combination of the Eiffel Tower replica and the mountainous backdrop suggests that this is likely in a city in China, as China has several replicas of the Eiffel Tower, with the most famous one being in Shanghai. However, the dense residential buildings and the specific layout suggest that this might be in another city in China, possibly Shenzhen or another major city with a similar landscape.
# Given that the question is about identifying the location based on the visual clues, and considering the presence of the Eiffel Tower replica and the mountainous backdrop, it's likely that this is a well-known location in China.
# The most probable answer is that this is in Shenzhen, as it has a well-known Eiffel Tower replica in a park, but to be precise, this is the Eiffel Tower replica in Shenzhen, which is known as the "Shenzhen Park of Eiffel Tower."
# However, to be more accurate, this is likely the Eiffel Tower replica in Shenzhen, as it matches the description and visual elements.The image shows a replica of the Eiffel Tower, which is not in Paris but rather in a city with a mountainous backdrop and modern, dense architecture. This combination of elements is typical of a Chinese city, and the presence of the Eiffel Tower replica suggests a location like Shenzhen, which is known for having such a replica. The dense residential buildings and the specific layout further support this identification. Therefore, the most probable location for this image is Shenzhen, China.
# So, the answer is:
# Shenzhen
# =============
# Answer
# =============
# The image shows a replica of the Eiffel Tower, which is not in Paris but rather in a city with a mountainous backdrop and modern, dense architecture. This combination of elements is typical of a Chinese city, and the presence of the Eiffel Tower replica suggests a location like Shenzhen, which is known for having such a replica. The dense residential buildings and the specific layout further support this identification. Therefore, the most probable location for this image is Shenzhen, China.
# So, the answer is:
# Shenzhen
```
</details>
<details>
<summary>Python text-image snippet: Maths</summary>
```python
from typing import Any
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
index_begin_think = system_prompt.find("[THINK]")
index_end_think = system_prompt.find("[/THINK]")
return {
"role": "system",
"content": [
{"type": "text", "text": system_prompt[:index_begin_think]},
{
"type": "thinking",
"thinking": system_prompt[
index_begin_think + len("[THINK]") : index_end_think
],
"closed": True,
},
{
"type": "text",
"text": system_prompt[index_end_think + len("[/THINK]") :],
},
],
}
model_id = "mistralai/Magistral-Small-2509"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://i.ytimg.com/vi/5Y3xLHeyKZU/hqdefault.jpg"
messages = [
SYSTEM_PROMPT,
{
"role": "user",
"content": [
{
"type": "text",
"text": "Solve the equations. Answer in the language of the image.",
},
{"type": "image_url", "image_url": {"url": image_url}},
],
},
]
stream = client.chat.completions.create(
model=model,
messages=messages,
stream=True,
temperature=TEMP,
top_p=TOP_P,
max_tokens=MAX_TOK,
)
print("client: Start streaming chat completions...:\n")
printed_reasoning_content = False
answer = []
for chunk in stream:
reasoning_content = None
content = None
# Check the content is reasoning_content or content
if hasattr(chunk.choices[0].delta, "reasoning_content"):
reasoning_content = chunk.choices[0].delta.reasoning_content
elif hasattr(chunk.choices[0].delta, "content"):
content = chunk.choices[0].delta.content
if reasoning_content is not None:
if not printed_reasoning_content:
printed_reasoning_content = True
print("Start reasoning:\n", end="", flush=True)
print(reasoning_content, end="", flush=True)
elif content is not None:
# Extract and print the content
if not reasoning_content and printed_reasoning_content:
answer.extend(content)
print(content, end="", flush=True)
if answer:
print("\n\n=============\nAnswer\n=============\n")
print("".join(answer))
else:
print("\n\n=============\nNo Answer\n=============\n")
print(
"No answer was generated by the model, probably because the maximum number of tokens was reached."
)
# client: Start streaming chat completions...:
# Start reasoning:
# Je dois résoudre ce système d'équations. Voici les équations :
# 1. \(5x + 2y = -2\)
# 2. \(3x - 4y = 17\)
# D'abord, je pense que la méthode d'élimination pourrait être une bonne approche. Pour cela, je dois éliminer une des variables. Voyons comment.
# Je vais essayer d'éliminer y. Pour cela, je dois que les coefficients de y soient les mêmes (en valeur absolue) dans les deux équations.
# Le coefficient de y dans la première équation est 2, et dans la deuxième, c'est -4. Le plus petit multiple commun de 2 et 4 est 4. Donc, je vais multiplier la première équation par 2 pour que le coefficient de y devienne 4.
# Faisons cela :
# 1. \(2 \times (5x + 2y) = 2 \times (-2)\)
# Ce qui donne : \(10x + 4y = -4\)
# Maintenant, les équations sont :
# 1. \(10x + 4y = -4\)
# 2. \(3x - 4y = 17\)
# Maintenant, si j'additionne ces deux équations, les termes avec y s'annuleront.
# Faisons l'addition :
# \( (10x + 4y) + (3x - 4y) = -4 + 17 \)
# Ce qui donne : \(13x = 13\)
# Donc, \(x = 1\).
# Maintenant que nous avons x, nous pouvons le substituer dans une des équations originales pour trouver y. Utilisons la première équation originale :
# \(5x + 2y = -2\)
# En substituant x = 1 :
# \(5(1) + 2y = -2\)
# Ce qui donne : \(5 + 2y = -2\)
# Soustraire 5 des deux côtés :
# ...
# Ce qui donne : \(5 + 2y = -2\)
# Soustraire 5 des deux côtés :
# \(2y = -2 - 5\)
# \(2y = -7\)
# Diviser par 2 :
# \(y = -\frac{7}{2}\)
# Donc, la solution est \(x = 1\) et \(y = -\frac{7}{2}\).
# $\boxed{x = 1,\ y = -\frac{7}{2}}$
# =============
# Answer
# =============
# Pour résoudre le système d'équations donné :
# 1. \(5x + 2y = -2\)
# 2. \(3x - 4y = 17\)
# Nous commençons par utiliser la méthode d'élimination pour éliminer une des variables. Nous choisissons d'éliminer \(y\) en rendant ses coefficients identiques en valeur absolue. Le coefficient de \(y\) dans la première équation est 2, et dans la deuxième, c'est -4. Le plus petit multiple commun de 2 et 4 est 4. Nous multiplions donc la première équation par 2 pour que le coefficient de \(y\) devienne 4.
# Faisons cela :
# 1. \(2 \times (5x + 2y) = 2 \times (-2)\)
# Ce qui donne : \(10x + 4y = -4\)
# Maintenant, les équations sont :
# 1. \(10x + 4y = -4\)
# 2. \(3x - 4y = 17\)
# En additionnant ces deux équations, les termes avec \(y\) s'annuleront :
# \( (10x + 4y) + (3x - 4y) = -4 + 17 \)
# Ce qui donne : \(13x = 13\)
# Donc, \(x = 1\).
# Ensuite, nous substituons \(x = 1\) dans la première équation originale pour trouver \(y\) :
# \(5(1) + 2y = -2\)
# Ce qui donne : \(5 + 2y = -2\)
# Soustraire 5 des deux côtés :
# \(2y = -2 - 5\)
# \(2y = -7\)
# Diviser par 2 :
# \(y = -\frac{7}{2}\)
# Donc, la solution est \(x = 1\) et \(y = -\frac{7}{2}\).
# $\boxed{x = 1,\ y = -\frac{7}{2}}$
```
</details>
### Transformers
Make sure you install the latest [`Transformers`](https://github.com/huggingface/transformers/) version:
```sh
pip install --upgrade transformers[mistral-common]
```
This should also install [`mistral_common >= 1.8.5`](https://github.com/mistralai/mistral-common/releases/tag/v1.8.5).
To check:
```sh
python -c "import mistral_common; print(mistral_common.__version__)"
```
Now you can use Transformers with Magistral:
<details>
<summary>Python snippet</summary>
```python
from typing import Any
import torch
from huggingface_hub import hf_hub_download
from transformers import Mistral3ForConditionalGeneration
from transformers import AutoTokenizer
def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
index_begin_think = system_prompt.find("[THINK]")
index_end_think = system_prompt.find("[/THINK]")
return {
"role": "system",
"content": [
{"type": "text", "text": system_prompt[:index_begin_think]},
{
"type": "thinking",
"thinking": system_prompt[
index_begin_think + len("[THINK]") : index_end_think
],
"closed": True,
},
{
"type": "text",
"text": system_prompt[index_end_think + len("[/THINK]") :],
},
],
}
model_id = "mistralai/Magistral-Small-2509"
tokenizer = AutoTokenizer.from_pretrained(model_id, tokenizer_type="mistral")
model = Mistral3ForConditionalGeneration.from_pretrained(
model_id, torch_dtype=torch.bfloat16, device_map="auto"
).eval()
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"
messages = [
SYSTEM_PROMPT,
{
"role": "user",
"content": [
{
"type": "text",
"text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
},
{"type": "image_url", "image_url": {"url": image_url}},
],
},
]
tokenized = tokenizer.apply_chat_template(messages, return_dict=True)
input_ids = torch.tensor(tokenized.input_ids, device="cuda").unsqueeze(0)
attention_mask = torch.tensor(tokenized.attention_mask, device="cuda").unsqueeze(0)
pixel_values = torch.tensor(
tokenized.pixel_values[0], dtype=torch.bfloat16, device="cuda"
).unsqueeze(0)
image_sizes = torch.tensor(pixel_values.shape[-2:], device="cuda").unsqueeze(0)
with torch.inference_mode():
output = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
pixel_values=pixel_values,
image_sizes=image_sizes,
)[0]
decoded_output = tokenizer.decode(
output[
len(tokenized.input_ids) : (
-1 if output[-1] == tokenizer.eos_token_id else len(output)
)
]
)
print(decoded_output)
# [THINK]Alright, let's analyze the image carefully. It's a scene from a Pokémon game. The player is controlling Pikachu, which is at level 42 with full HP (83/83). The opponent is a Pidgey at level 17. The question is asking what action the player should take in this situation.
# First, let's list all the possible actions available. From the bottom of the screen, the options are:
# 1. FIGHT
# 2. BAG
# 3. POKÉMON
# 4. RUN
# Now, let's consider each option:
# 1. **FIGHT**: This means using Pikachu's moves to attack the Pidgey.
# - Pros: Pikachu is at a higher level (42) compared to Pidgey (17), so it has a significant advantage. Pikachu's HP is full, so it's in good condition to fight. Fighting could potentially win the battle quickly.
# - Cons: Even though Pikachu is stronger, there's always a risk of Pidgey landing a lucky hit or using a powerful move. However, given the level difference, this is less likely.
# 2. **BAG**: This means using items from the bag to help in the battle.
# - Pros: Could use a potion to heal (though Pikachu is already at full HP), or use another item like a Poké Ball to try and catch Pidgey.
# - Cons: Using items might be less efficient than just fighting, especially since Pikachu is already at full health. Also, if the goal is to catch Pidgey, using items to weaken it first might be useful, but the immediate advantage isn't clear.
# 3. **POKÉMON**: This means switching to another Pokémon from the team.
# - Pros: If the player has another Pokémon that is stronger or has moves that are super effective against Pidgey, this could be useful.
# - Cons: Pikachu is already at a significant level advantage and is at full health, so switching might not be necessary unless there's a strategic reason (e.g., leveling up another Pokémon).
# 4. **RUN**: This means attempting to flee from the battle.
# - Pros: If the player wants to avoid the battle for some reason (e.g., saving time, or wanting to catch Pidgey without weakening it), running could be useful.
# - Cons: If the goal is to catch or defeat Pidgey, running would not help in achieving that. Also, running might not always be successful, especially in a grass encounter where the chance to flee might be lower.
# Now, considering the context, if the goal is to defeat Pidgey, the best action would be to FIGHT since Pikachu is at a significant advantage in terms of level and health. Using the BAG could be useful if the goal is to catch Pidgey, as items like Poké Balls or berries could be used. Switching Pokémon doesn't seem necessary given Pikachu's current status. Running would only be useful if the player wants to avoid the battle, which seems unlikely if the goal is to progress in the game.
# However, the question asks for the best action without specifying the goal. Assuming the goal is to defeat Pidgey efficiently, the best action would be to FIGHT.
# But to list all possible actions and their pros and cons:
# 1. **FIGHT**
# - Pros: High chance of winning quickly due to level and health advantage.
# - Cons: Small risk of Pikachu getting damaged.
# 2. **BAG**
# - Pros: Could use items to catch Pidgey or support Pikachu (though not necessary for healing).
# - Cons: Might be less efficient if the goal is just to defeat Pidgey.
# 3. **POKÉMON**
# - Pros: Could switch to another Pokémon for strategic reasons (e.g., type advantage, training a different Pokémon).
# - Cons: Pikachu is already at a strong position, so switching might be unnecessary unless there's another goal.
# 4. **RUN**
# - Pros: Escape the battle if needed.
# - Cons: Does not contribute to progress if the goal is to defeat or catch Pidgey.
# Given that, if the goal is to defeat Pidgey, the best action is to FIGHT. If the goal is to catch Pidgey, using the BAG to throw a Poké Ball could be the best action.[/THINK]Here are the possible actions and their explanations:
# 1. **FIGHT**
# - **Pros**: Pikachu is at a much higher level (42) compared to Pidgey (17) and has full HP. This makes it very likely to win quickly.
# - **Cons**: There's always a small risk that Pidgey could land a lucky hit, but this is unlikely given the level difference.
# 2. **BAG**
# - **Pros**: You could use items to catch Pidgey (e.g., Poké Ball) or support Pikachu (though Pikachu is already at full HP).
# - **Cons**: Using items might be less efficient if the goal is simply to defeat Pidgey, as fighting directly could be quicker.
# 3. **POKÉMON**
# - **Pros**: You could switch to another Pokémon that might have a type advantage or that you want to train.
# - **Cons**: Pikachu is already in a strong position to defeat Pidgey, so switching might not be necessary unless there's another strategic reason.
# 4. **RUN**
# - **Pros**: You can escape the battle if you need to, for example, if you want to preserve Pikachu's health for a tougher battle ahead.
# - **Cons**: Running doesn't help you progress if your goal is to defeat or catch Pidgey. Additionally, the success rate for running might be lower in a grass encounter.
# Given these considerations, if your goal is to defeat Pidgey, the best action is likely to **FIGHT**, as Pikachu is at a significant advantage. If your goal is to catch Pidgey, using the **BAG** to throw a Poké Ball could be the best choice. If you're looking to train a different Pokémon, you might consider switching with **POKÉMON**, and if you need to preserve resources or Pikachu's health, **RUN** could be an option.
```
</details>
|
| MaestroDev19/MentalGemma3 | MaestroDev19 | 2025-09-18T08:49:30Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3_text", "trl", "en", "base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-09-18T08:49:24Z |
---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MaestroDev19
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
| schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758185267 | schooncestiaa | 2025-09-18T08:48:54Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us"] | null | 2025-09-18T08:48:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| csikasote/mms-1b-all-bemgen-combined-m50f100-42-DAT-9e-1 | csikasote | 2025-09-18T08:48:32Z | 0 | 0 | transformers | ["transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "bemgen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2025-09-18T07:50:45Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m50f100-42-DAT-9e-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m50f100-42-DAT-9e-1
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2652
- Cer: 0.0739
## Model description
More information needed
## Intended uses & limitations
More information needed
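As a hedged illustration (not part of the auto-generated card), the `wav2vec2`/`automatic-speech-recognition` tags suggest the checkpoint works with the standard `transformers` ASR pipeline; the audio path below is a placeholder.
```python
from transformers import pipeline

# Assumes a standard CTC-style wav2vec2 checkpoint usable via the ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-bemgen-combined-m50f100-42-DAT-9e-1",
)

# "speech.wav" is a placeholder for a 16 kHz mono Bemba recording.
result = asr("speech.wav")
print(result["text"])
```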
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 7.6547 | 0.5618 | 100 | 3.0165 | 1.0 |
| 2.3907 | 1.1236 | 200 | 0.6884 | 0.1367 |
| 1.3392 | 1.6854 | 300 | 0.3308 | 0.0927 |
| 1.1187 | 2.2472 | 400 | 0.3074 | 0.0864 |
| 1.1061 | 2.8090 | 500 | 0.2908 | 0.0823 |
| 1.0248 | 3.3708 | 600 | 0.2912 | 0.0816 |
| 1.0062 | 3.9326 | 700 | 0.2812 | 0.0784 |
| 0.9889 | 4.4944 | 800 | 0.2816 | 0.0793 |
| 0.9504 | 5.0562 | 900 | 0.2794 | 0.0811 |
| 0.9366 | 5.6180 | 1000 | 0.2832 | 0.0779 |
| 0.9515 | 6.1798 | 1100 | 0.2751 | 0.0765 |
| 0.942 | 6.7416 | 1200 | 0.2753 | 0.0764 |
| 0.8794 | 7.3034 | 1300 | 0.2709 | 0.0747 |
| 0.8816 | 7.8652 | 1400 | 0.2671 | 0.0737 |
| 0.8237 | 8.4270 | 1500 | 0.2671 | 0.0752 |
| 0.84 | 8.9888 | 1600 | 0.2652 | 0.0739 |
| 0.7831 | 9.5506 | 1700 | 0.2637 | 0.0739 |
| 0.7857 | 10.1124 | 1800 | 0.2615 | 0.0740 |
| 0.8769 | 10.6742 | 1900 | 0.2619 | 0.0734 |
| 0.8319 | 11.2360 | 2000 | 0.2628 | 0.0738 |
| 0.8581 | 11.7978 | 2100 | 0.2622 | 0.0739 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
| BootesVoid/cmfp4cy750bs5x0n0uaaotdzw_cmfp4i8uq0bscx0n0vqmu2fme_2 | BootesVoid | 2025-09-18T08:42:59Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-09-18T08:42:56Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MODELXYZ
---
# Cmfp4Cy750Bs5X0N0Uaaotdzw_Cmfp4I8Uq0Bscx0N0Vqmu2Fme_2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MODELXYZ` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MODELXYZ",
"lora_weights": "https://huggingface.co/BootesVoid/cmfp4cy750bs5x0n0uaaotdzw_cmfp4i8uq0bscx0n0vqmu2fme_2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmfp4cy750bs5x0n0uaaotdzw_cmfp4i8uq0bscx0n0vqmu2fme_2', weight_name='lora.safetensors')
image = pipeline('MODELXYZ').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmfp4cy750bs5x0n0uaaotdzw_cmfp4i8uq0bscx0n0vqmu2fme_2/discussions) to add images that show off what you’ve made with this LoRA.
|
| BootesVoid/cmfp4cy750bs5x0n0uaaotdzw_cmfp4i8uq0bscx0n0vqmu2fme | BootesVoid | 2025-09-18T08:41:41Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-09-18T08:41:39Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MODELXYZ
---
# Cmfp4Cy750Bs5X0N0Uaaotdzw_Cmfp4I8Uq0Bscx0N0Vqmu2Fme
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MODELXYZ` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MODELXYZ",
"lora_weights": "https://huggingface.co/BootesVoid/cmfp4cy750bs5x0n0uaaotdzw_cmfp4i8uq0bscx0n0vqmu2fme/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmfp4cy750bs5x0n0uaaotdzw_cmfp4i8uq0bscx0n0vqmu2fme', weight_name='lora.safetensors')
image = pipeline('MODELXYZ').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmfp4cy750bs5x0n0uaaotdzw_cmfp4i8uq0bscx0n0vqmu2fme/discussions) to add images that show off what you’ve made with this LoRA.
|
| amphora/r1-orpo-2e-5 | amphora | 2025-09-18T08:41:21Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "axolotl", "trl", "orpo", "conversational", "arxiv:2403.07691", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-17T23:27:26Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
model_name: r1-orpo-2e-5
tags:
- generated_from_trainer
- axolotl
- trl
- orpo
licence: license
---
# Model Card for r1-orpo-2e-5
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="amphora/r1-orpo-2e-5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/guijinson/fastcampus/runs/bg7m5cut)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
| Thireus/Kimi-K2-Instruct-0905-THIREUS-IQ1_M-SPECIAL_SPLIT | Thireus | 2025-09-18T08:40:23Z | 0 | 0 | null | ["gguf", "arxiv:2505.23786", "license:mit", "region:us"] | null | 2025-09-18T08:23:49Z |
---
license: mit
---
# Kimi-K2-Instruct-0905
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Kimi-K2-Instruct-0905-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Kimi-K2-Instruct-0905 model (official repo: https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/Kimi-K2-Instruct-0905/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/ik_harmonized_recipes/Kimi-K2-Instruct-0905.ROOT-1.9968bpw-3.8128ppl.238GB-GGUF_10GB-GPU_228GB-CPU.90e3c2f_4766d51.recipe
# Other recipe examples can be found at https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
# Launch ik_llama's llama-cli:
ulimit -n 9999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m Kimi-K2-Instruct-0905-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-mla 3 -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, open-source, automated method to minimize perplexity for any bits-per-weight (bpw) target, so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
*All PPL values are computed with the parameters `-ctk f16 -c 512 -b 4096 -ub 4096`. Changing any of these parameters will alter the PPL. In particular, reducing `-b 4096 -ub 4096` increases the PPL, while increasing them decreases the PPL.*
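For reference, a hypothetical invocation that reproduces these settings with `ik_llama.cpp`'s perplexity tool could look like the sketch below; the binary path and the evaluation text file (`wiki.test.raw`) are assumptions, and the model file name is the header shard used in the example above.
```
# Sketch only: compute PPL with the same parameters quoted above.
~/ik_llama.cpp/build/bin/llama-perplexity \
  -m Kimi-K2-Instruct-0905-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
  -f wiki.test.raw \
  -ctk f16 -c 512 -b 4096 -ub 4096
```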
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your VRAM/RAM target usage for optimum perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷‍♂️ Will I release baked dynamic quant GGUFs?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them, or rely on generic GGUF dynamic quants such as [unsloth](https://huggingface.co/unsloth)'s.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Note that recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection (see the verification sketch after this list).
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
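A minimal verification sketch is shown below; the exact signature file names are assumptions, so adapt them to the signed files actually present in this repository:
```
# Import the trusted public key, then check the signed files for tampering.
gpg --import trusted-keys.asc
gpg --verify tensors.map.sig tensors.map
```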
---
## 💡 Pro Tips
You can easily download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
joanna302/Qwen3-8B-Base_en_alpaca_SFT_8e-05_fr_pt_zh_ar_without_en_DPO_1e-07
|
joanna302
| 2025-09-18T08:38:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"unsloth",
"arxiv:2305.18290",
"base_model:joanna302/Qwen3-8B-Base_en_alpaca_SFT_8e-05",
"base_model:finetune:joanna302/Qwen3-8B-Base_en_alpaca_SFT_8e-05",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T08:26:30Z |
---
base_model: joanna302/Qwen3-8B-Base_en_alpaca_SFT_8e-05
library_name: transformers
model_name: Qwen3-8B-Base_en_alpaca_SFT_8e-05_fr_pt_zh_ar_without_en_DPO_1e-07
tags:
- generated_from_trainer
- trl
- dpo
- unsloth
licence: license
---
# Model Card for Qwen3-8B-Base_en_alpaca_SFT_8e-05_fr_pt_zh_ar_without_en_DPO_1e-07
This model is a fine-tuned version of [joanna302/Qwen3-8B-Base_en_alpaca_SFT_8e-05](https://huggingface.co/joanna302/Qwen3-8B-Base_en_alpaca_SFT_8e-05).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_en_alpaca_SFT_8e-05_fr_pt_zh_ar_without_en_DPO_1e-07", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_en_alpaca_SFT_8e-05_fr_pt_zh_ar_without_en_DPO_1e-07/runs/xgpdz2lj)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.4
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
VoilaRaj/81_g_otGjz6
|
VoilaRaj
| 2025-09-18T08:37:35Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-18T08:37:03Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
tikauuk/blockassist
|
tikauuk
| 2025-09-18T08:35:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"small aquatic moose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T07:42:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- small aquatic moose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758184225
|
schooncestiaa
| 2025-09-18T08:31:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T08:31:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dhdbsrlw/OSPO-Unitok-MLLM-7B
|
dhdbsrlw
| 2025-09-18T08:31:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mini_gemini",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T08:30:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Ice0.115-10.05-RP-GGUF
|
mradermacher
| 2025-09-18T08:29:19Z | 160 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:icefog72/Ice0.115-10.05-RP",
"base_model:quantized:icefog72/Ice0.115-10.05-RP",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-18T02:08:50Z |
---
base_model: icefog72/Ice0.115-10.05-RP
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/icefog72/Ice0.115-10.05-RP
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Ice0.115-10.05-RP-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Ice0.115-10.05-RP-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
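As a minimal, hypothetical example (the file name is taken from the "Provided Quants" table below; any recent `llama.cpp` build with its CLI tools should work similarly):
```bash
# Fetch one quant (here Q4_K_S) and run it with llama.cpp's CLI.
huggingface-cli download mradermacher/Ice0.115-10.05-RP-GGUF Ice0.115-10.05-RP.Q4_K_S.gguf --local-dir .
llama-cli -m Ice0.115-10.05-RP.Q4_K_S.gguf -p "Hello," -n 64
```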
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Ice0.115-10.05-RP-GGUF/resolve/main/Ice0.115-10.05-RP.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Ice0.115-10.05-RP-GGUF/resolve/main/Ice0.115-10.05-RP.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Ice0.115-10.05-RP-GGUF/resolve/main/Ice0.115-10.05-RP.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Ice0.115-10.05-RP-GGUF/resolve/main/Ice0.115-10.05-RP.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Ice0.115-10.05-RP-GGUF/resolve/main/Ice0.115-10.05-RP.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ice0.115-10.05-RP-GGUF/resolve/main/Ice0.115-10.05-RP.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ice0.115-10.05-RP-GGUF/resolve/main/Ice0.115-10.05-RP.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ice0.115-10.05-RP-GGUF/resolve/main/Ice0.115-10.05-RP.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ice0.115-10.05-RP-GGUF/resolve/main/Ice0.115-10.05-RP.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Ice0.115-10.05-RP-GGUF/resolve/main/Ice0.115-10.05-RP.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Ice0.115-10.05-RP-GGUF/resolve/main/Ice0.115-10.05-RP.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Ice0.115-10.05-RP-GGUF/resolve/main/Ice0.115-10.05-RP.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
VoilaRaj/81_g_hGYk0D
|
VoilaRaj
| 2025-09-18T08:27:47Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-18T08:27:16Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
coastalcph/Llama-2-7b-chat-1t_gsm8k-0.5t_hh_diff_alpaca_375exs
|
coastalcph
| 2025-09-18T08:25:31Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-09-18T08:23:03Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4")
t_2 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-helpful-harmless-filtered-375exs")
t_3 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-helpful-alpaca-375exs")
t_combined = 1.0 * t_1 + 0.5 * t_2 - 0.5 * t_3
new_model = t_combined.apply_to("meta-llama/Llama-2-7b-chat-hf", scaling_coef=1.0)
```
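For readers unfamiliar with task arithmetic, the snippet below is a minimal, hypothetical sketch of what a `TaskVector` helper with the interface used above could look like. It is not the creation script referenced by the git hash below, and the class internals are assumptions.
```python
import torch
from transformers import AutoModelForCausalLM

class TaskVector:
    """Parameter-wise difference between a fine-tuned model and its base."""

    def __init__(self, base_model_id=None, finetuned_model_id=None, vector=None):
        if vector is not None:
            self.vector = vector
            return
        base = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.float32).state_dict()
        ft = AutoModelForCausalLM.from_pretrained(finetuned_model_id, torch_dtype=torch.float32).state_dict()
        self.vector = {k: ft[k] - base[k] for k in ft if k in base}

    def __add__(self, other):
        return TaskVector(vector={k: self.vector[k] + other.vector[k] for k in self.vector})

    def __sub__(self, other):
        return self + (-1.0) * other

    def __mul__(self, scale):
        return TaskVector(vector={k: scale * v for k, v in self.vector.items()})

    __rmul__ = __mul__

    def apply_to(self, base_model_id, scaling_coef=1.0):
        # Add the (scaled) combined task vector back onto the base model's weights.
        model = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.float32)
        state_dict = model.state_dict()
        for k, v in self.vector.items():
            if k in state_dict:
                state_dict[k] = state_dict[k] + scaling_coef * v
        model.load_state_dict(state_dict)
        return model
```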
## Models Used
- Base Model: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Llama-2-7b-chat-helpful-harmless-filtered-375exs
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Llama-2-7b-chat-helpful-alpaca-375exs
## Technical Details
- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args: {
"pretrained_model": "meta-llama/Llama-2-7b-chat-hf",
"finetuned_model1": "coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4",
"finetuned_model2": "coastalcph/Llama-2-7b-chat-helpful-harmless-filtered-375exs",
"finetuned_model3": "coastalcph/Llama-2-7b-chat-helpful-alpaca-375exs",
"output_model_name": "coastalcph/Llama-2-7b-chat-1t_gsm8k-0.5t_hh_diff_alpaca_375exs",
"output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
"scaling_coef": 1.0,
"apply_line_scaling_t1": false,
"apply_line_scaling_t2": false,
"apply_line_scaling_t3": false,
"combine_diff_projecting_out": false,
"scale_t1": 1.0,
"scale_t2": 0.5,
"scale_t3": 0.5
}
|
Kowshik24/bangla-sentence-transformer-ft-matryoshka-distiluse-base-multilingual-cased-v1
|
Kowshik24
| 2025-09-18T08:25:10Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"distilbert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:10134",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"bn",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:sentence-transformers/distiluse-base-multilingual-cased-v1",
"base_model:finetune:sentence-transformers/distiluse-base-multilingual-cased-v1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T08:24:43Z |
---
language:
- bn
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:10134
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/distiluse-base-multilingual-cased-v1
widget:
- source_sentence: গুড় আখ কিংবা খেজুরের রস হতে তৈরি করা এক প্রকারের মিষ্টদ্রব্য।
তালের রস হতেও গুড় তৈরি করা হয়। আখ, খেজুর এবং তাল গাছের রস ঘন করে পাক দিয়ে গুড়
তৈরি করা হয়। গুড় প্রধানত ৩ প্রকার; ঝোলাগুড়, পাটালিগুড়, চিটাগুড়। প্রথমে আখের
বা খেজুরের রস একটি বড় খোলা পাত্রে ছেঁকে রাখা হয়। পরে সময় নিয়ে, বড় একটি চুলায়
তা জ্বাল দিতে হয়, এতে জলীয় অংশ বাষ্প হয়ে যায়। ধীরে ধীরে রসের রং লালচে হতে
শুরু করে এবং টেনে আসে। এরপর এই উত্তপ্ত রস শীতল করা হয়, অবশেষে গুড় পাওয়া যাবে।
এ রসকে গুড় না বানিয়ে চিনিও বানানো যায়। গুড় চিনির থেকে কম মিষ্টি হলেও বেশি
পুষ্টিকর। চিনির জন্য দুবার ফোটালে ঘন কালচে একটু তিতকুটে ভেলি গুড় পড়ে থাকে। আরো
বেশি বার চিনি বের করে নিলে থাকে চিটে গুড় , যার মধ্য প্রচুর ভিটামিন থাকলেও তেতো
বলে সাধারণত গরুকে খাওয়ানো হয়। বাংলাদেশে গুড় দিয়ে পিঠা, পায়েস ইত্যাদি সুস্বাদু
নাস্তা তৈরি করা হয়। গুড়ের সন্দেশ এদেশে একটি অত্যন্ত জনপ্রিয় মিষ্টান্ন।
sentences:
- কোন কোন গাছের রস থেকে গুড় তৈরি হয়?
- ও’নিলের প্রকল্পটির কোনও পূর্বসূরি আছে কি?
- ট্রানজিস্টর উদ্ভাবন করেন কারা?
- source_sentence: রিকার্দো এলিয়েসের নেফতালি রেইয়েস বাসোয়ালতো ১৯০৪ সালের ১২ই জুলাই
চিলির সান্তিয়াগোর ৩৫০ কিমি দক্ষিণের লিনারেস প্রদেশের (বর্তমান বৃহত্তর মাউলে অঞ্চল)
পাররাল শহরে জন্মগ্রহণ করেন। তার পিতা হোসে দেল কারমেন রেইয়েস মোরেলস একজন রেলওয়ের
কর্মকর্তা এবং মাতা রোসা নেফতালি বাসোয়ালতো ওপাজো একজন বিদ্যালয়ের শিক্ষিকা ছিলেন।
তার মাতা তার জন্মের দুমাস পর মৃত্যুবরণ করেন। তার মৃত্যুর পর পরই রেইয়েস তেমুকোতে
পাড়ি জমান। তিনি সেখানে ত্রিনিদাদ কানদিয়া মালভারদে নামক একটি মহিলাকে বিয়ে করেন,
যার পূর্বে নয় বছর বয়সী রোদোলফো দে লা রোসা নামক একজন পুত্রসন্তান ছিল। নেরুদা
তেমুকোতে তার সৎভাই রোদোলফো এবং আউরেইয়া তোলরা নামক একজন কাতালান মহিলার সাথে তার
পিতার বিবাহ-বহির্ভূত সম্পর্কের ফলে জন্ম নেওয়া সৎবোন লরা হেরমিনিয়া "লরিতা"র সাথে
বেড়ে ওঠেন। তিনি তার প্রথম দিকের কবিতাগুলো ১৯১৪ সালের শীতকালে রচনা করেছিলেন। নেরুদা
একজন নাস্তিক।
sentences:
- চাঁদ কি পৃথিবীর একমাত্র প্রাকৃতিক উপগ্রহ ?
- পাবলো নেরুদার মাতা কখন মৃত্যুবরণ করেন?
- ডার্ক ওয়েব এবং তাদের সেবা সম্পর্কে কাভারেজ এবং ব্যবহারিক তথ্য প্রদান করে কোন
কোন সংবাদপত্র?
- source_sentence: 'বিনোদন হচ্ছে অবসরের কোনও কর্ম, অবসর যা বিশ্লেষণমূলক সময়। "অবসর
বিনোদনের জন্য কোনও কিছু করার চাহিদা" একটি অপরিহার্য উপাদান, মানব জীববিদ্যা ও মনোবিজ্ঞানের
জন্য। বিনোদনমূলক কার্যক্রম প্রায়ই করা হয় রমণ, বিনোদন, বা আনন্দ এর জন্যে।
বিনোদন শব্দটি প্রথম ইংরেজিতে ব্যবহৃত হয় সম্ভবত ১৪ শতকের শেষের দিকে, প্রথম অর্থ,
"জলখাবার বা আরোগ্যকরণ একজন অসুস্থ ব্যক্তির", মোড় প্রাপ্ত করল লাতিন। মানবজাতি
তাদের সময় কাটায় দৈনিক জীবনজাপনের কার্যক্রমে, কাজে, নিদ্রায়, সামাজিক কর্তব্যে,
এবং বিনোদনে।'
sentences:
- বিনোদন শব্দটি প্রথম ইংরেজিতে ব্যবহৃত হয় কখন?
- '১৯৭০ খ্রিষ্টাব্দের ১২ই নভেম্বর ভোলায় ঘূর্ণিঝড়ের কারণে প্রায় কত লক্ষ মানুষের
মৃত্যু হয়? '
- ক্রিকেটীয় পরিভাষাবিশেষ ও খেলার পয়েন্ট সংগ্রহকারী একক কী?
- source_sentence: আয়িশা বিনতে আবু বকর ছিলেন ইসলামী নবী মুহাম্মদের স্ত্রীগণের মধ্যে
একজন। তিনি ছিলেন তার তৃতীয় স্ত্রী। ইসলামের ঐতিহ্য অনুসারে, তাকে "উম্মুল মু'মিনিন"
বা "বিশ্বাসীদের মাতা" হিসেবে আখ্যায়িত করা হয়। মুসলিম সম্প্রদায় তাকে মুহাম্মদের
স্ত্রী হিসেবে অত্যন্ত সম্মান ও শ্রদ্ধা করে থাকেন। এছাড়া ইসলামের ঐতিহ্যগত ইতিহাসেও
তার অবদান অনস্বীকার্য এবং অত্যন্ত গুরুত্বপূর্ণ। আয়িশা ৬১৩ খ্রিষ্টাব্দের শেষের
দিকে মতান্তরে ৬১৪ খ্রিষ্টাব্দের প্রথম দিকে জন্মগ্রহণ করেন। আবু বকর তার পিতা, যিনি
মুহাম্মদের অত্যন্ত বিশ্বস্ত একজন সাহাবী ও সহচর ছিলেন। তার পিতার নাম আবু বকর ও
মাতার নাম উম্মে রুমান বিনতে আমির। মুহাম্মাদের সঙ্গে আয়িশার বিয়ে হয় মূলত খাদিজা
বিনতে খুয়ালিদ এর মৃত্যুর পরে। মুহাম্মদ সওদাকে (যাম'আ ইবনে কাঈসের কন্যা) বিয়ে
করার পর আয়িশাকে পরবর্তীতে তৃতীয় স্ত্রী হিসেবে গ্রহণ করেন। তার বিয়ে খাদিজার
মৃত্যুর পরে হয়েছিল, এ পক্ষে বেশিরভাগ গবেষকই একই মত পোষণ করেন। যদিও, তার বিয়ে
হিজরতের দুই না তিন বছর আগে হয়েছিল, এ নিয়ে ভিন্নমত প্রচলিত আছে। কিছু সুত্র থেকে
পাওয়া যায় যে মুহাম্মদের সঙ্গে তার বিবাহ্ সওদার সঙ্গে বিয়ের পূর্বে হয়েছিল৷
যদিও বেশিরভাগ হাদীস মোতাবেক, মুহাম্মদ সওদাকে আয়িশার পূর্বে বিয়ে করেছিলেন। এটি
প্রচলিত যে, উসমান বিন মা'যুনের স্ত্রী খাওলা আবু বকরের নিকট দেখা করতে গিয়েছিলেন
এবং এই বিয়ের প্রস্তাব দিয়েছিলেন। মুহাম্মদের সঙ্গে ছয় বা সাত বছর বয়সে আয়িশার
বিয়ে হয়। বয়সের দিক থেকে তিনি ছিলেন মুহাম্মাদের স্ত্রীদের মাঝে কনিষ্ঠতম।
sentences:
- ইসলামের ঐতিহ্য অনুসারে আয়িশা বিনতে আবু বকর কে কী হিসেবে আখ্যায়িত করা হয় ?
- "নিকাশ ঘর কোন ব্যাংকে দেখা যায়?\t"
- রানী এলিজাবেথের কাছে বাংলাদেশের কোন দই পাঠানো হয়?
- source_sentence: ফিরখো ১৮৪৮ সালের বিপ্লবে অংশগ্রহণ করেন। যার ফলে পরবর্তী বছর তিনি
শারিতে থেকে বহিষ্কৃত হন। তিনি অতঃপর ডাই মেডিজিনিশ্চে রিফর্ম (চিকিৎসাবিজ্ঞান সংক্রান্ত
সংস্কার) নামে একটি পত্রিকা প্রকাশ করেন। তিনি ১৮৪৯ সালে ভুর্জবুর্গ বিশ্ববিদ্যালয়ের
রোগবৈজ্ঞানিক শারীরবিদ্যা বিভাগের প্রথম সভাপতি হন। পাঁচ বছর পরে শারিতে হাসপাতাল
তাকে সদ্য প্রতিষ্ঠিত রোগবিজ্ঞান ইনস্টিটিউটের সভাপতি হিসেবে নিয়োগ দেয়। তিনি "ডয়েচে
ফোর্টরিশপার্টেই"(প্রগ্রেস পার্টি) নামে একটি রাজনৈতিক দল প্রতিষ্ঠা করেন। তিনি প্রুশীয়
হাউজ অব রিপ্রেজেন্টেটিভসের সদস্যপদে নির্বাচিত হন। এছাড়াও তিনি রাইখস্ট্যাগে একটি
আসনে জয়লাভ করেন। অটো ফন বিসমার্কের অর্থনৈতিক নীতির প্রতি তার বিরোধিতা "সসেজ সংঘাত"
বা সসেজ ডুয়েলে রূপ নেয়। বিসমার্ককে ক্যাথলিকবিরোধী প্রচারণায় অবশ্য তিনি সাহায্য
করেন, যাকে তিনি "কুলটুরকাম্ফ" বা সাংস্কৃতিক যুদ্ধ নাম দেন।
sentences:
- রুডল্ফ লুডভিগ কার্ল ফিরখো কেন শারিতে থেকে বহিষ্কৃত হন?
- 'উপজেলার প্রশাসনিক দায়িত্বে কোন পরিষদ নিয়োজিত? '
- ' সফলতার সম্ভাবনা বিশ্লেষণে কি কি উপাদান গণনায় ধরতে হয়?'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: Bangla Sentence Transformer FT Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.3790035587188612
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.3807829181494662
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4359430604982206
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.4697508896797153
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3790035587188612
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.37781731909845784
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.30587188612099653
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.17695729537366547
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.10105278766310793
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.3009341637010676
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.39808718861209963
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.4592230130486358
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4213024969320639
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.39548840591990014
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4248196712422387
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.3692170818505338
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.3718861209964413
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4306049822064057
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.46441281138790036
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3692170818505338
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3683274021352313
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.300355871886121
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.174288256227758
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.09845788849347567
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.29337188612099646
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.3909400948991696
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.452505931198102
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4134809797152565
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.38683485849855914
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.41567384942573227
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.3620996441281139
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.36298932384341637
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4199288256227758
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.44217081850533807
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3620996441281139
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.36061684460260973
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.29466192170818506
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.16699288256227757
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.09623368920521945
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.2862544483985765
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.3830367734282325
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.4338671411625148
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4003933270205118
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3774268485567414
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4047528696853651
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.3202846975088968
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.3238434163701068
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.3834519572953737
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.41548042704626337
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3202846975088968
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3208778173190984
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.2656583629893239
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.15622775800711744
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.08494958481613285
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.2543149466192171
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.34509193357058127
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.40606465005931197
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.366206019196373
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.33890089250409505
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.36752543606074617
name: Cosine Map@100
---
# Bangla Sentence Transformer FT Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/distiluse-base-multilingual-cased-v1](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1). It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/distiluse-base-multilingual-cased-v1](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1) <!-- at revision 826fee3d516ebb14987355af373f5b69101c7006 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 512 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** bn
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'DistilBertModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Kowshik24/bangla-sentence-transformer-ft-matryoshka-distiluse-base-multilingual-cased-v1")
# Run inference
sentences = [
'ফিরখো ১৮৪৮ সালের বিপ্লবে অংশগ্রহণ করেন। যার ফলে পরবর্তী বছর তিনি শারিতে থেকে বহিষ্কৃত হন। তিনি অতঃপর ডাই মেডিজিনিশ্চে রিফর্ম (চিকিৎসাবিজ্ঞান সংক্রান্ত সংস্কার) নামে একটি পত্রিকা প্রকাশ করেন। তিনি ১৮৪৯ সালে ভুর্জবুর্গ বিশ্ববিদ্যালয়ের রোগবৈজ্ঞানিক শারীরবিদ্যা বিভাগের প্রথম সভাপতি হন। পাঁচ বছর পরে শারিতে হাসপাতাল তাকে সদ্য প্রতিষ্ঠিত রোগবিজ্ঞান ইনস্টিটিউটের সভাপতি হিসেবে নিয়োগ দেয়। তিনি "ডয়েচে ফোর্টরিশপার্টেই"(প্রগ্রেস পার্টি) নামে একটি রাজনৈতিক দল প্রতিষ্ঠা করেন। তিনি প্রুশীয় হাউজ অব রিপ্রেজেন্টেটিভসের সদস্যপদে নির্বাচিত হন। এছাড়াও তিনি রাইখস্ট্যাগে একটি আসনে জয়লাভ করেন। অটো ফন বিসমার্কের অর্থনৈতিক নীতির প্রতি তার বিরোধিতা "সসেজ সংঘাত" বা সসেজ ডুয়েলে রূপ নেয়। বিসমার্ককে ক্যাথলিকবিরোধী প্রচারণায় অবশ্য তিনি সাহায্য করেন, যাকে তিনি "কুলটুরকাম্ফ" বা সাংস্কৃতিক যুদ্ধ নাম দেন।',
'রুডল্ফ লুডভিগ কার্ল ফিরখো কেন শারিতে থেকে বহিষ্কৃত হন?',
'উপজেলার প্রশাসনিক দায়িত্বে কোন পরিষদ নিয়োজিত? ',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 512]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6753, 0.0945],
# [0.6753, 1.0000, 0.0579],
# [0.0945, 0.0579, 1.0000]])
```
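Because the model was trained with a Matryoshka objective (see Training Details below), embeddings can also be truncated to a smaller dimensionality at load time. The snippet below is an illustrative sketch; 256 dimensions is just an example choice.
```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 embedding dimensions (trained truncations: 512/256/128/64).
model_256 = SentenceTransformer(
    "Kowshik24/bangla-sentence-transformer-ft-matryoshka-distiluse-base-multilingual-cased-v1",
    truncate_dim=256,
)
embeddings = model_256.encode(["বাংলা ভাষা একটি সমৃদ্ধ ভাষা।"])
print(embeddings.shape)  # (1, 256)
```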
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.379 |
| cosine_accuracy@3 | 0.3808 |
| cosine_accuracy@5 | 0.4359 |
| cosine_accuracy@10 | 0.4698 |
| cosine_precision@1 | 0.379 |
| cosine_precision@3 | 0.3778 |
| cosine_precision@5 | 0.3059 |
| cosine_precision@10 | 0.177 |
| cosine_recall@1 | 0.1011 |
| cosine_recall@3 | 0.3009 |
| cosine_recall@5 | 0.3981 |
| cosine_recall@10 | 0.4592 |
| **cosine_ndcg@10** | **0.4213** |
| cosine_mrr@10 | 0.3955 |
| cosine_map@100 | 0.4248 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3692 |
| cosine_accuracy@3 | 0.3719 |
| cosine_accuracy@5 | 0.4306 |
| cosine_accuracy@10 | 0.4644 |
| cosine_precision@1 | 0.3692 |
| cosine_precision@3 | 0.3683 |
| cosine_precision@5 | 0.3004 |
| cosine_precision@10 | 0.1743 |
| cosine_recall@1 | 0.0985 |
| cosine_recall@3 | 0.2934 |
| cosine_recall@5 | 0.3909 |
| cosine_recall@10 | 0.4525 |
| **cosine_ndcg@10** | **0.4135** |
| cosine_mrr@10 | 0.3868 |
| cosine_map@100 | 0.4157 |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 128
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3621 |
| cosine_accuracy@3 | 0.363 |
| cosine_accuracy@5 | 0.4199 |
| cosine_accuracy@10 | 0.4422 |
| cosine_precision@1 | 0.3621 |
| cosine_precision@3 | 0.3606 |
| cosine_precision@5 | 0.2947 |
| cosine_precision@10 | 0.167 |
| cosine_recall@1 | 0.0962 |
| cosine_recall@3 | 0.2863 |
| cosine_recall@5 | 0.383 |
| cosine_recall@10 | 0.4339 |
| **cosine_ndcg@10** | **0.4004** |
| cosine_mrr@10 | 0.3774 |
| cosine_map@100 | 0.4048 |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 64
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3203 |
| cosine_accuracy@3 | 0.3238 |
| cosine_accuracy@5 | 0.3835 |
| cosine_accuracy@10 | 0.4155 |
| cosine_precision@1 | 0.3203 |
| cosine_precision@3 | 0.3209 |
| cosine_precision@5 | 0.2657 |
| cosine_precision@10 | 0.1562 |
| cosine_recall@1 | 0.0849 |
| cosine_recall@3 | 0.2543 |
| cosine_recall@5 | 0.3451 |
| cosine_recall@10 | 0.4061 |
| **cosine_ndcg@10** | **0.3662** |
| cosine_mrr@10 | 0.3389 |
| cosine_map@100 | 0.3675 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,134 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 90 tokens</li><li>mean: 127.85 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 25.98 tokens</li><li>max: 82 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>ফাজিল পরীক্ষা বাংলাদেশ ও ভারতের আলিয়া মাদ্রাসায় অনুষ্ঠিত একটি সরকারি পরীক্ষা। ফাজিল পরীক্ষা বাংলাদেশে ডিগ্রি সমমানের, কখনো স্নাতক সমমানের একটি পরীক্ষা, যা একটি ফাজিল মাদ্রাসায় অনুষ্ঠিত হয়ে থাকে। তবে ভারতে ফাজিল পরীক্ষাকে উচ্চ মাধ্যমিক শ্রেণীর (১১ বা ১২ ক্লাস) মান বলে বিবেচিত করা হয়। ফাজিল পরীক্ষা বাংলাদেশ ভারত ও পাকিস্তানের সরকারি স্বীকৃত আলিয়া মাদরাসায় প্রচলিত রয়েছে। বাংলাদেশের ফাজিল পরীক্ষা ইসলামি আরবি বিশ্ববিদ্যালয়ের অধীনে অনুষ্ঠিত হয়ে থাকে ও ভারতের ফাজিল পরীক্ষা পশ্চিমবঙ্গ মাদ্রাসা শিক্ষা পর্ষদের অধীনে অনুষ্ঠিত হয়ে থাকে।<br><br>১৯৪৭ সালে ঢাকা আলিয়া মাদ্রাসা ঢাকায় স্থানান্তরের পূর্বে বাংলাদেশ ও ভারতের ফাজিল পরীক্ষা কলকাতা আলিয়া মাদ্রাসার অধীনে অনুষ্ঠিত হতো। ফাযিল পরীক্ষা বর্তমানে ইসলামি আরবী বিশ্ববিদ্যালয়ের অধীনে অনুষ্ঠিত হয়। যা পূর্বে মাদরাসা বোর্ড ও ইসলামি বিশ্ববিদ্যালয়ের আধীনে অনুষ্ঠিত হত। মাদ্রাসা-ই-আলিয়া ঢাকায় স্থানান্তরিত হলে ১৯৪৮ সালে মাদ্রাসা বোর্ডের ফাজিলগুলো পরীক্ষা ঢাকা বিশ্ববিদ্যালয় কর্তৃক গৃহীত হতো। ১৯৭৫ সালের কুদরত-এ-খুদা শিক্ষা কমিশনের সুপারিশে মাদ্রাসা ...</code> | <code>ফাজিল পরীক্ষা বাংলাদেশ ও ভারতের আলিয়া মাদ্রাসায় অনুষ্ঠিত একটি সরকারি পরীক্ষা ?</code> |
| <code>ফাজিল পরীক্ষা বাংলাদেশ ও ভারতের আলিয়া মাদ্রাসায় অনুষ্ঠিত একটি সরকারি পরীক্ষা। ফাজিল পরীক্ষা বাংলাদেশে ডিগ্রি সমমানের, কখনো স্নাতক সমমানের একটি পরীক্ষা, যা একটি ফাজিল মাদ্রাসায় অনুষ্ঠিত হয়ে থাকে। তবে ভারতে ফাজিল পরীক্ষাকে উচ্চ মাধ্যমিক শ্রেণীর (১১ বা ১২ ক্লাস) মান বলে বিবেচিত করা হয়। ফাজিল পরীক্ষা বাংলাদেশ ভারত ও পাকিস্তানের সরকারি স্বীকৃত আলিয়া মাদরাসায় প্রচলিত রয়েছে। বাংলাদেশের ফাজিল পরীক্ষা ইসলামি আরবি বিশ্ববিদ্যালয়ের অধীনে অনুষ্ঠিত হয়ে থাকে ও ভারতের ফাজিল পরীক্ষা পশ্চিমবঙ্গ মাদ্রাসা শিক্ষা পর্ষদের অধীনে অনুষ্ঠিত হয়ে থাকে।<br><br>১৯৪৭ সালে ঢাকা আলিয়া মাদ্রাসা ঢাকায় স্থানান্তরের পূর্বে বাংলাদেশ ও ভারতের ফাজিল পরীক্ষা কলকাতা আলিয়া মাদ্রাসার অধীনে অনুষ্ঠিত হতো। ফাযিল পরীক্ষা বর্তমানে ইসলামি আরবী বিশ্ববিদ্যালয়ের অধীনে অনুষ্ঠিত হয়। যা পূর্বে মাদরাসা বোর্ড ও ইসলামি বিশ্ববিদ্যালয়ের আধীনে অনুষ্ঠিত হত। মাদ্রাসা-ই-আলিয়া ঢাকায় স্থানান্তরিত হলে ১৯৪৮ সালে মাদ্রাসা বোর্ডের ফাজিলগুলো পরীক্ষা ঢাকা বিশ্ববিদ্যালয় কর্তৃক গৃহীত হতো। ১৯৭৫ সালের কুদরত-এ-খুদা শিক্ষা কমিশনের সুপারিশে মাদ্রাসা ...</code> | <code>কত সালে ঢাকা আলিয়া মাদ্রাসা ঢাকায় স্থানান্তরের পূর্বে বাংলাদেশ ও ভারতের ফাজিল পরীক্ষা কলকাতা আলিয়া মাদ্রাসার অধীনে অনুষ্ঠিত হতো ?</code> |
| <code>ফাজিল পরীক্ষা বাংলাদেশ ও ভারতের আলিয়া মাদ্রাসায় অনুষ্ঠিত একটি সরকারি পরীক্ষা। ফাজিল পরীক্ষা বাংলাদেশে ডিগ্রি সমমানের, কখনো স্নাতক সমমানের একটি পরীক্ষা, যা একটি ফাজিল মাদ্রাসায় অনুষ্ঠিত হয়ে থাকে। তবে ভারতে ফাজিল পরীক্ষাকে উচ্চ মাধ্যমিক শ্রেণীর (১১ বা ১২ ক্লাস) মান বলে বিবেচিত করা হয়। ফাজিল পরীক্ষা বাংলাদেশ ভারত ও পাকিস্তানের সরকারি স্বীকৃত আলিয়া মাদরাসায় প্রচলিত রয়েছে। বাংলাদেশের ফাজিল পরীক্ষা ইসলামি আরবি বিশ্ববিদ্যালয়ের অধীনে অনুষ্ঠিত হয়ে থাকে ও ভারতের ফাজিল পরীক্ষা পশ্চিমবঙ্গ মাদ্রাসা শিক্ষা পর্ষদের অধীনে অনুষ্ঠিত হয়ে থাকে।<br><br>১৯৪৭ সালে ঢাকা আলিয়া মাদ্রাসা ঢাকায় স্থানান্তরের পূর্বে বাংলাদেশ ও ভারতের ফাজিল পরীক্ষা কলকাতা আলিয়া মাদ্রাসার অধীনে অনুষ্ঠিত হতো। ফাযিল পরীক্ষা বর্তমানে ইসলামি আরবী বিশ্ববিদ্যালয়ের অধীনে অনুষ্ঠিত হয়। যা পূর্বে মাদরাসা বোর্ড ও ইসলামি বিশ্ববিদ্যালয়ের আধীনে অনুষ্ঠিত হত। মাদ্রাসা-ই-আলিয়া ঢাকায় স্থানান্তরিত হলে ১৯৪৮ সালে মাদ্রাসা বোর্ডের ফাজিলগুলো পরীক্ষা ঢাকা বিশ্ববিদ্যালয় কর্তৃক গৃহীত হতো। ১৯৭৫ সালের কুদরত-এ-খুদা শিক্ষা কমিশনের সুপারিশে মাদ্রাসা ...</code> | <code>মাদ্রাসা-ই-আলিয়া ঢাকায় স্থানান্তরিত হলে কত সালে মাদ্রাসা বোর্ডের ফাজিলগুলো পরীক্ষা ঢাকা বিশ্ববিদ্যালয় কর্তৃক গৃহীত হতো ?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
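For illustration, the loss configuration above corresponds roughly to the following sentence-transformers setup; this is a sketch rather than the exact training script, and dataset loading and trainer arguments are omitted.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/distiluse-base-multilingual-cased-v1")

# In-batch negatives loss over (anchor, positive) pairs, applied at 512/256/128/64 dims.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1],
)
```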
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `gradient_accumulation_steps`: 4
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.6309 | 50 | 11.8259 | - | - | - | - |
| 1.0 | 80 | - | 0.3698 | 0.3540 | 0.3263 | 0.2924 |
| 1.2524 | 100 | 7.9382 | - | - | - | - |
| 1.8833 | 150 | 6.1184 | - | - | - | - |
| 2.0 | 160 | - | 0.3968 | 0.3962 | 0.3733 | 0.3494 |
| 2.5047 | 200 | 4.5834 | - | - | - | - |
| 3.0 | 240 | - | 0.4154 | 0.4096 | 0.3933 | 0.3589 |
| 3.1262 | 250 | 4.0591 | - | - | - | - |
| 3.7571 | 300 | 3.5441 | - | - | - | - |
| **4.0** | **320** | **-** | **0.4213** | **0.4135** | **0.4004** | **0.3662** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.56.1
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.1.0
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
cemoss17/ingredient-gram-grpo-vl-3b
|
cemoss17
| 2025-09-18T08:24:07Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-17T06:35:53Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: ingredient-gram-grpo-vl-3b
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for ingredient-gram-grpo-vl-3b
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cemoss17/ingredient-gram-grpo-vl-3b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cemoss17-sciscore/huggingface/runs/a9au7567)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.1.0
- Tokenizers: 0.22.0
## Citations
Cite GRPO as:
```bibtex
@article{shao2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
VoilaRaj/81_g_6375QK
|
VoilaRaj
| 2025-09-18T08:22:49Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-18T08:22:18Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
sreenathsree1578/facial_emotion
|
sreenathsree1578
| 2025-09-18T08:22:26Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-09-18T08:11:55Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
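No model class is documented here, so the snippet below is only a generic sketch of the PyTorchModelHubMixin loading pattern; the class name `FacialEmotionModel` and its architecture are placeholders, and the real class definition used when pushing this checkpoint is required to actually restore the weights.
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class FacialEmotionModel(nn.Module, PyTorchModelHubMixin):  # placeholder name and architecture
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.classifier = nn.Sequential(nn.Flatten(), nn.LazyLinear(num_classes))

    def forward(self, x):
        return self.classifier(x)

# With the original class definition in scope, loading follows the mixin API:
model = FacialEmotionModel.from_pretrained("sreenathsree1578/facial_emotion")
model.eval()
```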
|
DazzlingXeno/Mira-v1.2-dpo-27B-Q4_K_S-GGUF
|
DazzlingXeno
| 2025-09-18T08:22:15Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:CyberNative/Code_Vulnerability_Security_DPO",
"dataset:nbeerbower/GreatFirewall-DPO",
"dataset:nbeerbower/synthetic-fiction-dpo",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"base_model:Lambent/Mira-v1.2-dpo-27B",
"base_model:quantized:Lambent/Mira-v1.2-dpo-27B",
"license:gemma",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T08:20:12Z |
---
license: gemma
datasets:
- CyberNative/Code_Vulnerability_Security_DPO
- nbeerbower/GreatFirewall-DPO
- nbeerbower/synthetic-fiction-dpo
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
base_model: Lambent/Mira-v1.2-dpo-27B
tags:
- llama-cpp
- gguf-my-repo
---
# DazzlingXeno/Mira-v1.2-dpo-27B-Q4_K_S-GGUF
This model was converted to GGUF format from [`Lambent/Mira-v1.2-dpo-27B`](https://huggingface.co/Lambent/Mira-v1.2-dpo-27B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Lambent/Mira-v1.2-dpo-27B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo DazzlingXeno/Mira-v1.2-dpo-27B-Q4_K_S-GGUF --hf-file mira-v1.2-dpo-27b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo DazzlingXeno/Mira-v1.2-dpo-27B-Q4_K_S-GGUF --hf-file mira-v1.2-dpo-27b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo DazzlingXeno/Mira-v1.2-dpo-27B-Q4_K_S-GGUF --hf-file mira-v1.2-dpo-27b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo DazzlingXeno/Mira-v1.2-dpo-27B-Q4_K_S-GGUF --hf-file mira-v1.2-dpo-27b-q4_k_s.gguf -c 2048
```
|
ekatosha/finetuned_model
|
ekatosha
| 2025-09-18T08:21:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dslim/bert-base-NER",
"base_model:finetune:dslim/bert-base-NER",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-09-18T08:17:02Z |
---
library_name: transformers
license: mit
base_model: dslim/bert-base-NER
tags:
- generated_from_trainer
model-index:
- name: finetuned_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_model
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
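For readers who want to reproduce a comparable run, the sketch below shows how these values map onto `transformers.TrainingArguments`; the output directory is an assumption, and the dataset, model, and `Trainer` wiring are omitted.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned_model",      # assumed; not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",               # AdamW with betas=(0.9, 0.999) and eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```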
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 7 | 4.6827 | 0.0759 | 0.0513 | 0.0612 | 0.4236 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.2
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Dryze/fine_tuned_loraW2
|
Dryze
| 2025-09-18T08:19:43Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T08:19:35Z |
---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
library_name: transformers
model_name: fine_tuned_loraW2
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for fine_tuned_loraW2
This model is a fine-tuned version of [unsloth/llama-3.2-1b-instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3.2-1b-instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Dryze/fine_tuned_loraW2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/duta2711-universitas-gadjah-mada-library/huggingface/runs/yikbi1vv)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Heoni/KONI-gemma-3-4b-cpt-it-dpo_ko-r1-YiSang_16k_wo_packing_4e-5_20250918_5ep
|
Heoni
| 2025-09-18T08:19:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T08:17:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cjkasbdkjnlakb/agent-0918-xml-5k
|
cjkasbdkjnlakb
| 2025-09-18T08:17:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen3",
"text-generation",
"axolotl",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"lora",
"transformers",
"conversational",
"dataset:custom",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T08:16:33Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen3-4B-Instruct-2507
tags:
- axolotl
- base_model:adapter:Qwen/Qwen3-4B-Instruct-2507
- lora
- transformers
datasets:
- custom
pipeline_tag: text-generation
model-index:
- name: checkpoints/0918-xml-5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.12.2`
```yaml
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
# Whether to load the model in 8-bit precision
load_in_8bit: false
# Whether to load the model in 4-bit precision (tied to QLoRA, which requires it)
load_in_4bit: false
# Whether to strictly match the model structure; disabling it allows loading with small structural differences (e.g., to fit an adapter)
# strict: false
base_model: Qwen/Qwen3-4B-Instruct-2507
# Dataset settings
chat_template: qwen3
datasets:
- path: /workspace/train_dir/tool_and_retrieval_agent_train_data_xml_5k.json # each "-" is one list item, so several datasets can be used at once
type: chat_template # chat_template (custom format) or alpaca
roles_to_train: ["assistant"]
field_messages: messages # field that holds the messages
message_property_mappings: # message_property_mappings={'role':'role', 'content':'content'})
role: role
content: content
dataset_prepared_path:
val_set_size: 0.05
output_dir: checkpoints/0918-xml-5k
sequence_len: 16384 # maximum context length the model can handle (default 2048)
pad_to_sequence_len: true
# context_parallel_size: 2 # split long sequences across multiple GPUs (requires micro_batch_size: 1)
sample_packing: false # pack multiple samples into one long sequence (sequence_len) during training to improve efficiency
eval_sample_packing: false # pack multiple samples during evaluation
# Training hyperparameters
adapter: lora # lora qlora
lora_model_dir:
lora_r: 16 # 16 is the usual default for lora_r, balancing accuracy and memory
lora_alpha: 64 # scaling factor controlling the influence of LoRA; usually set to 2*r or 4*r
lora_dropout: 0.05
lora_target_linear: true
micro_batch_size: 4 # micro-batch size; a 94G H100 can use 4 (at ~10k tokens)
gradient_accumulation_steps: 2 # gradient accumulation: accumulate gradients over several micro-batches (micro_batch_size) before updating the weights; an effective batch size of 16 is typical, below 8 training gets noisy, above 32 only costs more time for limited gain
auto_find_batch_size: false # lets Axolotl keep adjusting batch_size; ⚠️ not applicable with ZeRO-3
num_epochs: 1
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 4e-5
# bf16: auto + tf32: true gives better stability and performance.
bf16: auto
tf32: true
# early_stopping_patience:
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
# auto_resume_from_checkpoints: true # automatically resume from the latest checkpoint in output_dir
logging_steps: 1
flash_attention: true
warmup_steps: 50
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.01
fsdp:
- full_shard
- auto_wrap
fsdp_config:
fsdp_limit_all_gathers: true
fsdp_sync_module_states: true
fsdp_offload_params: false # the H200 has enough memory, so no offload is needed
fsdp_use_orig_params: false
fsdp_cpu_ram_efficient_loading: true
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: Qwen3DecoderLayer
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sharding_strategy: FULL_SHARD
```
</details><br>
# checkpoints/0918-xml-5k
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) on the /workspace/train_dir/tool_and_retrieval_agent_train_data_xml_5k.json dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0774
- Memory/max Mem Active(gib): 128.99
- Memory/max Mem Allocated(gib): 128.8
- Memory/device Mem Reserved(gib): 130.32
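Since this repository holds a LoRA adapter rather than full weights, a minimal loading sketch is given below; it assumes the adapter attaches cleanly to the public base model with PEFT, and the prompt is only a placeholder.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-4B-Instruct-2507"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "cjkasbdkjnlakb/agent-0918-xml-5k")  # attach the LoRA adapter

messages = [{"role": "user", "content": "List the tools you can call and their arguments."}]  # placeholder prompt
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```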
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: ADAMW_TORCH_FUSED with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 149
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mem Active(gib) | Mem Allocated(gib) | Mem Reserved(gib) |
|:-------------:|:------:|:----:|:---------------:|:---------------:|:------------------:|:-----------------:|
| No log | 0 | 0 | 1.0392 | 98.27 | 98.07 | 99.43 |
| 0.1116 | 0.2559 | 38 | 0.1399 | 128.99 | 128.8 | 130.32 |
| 0.1148 | 0.5118 | 76 | 0.0879 | 128.99 | 128.8 | 130.32 |
| 0.0577 | 0.7677 | 114 | 0.0774 | 128.99 | 128.8 | 130.32 |
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.6.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
ManasMittal2005/Qwen-2.5-7B-secure-code
|
ManasMittal2005
| 2025-09-18T08:15:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T19:37:35Z |
---
library_name: transformers
model_name: Qwen-2.5-7B-secure-code
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for Qwen-2.5-7B-secure-code
This model is a fine-tuned version of a base model that is not recorded in the card metadata.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ManasMittal2005/Qwen-2.5-7B-secure-code", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/manas-mittal-iiit-hyderabad/clarifying-em/runs/b9vf38dj)
This model was trained with SFT.
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.2
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
alxzend/qwen2-7b-instruct-amazon-description
|
alxzend
| 2025-09-18T08:13:46Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-09-18T07:29:20Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-VL-7B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: qwen2-7b-instruct-amazon-description
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2-7b-instruct-amazon-description
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) on an unknown dataset.
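Because this repository contains a PEFT (LoRA) adapter rather than full model weights, the following is a minimal loading sketch; it assumes the adapter attaches to the public Qwen2-VL base model, and no inference example from the original training setup is available.
```python
from peft import PeftModel
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

base = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "alxzend/qwen2-7b-instruct-amazon-description")  # attach adapter
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
```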
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.13.0
- Transformers 4.45.1
- Pytorch 2.8.0+cu129
- Datasets 3.0.1
- Tokenizers 0.20.3
|
JoungheeKim/SenseVoiceSmallM5
|
JoungheeKim
| 2025-09-18T08:12:18Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2025-09-18T08:08:52Z |
---
license: artistic-2.0
---
|
hammh0a/Hala-1.2B-EN-AR-Translator
|
hammh0a
| 2025-09-18T08:07:59Z | 2 | 0 | null |
[
"safetensors",
"lfm2",
"ar",
"arxiv:2509.14008",
"base_model:LiquidAI/LFM2-1.2B",
"base_model:finetune:LiquidAI/LFM2-1.2B",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-09-13T15:52:49Z |
---
language:
- ar
base_model:
- LiquidAI/LFM2-1.2B
license: cc-by-nc-4.0
---
# Hala-1.2B-EN-AR-Translator
<p align="center">
<img src="https://i.ibb.co/pvhp1XfJ/halalogo.png" alt="Hala logo" width="450" />
</p>
**Paper**: *Hala Technical Report: Building Arabic-Centric Instruction & Translation Models at Scale*
**Authors**: Hasan Abed Al Kader Hammoud\*, Mohammad Zbeeb\*, Bernard Ghanem
**Affiliation**: King Abdullah University of Science and Technology (KAUST)
\*Equal contribution
---
## 📖 Overview
The **Hala-1.2B-EN-AR-Translator** is a lightweight translation model fine-tuned for **English → Arabic** translation, particularly in **instruction-style and conversational contexts**.
It powers the creation of the **Hala dataset** and can also be used as a standalone translator for research, dataset generation, or preprocessing tasks.
---
## 🔧 Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "hammh0a/Hala-1.2B-EN-AR-Translator"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id, torch_dtype="auto", device_map="auto"
)
pipe = pipeline("text-generation", model=model, tokenizer=tok)
# Example English text
text = "Physics is the study of matter, energy, and the interactions between them."
messages = [
{
"role": "user",
"content": "Translate everything that follows into Arabic:\n\n" + text,
}
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=256, do_sample=False)
print(out[0]["generated_text"])
```
---
## EN→AR Translation Quality on 500 Sampled MMLU Questions
| **System** | **BLEU ↑** | **ROUGE-L ↑** | **chrF++ ↑** |
|------------|------------|---------------|--------------|
| *Teacher translator* | | | |
| CohereLabs/command-a-translate-08-2025 (FP16) | 53.1 | 26.0 | 68.6 |
| **hammh0a/command-a-translate-FP8-Dynamic** | 53.5 (+0.3) | 26.0 (+0.0) | 68.9 (+0.3) |
| *Lightweight translator (LFM2-1.2B family)* | | | |
| LiquidAI/LFM2-1.2B (base) | 16.0 | 19.3 | 43.2 |
| **LFM2-1.2B Translator (ours)** | 48.2 (+32.1) | 25.1 (+5.9) | 64.2 (+21.0) |
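For orientation only, the sketch below shows one way scores of this kind could be computed with `sacrebleu`; the actual evaluation pipeline (sampling of the 500 MMLU questions, reference translations, ROUGE implementation) is not specified here, so treat this as an illustration rather than the report's exact setup.
```python
import sacrebleu

# Placeholder system outputs and reference translations (one sentence each).
hypotheses = ["الفيزياء هي دراسة المادة والطاقة والتفاعلات بينهما."]
references = [["الفيزياء هي دراسة المادة والطاقة والتفاعلات بينهما."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references, word_order=2)  # word_order=2 gives chrF++
print(f"BLEU: {bleu.score:.1f}  chrF++: {chrf.score:.1f}")
```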
---
## 📚 Citation
If you use **Hala-1.2B-EN-AR-Translator**, please cite:
Link: https://arxiv.org/abs/2509.14008
```bibtex
@misc{hammoud2025halatechnicalreportbuilding,
title={Hala Technical Report: Building Arabic-Centric Instruction & Translation Models at Scale},
author={Hasan Abed Al Kader Hammoud and Mohammad Zbeeb and Bernard Ghanem},
year={2025},
url={https://arxiv.org/abs/2509.14008},
}
```
|
inclusionAI/Ling-mini-base-2.0-20T
|
inclusionAI
| 2025-09-18T08:06:36Z | 36 | 7 | null |
[
"safetensors",
"bailing_moe",
"custom_code",
"arxiv:2507.17702",
"license:mit",
"region:us"
] | null | 2025-09-08T14:44:37Z |
---
license: mit
---
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
<p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>
## Introduction
Today, we are excited to announce the open-sourcing of __Ling 2.0__ — a family of MoE-based large language models that combine __SOTA performance__ with __high efficiency__.
The first released version, Ling-mini-2.0, is compact yet powerful. It has __16B total parameters__, but only __1.4B__ are activated per input token (non-embedding 789M). Trained on more than __20T tokens__ of high-quality data and enhanced through multi-stage supervised fine-tuning and reinforcement learning, Ling-mini-2.0 achieves remarkable improvements in complex reasoning and instruction following. With just 1.4B activated parameters, it still reaches the top-tier level of sub-10B dense LLMs and even matches or surpasses much larger MoE models.
<p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/2NKZS5LVXzcAAAAASBAAAAgADkZ7AQFr/fmt.webp" /></p>
### Strong General and Professional Reasoning
We evaluated Ling-mini-2.0 on challenging general reasoning tasks in coding (LiveCodeBench, CodeForces) and mathematics (AIME 2025, HMMT 2025), as well as knowledge-intensive reasoning tasks across multiple domains (MMLU-Pro, Humanity's Last Exam). Compared with sub-10B dense models (e.g., Qwen3-4B-instruct-2507, Qwen3-8B-nothinking) and larger-scale MoE models (Ernie-4.5-21B-A3B-PT, GPT-OSS-20B/low), Ling-mini-2.0 demonstrated outstanding overall reasoning capabilities.
### 7× Equivalent Dense Performance Leverage
Guided by [Ling Scaling Laws](https://arxiv.org/abs/2507.17702), Ling 2.0 adopts a __1/32 activation ratio__ MoE architecture, with empirically optimized design choices in expert granularity, shared expert ratio, attention ratio, aux-loss free + sigmoid routing strategy, MTP loss, QK-Norm, half RoPE, and more. This enables small-activation MoE models to achieve over __7× equivalent dense performance__. In other words, __Ling-mini-2.0 with only 1.4B activated parameters (non-embedding 789M) can deliver performance equivalent to a 7–8B dense model__.
### High-speed Generation at 300+ token/s
<p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/bnxIRaK9tzcAAAAAgSAAAAgADkZ7AQFr/original" /></p>
The highly sparse, small-activation MoE architecture also delivers significant training and inference efficiency. In simple QA scenarios (within 2000 tokens), __Ling-mini-2.0 generates at 300+ token/s (on H20 deployment)__, more than __2× faster__ than an 8B dense model. Ling-mini-2.0 can handle a __128K context length__ with YaRN; as the sequence length increases, the relative speedup can reach __over 7×__.
<p align="center"><img src="https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/figures/needle_in_a_haystack.webp" /></p>
### Open-sourced FP8 Efficient Training Solution
Ling 2.0 employs __FP8 mixed-precision training__ throughout. Compared with BF16, experiments with over 1T training tokens show nearly identical loss curves and downstream benchmark performance. To support the community in efficient continued pretraining and fine-tuning under limited compute, we are also open-sourcing our __FP8 training solution__. Based on tile/blockwise FP8 scaling, it further introduces FP8 optimizer, FP8 on-demand transpose weight, and FP8 padding routing map for extreme memory optimization. On 8/16/32 80G GPUs, compared with LLaMA 3.1 8B and Qwen3 8B, __Ling-mini-2.0 achieved 30–60% throughput gains with MTP enabled, and 90–120% throughput gains with MTP disabled__.
### A More Open Open-source Strategy
We believe Ling-mini-2.0 is an ideal starting point for MoE research. For the first time at this scale, it integrates 1/32 sparsity, MTP layers, and FP8 training — achieving both strong effectiveness and efficient training/inference performance, making it a prime candidate for the small-size LLM segment.
To further foster community research, in addition to releasing the post-trained version, we are also open-sourcing __five pretraining checkpoints__: the pre-finetuning Ling-mini-2.0-base, along with four base models trained on 5T, 10T, 15T, and 20T tokens, enabling deeper research and broader applications.
## Model Downloads
The table below lists the various stages of the Ling-mini-2.0 models (1.43B activated of 16.26B total params). If you are located in mainland China, we also provide the models on ModelScope.cn to speed up the download process.
<center>
| **Model** | **Context Length** | **Download** |
|:----------------------:| :----------------: |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| Ling-mini-base-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0) |
| Ling-mini-base-2.0-5T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-5T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-5T) |
| Ling-mini-base-2.0-10T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-10T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-10T) |
| Ling-mini-base-2.0-15T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-15T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-15T) |
| Ling-mini-base-2.0-20T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-20T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-20T) |
| Ling-mini-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-2.0) |
</center>
Note: If you are interested in previous versions, please visit the past model collections on [Huggingface](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
## Quickstart
### Convert to safetensors
Models with safetensors format can be downloaded from [HuggingFace](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
If you want to train and evaluate your own model, you can convert the DCP checkpoints produced by training:
```shell
python tools/convert_dcp_to_safe_tensors.py --checkpoint-path ${DCP_PATH} --target-path ${SAFETENSORS_PATH}
```
Currently, BF16 and FP8 formats are supported; choose one with a conversion flag:
- `--force-bf16` for BF16 format.
- `--force-fp8` for FP8 format.
### 🤗 Hugging Face Transformers
Here is a code snippet to show you how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "inclusionAI/Ling-mini-2.0"
model = AutoModelForCausalLM.from_pretrained(
model_name,
dtype="auto",
device_map="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### 🤖 ModelScope
If you're in mainland China, we strongly recommend using our model from 🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>.
## Deployment
### vLLM
vLLM supports offline batched inference or launching an OpenAI-Compatible API Service for online inference.
#### Environment Preparation
Since the Pull Request (PR) has not been submitted to the vLLM community at this stage, please prepare the environment by following the steps below:
```bash
git clone -b v0.10.0 https://github.com/vllm-project/vllm.git
cd vllm
wget https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/inference/vllm/bailing_moe_v2.patch
git apply bailing_moe_v2.patch
pip install -e .
```
#### Offline Inference:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ling-mini-2.0")
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)
llm = LLM(model="inclusionAI/Ling-mini-2.0", dtype='bfloat16')
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
outputs = llm.generate([text], sampling_params)
```
#### Online Inference:
```bash
vllm serve inclusionAI/Ling-mini-2.0 \
--tensor-parallel-size 2 \
--pipeline-parallel-size 1 \
--use-v2-block-manager \
--gpu-memory-utilization 0.90
```
To handle long context in vLLM using YaRN, we need to follow these two steps:
1. Add a `rope_scaling` field to the model's `config.json` file, for example:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.
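As an illustration only (not part of the original instructions), a long-context serve command could then look like the sketch below; the 131072 value is just the 32768-token base length scaled by the example factor of 4.0.
```bash
vllm serve inclusionAI/Ling-mini-2.0 \
    --tensor-parallel-size 2 \
    --pipeline-parallel-size 1 \
    --use-v2-block-manager \
    --gpu-memory-utilization 0.90 \
    --max-model-len 131072
```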
For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).
### SGLang
#### Environment Preparation
We will submit our model to an official SGLang release later; for now, prepare the environment with the following steps:
```shell
pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
```
You can use docker image as well:
```shell
docker pull lmsysorg/sglang:v0.5.2rc0-cu126
```
Then you should apply patch to sglang installation:
```shell
# patch command is needed, run `yum install -y patch` if needed
patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
```
#### Run Inference
SGLang now supports both BF16 and FP8 models; which is used depends on the dtype of the model in ${MODEL_PATH}. Both share the same commands below:
- Start server:
```shell
python -m sglang.launch_server \
--model-path $MODEL_PATH \
--host 0.0.0.0 --port $PORT \
--trust-remote-code \
--attention-backend fa3
```
MTP is supported for the base model but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN`
to the start command.
- Client:
```shell
curl -s http://localhost:${PORT}/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
"""
```
More usage can be found [here](https://docs.sglang.ai/basic_usage/send_request.html)
## Training
We also provide a complete and efficient training framework that covers both pre-training and fine-tuning. Based on this framework, continued training can be performed from the Ling-mini-2.0 checkpoint. With our training framework, the training throughput of Ling-mini-2.0 is significantly better than that of existing dense 8B models (Qwen3-8B, Llama3-8B).
### Pre-training
See the [pretraining demo](https://github.com/inclusionAI/Ling-V2/blob/main/docs/gpu_based_training.md) to continue pretraining Ling models.
#### Performance Benchmark
The table below shows the pre-training performance of several models, measured in **tokens per second** on 8, 16, and 32 80G GPUs. Ling-mini-2.0 achieves significantly higher training efficiency compared to the baseline, making it easier and more cost-effective to continue pre-training with our [demo scripts](https://github.com/inclusionAI/Ling-V2/blob/main/docs/gpu_based_training.md).
<center>
| **Model** | **8 x 80G GPUs (GBS=128)** | **16 x 80G GPUs (GBS=256)** | **32 x 80G GPUs (GBS=512)** |
|:-----------------------:| :--------------------: | :---------------------: | :---------------------: |
| LLaMA 3.1 8B (baseline) | 81222 | 161319 | 321403 |
| Qwen3 8B | 55775 (-31.33%) | 109799 (-31.94%) | 219943 (-31.57%) |
| Ling-mini-2.0 | 109532 (+34.86%) | 221585 (+37.36%) | 448726 (+39.61%) |
| Ling-mini-2.0 w/o MTP | 128298 (+57.96%) | 307264 (+90.47%) | 611466 (+90.25%) |
</center>
### Finetuning
We recommend using [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ling](https://github.com/inclusionAI/Ling-V2/blob/main/docs/llamafactory_finetuning.md). Alternatively, you can use [Megatron for finetuning](https://github.com/inclusionAI/Ling-V2/blob/main/docs/megatron_sft_training.md).
## License
This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ling-V2/blob/master/LICENCE).
## Citation
If you find our work helpful, feel free to cite us.
```
```
|
inclusionAI/Ling-mini-base-2.0-15T
|
inclusionAI
| 2025-09-18T08:06:19Z | 11 | 3 | null |
[
"safetensors",
"bailing_moe",
"custom_code",
"arxiv:2507.17702",
"license:mit",
"region:us"
] | null | 2025-09-08T14:44:19Z |
---
license: mit
---
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
<p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>
## Introduction
Today, we are excited to announce the open-sourcing of __Ling 2.0__ — a family of MoE-based large language models that combine __SOTA performance__ with __high efficiency__.
The first released version, Ling-mini-2.0, is compact yet powerful. It has __16B total parameters__, but only __1.4B__ are activated per input token (non-embedding 789M). Trained on more than __20T tokens__ of high-quality data and enhanced through multi-stage supervised fine-tuning and reinforcement learning, Ling-mini-2.0 achieves remarkable improvements in complex reasoning and instruction following. With just 1.4B activated parameters, it still reaches the top-tier level of sub-10B dense LLMs and even matches or surpasses much larger MoE models.
<p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/2NKZS5LVXzcAAAAASBAAAAgADkZ7AQFr/fmt.webp" /></p>
### Strong General and Professional Reasoning
We evaluated Ling-mini-2.0 on challenging general reasoning tasks in coding (LiveCodeBench, CodeForces) and mathematics (AIME 2025, HMMT 2025), as well as knowledge-intensive reasoning tasks across multiple domains (MMLU-Pro, Humanity's Last Exam). Compared with sub-10B dense models (e.g., Qwen3-4B-instruct-2507, Qwen3-8B-nothinking) and larger-scale MoE models (Ernie-4.5-21B-A3B-PT, GPT-OSS-20B/low), Ling-mini-2.0 demonstrated outstanding overall reasoning capabilities.
### 7× Equivalent Dense Performance Leverage
Guided by [Ling Scaling Laws](https://arxiv.org/abs/2507.17702), Ling 2.0 adopts a __1/32 activation ratio__ MoE architecture, with empirically optimized design choices in expert granularity, shared expert ratio, attention ratio, aux-loss free + sigmoid routing strategy, MTP loss, QK-Norm, half RoPE, and more. This enables small-activation MoE models to achieve over __7× equivalent dense performance__. In other words, __Ling-mini-2.0 with only 1.4B activated parameters (non-embedding 789M) can deliver performance equivalent to a 7–8B dense model__.
### High-speed Generation at 300+ token/s
<p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/bnxIRaK9tzcAAAAAgSAAAAgADkZ7AQFr/original" /></p>
The highly sparse, small-activation MoE architecture also delivers significant training and inference efficiency. In simple QA scenarios (within 2000 tokens), __Ling-mini-2.0 generates at 300+ token/s (on H20 deployment)__, more than __2× faster__ than an 8B dense model. Ling-mini-2.0 can handle a __128K context length__ with YaRN; as the sequence length increases, the relative speedup can reach __over 7×__.
<p align="center"><img src="https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/figures/needle_in_a_haystack.webp" /></p>
### Open-sourced FP8 Efficient Training Solution
Ling 2.0 employs __FP8 mixed-precision training__ throughout. Compared with BF16, experiments with over 1T training tokens show nearly identical loss curves and downstream benchmark performance. To support the community in efficient continued pretraining and fine-tuning under limited compute, we are also open-sourcing our __FP8 training solution__. Based on tile/blockwise FP8 scaling, it further introduces FP8 optimizer, FP8 on-demand transpose weight, and FP8 padding routing map for extreme memory optimization. On 8/16/32 80G GPUs, compared with LLaMA 3.1 8B and Qwen3 8B, __Ling-mini-2.0 achieved 30–60% throughput gains with MTP enabled, and 90–120% throughput gains with MTP disabled__.
### A More Open Open-source Strategy
We believe Ling-mini-2.0 is an ideal starting point for MoE research. For the first time at this scale, it integrates 1/32 sparsity, MTP layers, and FP8 training — achieving both strong effectiveness and efficient training/inference performance, making it a prime candidate for the small-size LLM segment.
To further foster community research, in addition to releasing the post-trained version, we are also open-sourcing __five pretraining checkpoints__: the pre-finetuning Ling-mini-2.0-base, along with four base models trained on 5T, 10T, 15T, and 20T tokens, enabling deeper research and broader applications.
## Model Downloads
The table below lists the various stages of the Ling-mini-2.0 models (1.43B activated of 16.26B total params). If you are located in mainland China, we also provide the models on ModelScope.cn to speed up the download process.
<center>
| **Model** | **Context Length** | **Download** |
|:----------------------:| :----------------: |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| Ling-mini-base-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0) |
| Ling-mini-base-2.0-5T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-5T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-5T) |
| Ling-mini-base-2.0-10T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-10T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-10T) |
| Ling-mini-base-2.0-15T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-15T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-15T) |
| Ling-mini-base-2.0-20T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-20T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-20T) |
| Ling-mini-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-2.0) |
</center>
Note: If you are interested in previous versions, please visit the past model collections on [Huggingface](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
## Quickstart
### Convert to safetensors
Models with safetensors format can be downloaded from [HuggingFace](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
If you want to train and evaluate your own model, you can convert the DCP checkpoints produced by training:
```shell
python tools/convert_dcp_to_safe_tensors.py --checkpoint-path ${DCP_PATH} --target-path ${SAFETENSORS_PATH}
```
Currently, BF16 and FP8 formats are supported; choose one with a conversion flag:
- `--force-bf16` for BF16 format.
- `--force-fp8` for FP8 format.
### 🤗 Hugging Face Transformers
Here is a code snippet to show you how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "inclusionAI/Ling-mini-2.0"
model = AutoModelForCausalLM.from_pretrained(
model_name,
dtype="auto",
device_map="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### 🤖 ModelScope
If you're in mainland China, we strongly recommend using our model from 🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>.
## Deployment
### vLLM
vLLM supports offline batched inference or launching an OpenAI-Compatible API Service for online inference.
#### Environment Preparation
Since the Pull Request (PR) has not been submitted to the vLLM community at this stage, please prepare the environment by following the steps below:
```bash
git clone -b v0.10.0 https://github.com/vllm-project/vllm.git
cd vllm
wget https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/inference/vllm/bailing_moe_v2.patch
git apply bailing_moe_v2.patch
pip install -e .
```
#### Offline Inference:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ling-mini-2.0")
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)
llm = LLM(model="inclusionAI/Ling-mini-2.0", dtype='bfloat16')
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
outputs = llm.generate([text], sampling_params)
```
#### Online Inference:
```bash
vllm serve inclusionAI/Ling-mini-2.0 \
--tensor-parallel-size 2 \
--pipeline-parallel-size 1 \
--use-v2-block-manager \
--gpu-memory-utilization 0.90
```
To handle long context in vLLM using YaRN, we need to follow these two steps:
1. Add a `rope_scaling` field to the model's `config.json` file, for example:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.
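As an illustration only (not part of the original instructions), a long-context serve command could then look like the sketch below; the 131072 value is just the 32768-token base length scaled by the example factor of 4.0.
```bash
vllm serve inclusionAI/Ling-mini-2.0 \
    --tensor-parallel-size 2 \
    --pipeline-parallel-size 1 \
    --use-v2-block-manager \
    --gpu-memory-utilization 0.90 \
    --max-model-len 131072
```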
For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).
### SGLang
#### Environment Preparation
We will submit our model to an official SGLang release later; for now, prepare the environment with the following steps:
```shell
pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
```
You can use docker image as well:
```shell
docker pull lmsysorg/sglang:v0.5.2rc0-cu126
```
Then you should apply patch to sglang installation:
```shell
# patch command is needed, run `yum install -y patch` if needed
patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
```
#### Run Inference
SGLang now supports both BF16 and FP8 models; which is used depends on the dtype of the model in ${MODEL_PATH}. Both share the same commands below:
- Start server:
```shell
python -m sglang.launch_server \
--model-path $MODEL_PATH \
--host 0.0.0.0 --port $PORT \
--trust-remote-code \
--attention-backend fa3
```
MTP is supported for the base model but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN`
to the start command.
- Client:
```shell
curl -s http://localhost:${PORT}/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
"""
```
More usage can be found [here](https://docs.sglang.ai/basic_usage/send_request.html)
## Training
We also provide a complete and efficient training framework that covers both pre-training and fine-tuning. Based on this framework, continued training can be performed from the Ling-mini-2.0 checkpoint. With our training framework, the training throughput of Ling-mini-2.0 is significantly better than that of existing dense 8B models (Qwen3-8B, Llama3-8B).
### Pre-training
See the [pretraining demo](https://github.com/inclusionAI/Ling-V2/blob/main/docs/gpu_based_training.md) to continue pretraining Ling models.
#### Performance Benchmark
The table below shows the pre-training performance of several models, measured in **tokens per second** on 8, 16, and 32 80G GPUs. Ling-mini-2.0 achieves significantly higher training efficiency compared to the baseline, making it easier and more cost-effective to continue pre-training with our [demo scripts](https://github.com/inclusionAI/Ling-V2/blob/main/docs/gpu_based_training.md).
<center>
| **Model** | **8 x 80G GPUs (GBS=128)** | **16 x 80G GPUs (GBS=256)** | **32 x 80G GPUs (GBS=512)** |
|:-----------------------:| :--------------------: | :---------------------: | :---------------------: |
| LLaMA 3.1 8B (baseline) | 81222 | 161319 | 321403 |
| Qwen3 8B | 55775 (-31.33%) | 109799 (-31.94%) | 219943 (-31.57%) |
| Ling-mini-2.0 | 109532 (+34.86%) | 221585 (+37.36%) | 448726 (+39.61%) |
| Ling-mini-2.0 w/o MTP | 128298 (+57.96%) | 307264 (+90.47%) | 611466 (+90.25%) |
</center>
### Finetuning
We recommend using [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ling](https://github.com/inclusionAI/Ling-V2/blob/main/docs/llamafactory_finetuning.md). Alternatively, you can use [Megatron for finetuning](https://github.com/inclusionAI/Ling-V2/blob/main/docs/megatron_sft_training.md).
## License
This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ling-V2/blob/master/LICENCE).
## Citation
If you find our work helpful, feel free to cite us.
```
```
|
inclusionAI/Ling-mini-base-2.0-10T
|
inclusionAI
| 2025-09-18T08:06:06Z | 13 | 5 | null |
[
"safetensors",
"bailing_moe",
"custom_code",
"arxiv:2507.17702",
"license:mit",
"region:us"
] | null | 2025-09-08T14:43:57Z |
---
license: mit
---
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
<p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>
## Introduction
Today, we are excited to announce the open-sourcing of __Ling 2.0__ — a family of MoE-based large language models that combine __SOTA performance__ with __high efficiency__.
The first released version, Ling-mini-2.0, is compact yet powerful. It has __16B total parameters__, but only __1.4B__ are activated per input token (non-embedding 789M). Trained on more than __20T tokens__ of high-quality data and enhanced through multi-stage supervised fine-tuning and reinforcement learning, Ling-mini-2.0 achieves remarkable improvements in complex reasoning and instruction following. With just 1.4B activated parameters, it still reaches the top-tier level of sub-10B dense LLMs and even matches or surpasses much larger MoE models.
<p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/2NKZS5LVXzcAAAAASBAAAAgADkZ7AQFr/fmt.webp" /></p>
### Strong General and Professional Reasoning
We evaluated Ling-mini-2.0 on challenging general reasoning tasks in coding (LiveCodeBench, CodeForces) and mathematics (AIME 2025, HMMT 2025), as well as knowledge-intensive reasoning tasks across multiple domains (MMLU-Pro, Humanity's Last Exam). Compared with sub-10B dense models (e.g., Qwen3-4B-instruct-2507, Qwen3-8B-nothinking) and larger-scale MoE models (Ernie-4.5-21B-A3B-PT, GPT-OSS-20B/low), Ling-mini-2.0 demonstrated outstanding overall reasoning capabilities.
### 7× Equivalent Dense Performance Leverage
Guided by [Ling Scaling Laws](https://arxiv.org/abs/2507.17702), Ling 2.0 adopts a __1/32 activation ratio__ MoE architecture, with empirically optimized design choices in expert granularity, shared expert ratio, attention ratio, aux-loss free + sigmoid routing strategy, MTP loss, QK-Norm, half RoPE, and more. This enables small-activation MoE models to achieve over __7× equivalent dense performance__. In other words, __Ling-mini-2.0 with only 1.4B activated parameters (non-embedding 789M) can deliver performance equivalent to a 7–8B dense model__.
### High-speed Generation at 300+ token/s
<p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/bnxIRaK9tzcAAAAAgSAAAAgADkZ7AQFr/original" /></p>
The highly sparse, small-activation MoE architecture also delivers significant training and inference efficiency. In simple QA scenarios (within 2000 tokens), __Ling-mini-2.0 generates at 300+ token/s (on H20 deployment)__, more than __2× faster__ than an 8B dense model. Ling-mini-2.0 can handle a __128K context length__ with YaRN; as the sequence length increases, the relative speedup can reach __over 7×__.
<p align="center"><img src="https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/figures/needle_in_a_haystack.webp" /></p>
### Open-sourced FP8 Efficient Training Solution
Ling 2.0 employs __FP8 mixed-precision training__ throughout. Compared with BF16, experiments with over 1T training tokens show nearly identical loss curves and downstream benchmark performance. To support the community in efficient continued pretraining and fine-tuning under limited compute, we are also open-sourcing our __FP8 training solution__. Based on tile/blockwise FP8 scaling, it further introduces FP8 optimizer, FP8 on-demand transpose weight, and FP8 padding routing map for extreme memory optimization. On 8/16/32 80G GPUs, compared with LLaMA 3.1 8B and Qwen3 8B, __Ling-mini-2.0 achieved 30–60% throughput gains with MTP enabled, and 90–120% throughput gains with MTP disabled__.
### A More Open Open-source Strategy
We believe Ling-mini-2.0 is an ideal starting point for MoE research. For the first time at this scale, it integrates 1/32 sparsity, MTP layers, and FP8 training — achieving both strong effectiveness and efficient training/inference performance, making it a prime candidate for the small-size LLM segment.
To further foster community research, in addition to releasing the post-trained version, we are also open-sourcing __five pretraining checkpoints__: the pre-finetuning Ling-mini-2.0-base, along with four base models trained on 5T, 10T, 15T, and 20T tokens, enabling deeper research and broader applications.
## Model Downloads
The table below lists the various stages of the Ling-mini-2.0 models (1.43B activated of 16.26B total params). If you are located in mainland China, we also provide the models on ModelScope.cn to speed up the download process.
<center>
| **Model** | **Context Length** | **Download** |
|:----------------------:| :----------------: |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| Ling-mini-base-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0) |
| Ling-mini-base-2.0-5T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-5T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-5T) |
| Ling-mini-base-2.0-10T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-10T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-10T) |
| Ling-mini-base-2.0-15T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-15T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-15T) |
| Ling-mini-base-2.0-20T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-20T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-20T) |
| Ling-mini-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-2.0) |
</center>
Note: If you are interested in previous versions, please visit the past model collections on [Hugging Face](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
## Quickstart
### Convert to safetensors
Models with safetensors format can be downloaded from [HuggingFace](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
If you want to train your model and evaluate it, you can convert the DCP checkpoints produced by training into safetensors:
```shell
python tools/convert_dcp_to_safe_tensors.py --checkpoint-path ${DCP_PATH} --target-path ${SAFETENSORS_PATH}
```
Currently, BF16 and FP8 formats are supported; you can select the output format with a conversion flag (see the example after this list):
- `--force-bf16` for BF16 format.
- `--force-fp8` for FP8 format.
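As a minimal sketch (the checkpoint and output paths below are placeholders), converting a DCP checkpoint to BF16 safetensors could look like this:
```shell
# Hypothetical paths; replace with your own training output and target directory.
DCP_PATH=./output/checkpoints/iter_0005000
SAFETENSORS_PATH=./converted/ling-mini-2.0-bf16

python tools/convert_dcp_to_safe_tensors.py \
  --checkpoint-path ${DCP_PATH} \
  --target-path ${SAFETENSORS_PATH} \
  --force-bf16  # use --force-fp8 instead to produce an FP8 checkpoint
```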
### 🤗 Hugging Face Transformers
Here is a code snippet to show you how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "inclusionAI/Ling-mini-2.0"
model = AutoModelForCausalLM.from_pretrained(
model_name,
dtype="auto",
device_map="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### 🤖 ModelScope
If you're in mainland China, we strongly recommend you to use our model from 🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>.
## Deployment
### vLLM
vLLM supports offline batched inference or launching an OpenAI-Compatible API Service for online inference.
#### Environment Preparation
Since the Pull Request (PR) has not been submitted to the vLLM community at this stage, please prepare the environment by following the steps below:
```bash
git clone -b v0.10.0 https://github.com/vllm-project/vllm.git
cd vllm
wget https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/inference/vllm/bailing_moe_v2.patch
git apply bailing_moe_v2.patch
pip install -e .
```
#### Offline Inference:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ling-mini-2.0")
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)
llm = LLM(model="inclusionAI/Ling-mini-2.0", dtype='bfloat16')
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
outputs = llm.generate([text], sampling_params)
```
#### Online Inference:
```bash
vllm serve inclusionAI/Ling-mini-2.0 \
--tensor-parallel-size 2 \
--pipeline-parallel-size 1 \
--use-v2-block-manager \
--gpu-memory-utilization 0.90
```
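Once the server is running, you can query the OpenAI-compatible chat endpoint. A minimal sketch, assuming the default port 8000 and the model name used above:
```bash
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "inclusionAI/Ling-mini-2.0",
        "messages": [{"role": "user", "content": "Give me a short introduction to large language models."}],
        "max_tokens": 256
      }'
```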
To handle long context in vLLM using YaRN, we need to follow these two steps:
1. Add a `rope_scaling` field to the model's `config.json` file, for example:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.
For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).
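For example, with the `rope_scaling` configuration above (factor 4.0 over a 32K base), a 128K-context launch could look like the following sketch; the parallelism and memory settings are taken from the serve command above and may need adjustment for your hardware:
```bash
vllm serve inclusionAI/Ling-mini-2.0 \
    --tensor-parallel-size 2 \
    --gpu-memory-utilization 0.90 \
    --max-model-len 131072
```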
### SGLang
#### Environment Preparation
We will submit our model to the official SGLang release later; for now, you can prepare the environment with the following steps:
```shell
pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
```
You can use docker image as well:
```shell
docker pull lmsysorg/sglang:v0.5.2rc0-cu126
```
Then apply the patch to your SGLang installation:
```shell
# patch command is needed, run `yum install -y patch` if needed
patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
```
#### Run Inference
Both BF16 and FP8 models are now supported by SGLang; which one is used depends on the dtype of the model in ${MODEL_PATH}. Both share the same launch command:
- Start server:
```shell
python -m sglang.launch_server \
  --model-path $MODEL_PATH \
--host 0.0.0.0 --port $PORT \
--trust-remote-code \
--attention-backend fa3
```
MTP is supported for the base model, but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN` to the start command.
- Client:
```shell
curl -s http://localhost:${PORT}/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
"""
```
More usage can be found [here](https://docs.sglang.ai/basic_usage/send_request.html)
## Training
We also provide a complete and efficient training framework that covers both pre-training and fine-tuning. Based on this framework, continued training can be performed from the Ling-mini-2.0 checkpoint. With our training framework, the training throughput of Ling-mini-2.0 is significantly better than that of existing dense 8B models (Qwen3-8B, Llama3-8B).
### Pre-training
See the [pretraining demo](https://github.com/inclusionAI/Ling-V2/blob/main/docs/gpu_based_training.md) for how to continue pretraining Ling models.
#### Performance Benchmark
The table below shows the pre-training performance of several models, measured in **tokens per second** on 8, 16, and 32 80G GPUs. Ling-mini-2.0 achieves significantly higher training efficiency compared to the baseline, making it easier and more cost-effective to continue pre-training with our [demo scripts](https://github.com/inclusionAI/Ling-V2/blob/main/docs/gpu_based_training.md).
<center>
| **Model** | **8 x 80G GPUs (GBS=128)** | **16 x 80G GPUs (GBS=256)** | **32 x 80G GPUs (GBS=512)** |
|:-----------------------:| :--------------------: | :---------------------: | :---------------------: |
| LLaMA 3.1 8B (baseline) | 81222 | 161319 | 321403 |
| Qwen3 8B | 55775 (-31.33%) | 109799 (-31.94%) | 219943 (-31.57%) |
| Ling-mini-2.0 | 109532 (+34.86%) | 221585 (+37.36%) | 448726 (+39.61%) |
| Ling-mini-2.0 w/o MTP | 128298 (+57.96%) | 307264 (+90.47%) | 611466 (+90.25%) |
</center>
### Finetuning
We recommend using [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ling](https://github.com/inclusionAI/Ling-V2/blob/main/docs/llamafactory_finetuning.md). Alternatively, you can use [Megatron for finetuning](https://github.com/inclusionAI/Ling-V2/blob/main/docs/megatron_sft_training.md).
## License
This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ling-V2/blob/master/LICENCE).
## Citation
If you find our work helpful, feel free to cite us.
```
```
|
inclusionAI/Ling-mini-base-2.0
|
inclusionAI
| 2025-09-18T08:04:00Z | 57 | 16 | null |
[
"safetensors",
"bailing_moe",
"custom_code",
"arxiv:2507.17702",
"license:mit",
"region:us"
] | null | 2025-09-08T14:42:49Z |
---
license: mit
---
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>
## Introduction
Today, we are excited to announce the open-sourcing of __Ling 2.0__ — a family of MoE-based large language models that combine __SOTA performance__ with __high efficiency__.
The first released version, Ling-mini-2.0, is compact yet powerful. It has __16B total parameters__, but only __1.4B__ are activated per input token (non-embedding 789M). Trained on more than __20T tokens__ of high-quality data and enhanced through multi-stage supervised fine-tuning and reinforcement learning, Ling-mini-2.0 achieves remarkable improvements in complex reasoning and instruction following. With just 1.4B activated parameters, it still reaches the top-tier level of sub-10B dense LLMs and even matches or surpasses much larger MoE models.
<p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/2NKZS5LVXzcAAAAASBAAAAgADkZ7AQFr/fmt.webp" /></p>
### Strong General and Professional Reasoning
We evaluated Ling-mini-2.0 on challenging general reasoning tasks in coding (LiveCodeBench, CodeForces) and mathematics (AIME 2025, HMMT 2025), as well as knowledge-intensive reasoning tasks across multiple domains (MMLU-Pro, Humanity's Last Exam). Compared with sub-10B dense models (e.g., Qwen3-4B-instruct-2507, Qwen3-8B-nothinking) and larger-scale MoE models (Ernie-4.5-21B-A3B-PT, GPT-OSS-20B/low), Ling-mini-2.0 demonstrated outstanding overall reasoning capabilities.
### 7× Equivalent Dense Performance Leverage
Guided by [Ling Scaling Laws](https://arxiv.org/abs/2507.17702), Ling 2.0 adopts a __1/32 activation ratio__ MoE architecture, with empirically optimized design choices in expert granularity, shared expert ratio, attention ratio, aux-loss free + sigmoid routing strategy, MTP loss, QK-Norm, half RoPE, and more. This enables small-activation MoE models to achieve over __7× equivalent dense performance__. In other words, __Ling-mini-2.0 with only 1.4B activated parameters (non-embedding 789M) can deliver performance equivalent to a 7–8B dense model__.
### High-speed Generation at 300+ token/s
<p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/bnxIRaK9tzcAAAAAgSAAAAgADkZ7AQFr/original" /></p>
The highly sparse, small-activation MoE architecture also delivers significant training and inference efficiency. In simple QA scenarios (within 2000 tokens), __Ling-mini-2.0 generates at 300+ token/s (on H20 deployment)__ — more than __2× faster__ than an 8B dense model. Ling-mini-2.0 can handle a __128K context length__ with YaRN; as sequence length increases, the relative speedup can reach __over 7×__.
<p align="center"><img src="https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/figures/needle_in_a_haystack.webp" /></p>
### Open-sourced FP8 Efficient Training Solution
Ling 2.0 employs __FP8 mixed-precision training__ throughout. Compared with BF16, experiments with over 1T training tokens show nearly identical loss curves and downstream benchmark performance. To support the community in efficient continued pretraining and fine-tuning under limited compute, we are also open-sourcing our __FP8 training solution__. Based on tile/blockwise FP8 scaling, it further introduces FP8 optimizer, FP8 on-demand transpose weight, and FP8 padding routing map for extreme memory optimization. On 8/16/32 80G GPUs, compared with LLaMA 3.1 8B and Qwen3 8B, __Ling-mini-2.0 achieved 30–60% throughput gains with MTP enabled, and 90–120% throughput gains with MTP disabled__.
### A More Open Open-Source Strategy
We believe Ling-mini-2.0 is an ideal starting point for MoE research. For the first time at this scale, it integrates 1/32 sparsity, MTP layers, and FP8 training — achieving both strong effectiveness and efficient training/inference performance, making it a prime candidate for the small-size LLM segment.
To further foster community research, in addition to releasing the post-trained version, we are also open-sourcing __five pretraining checkpoints__: the pre-finetuning Ling-mini-2.0-base, along with four base models trained on 5T, 10T, 15T, and 20T tokens, enabling deeper research and broader applications.
## Model Downloads
The table below lists the various stages of the Ling-mini-2.0 models (1.43B activated of 16.26B total params). If you are located in mainland China, we also provide the models on ModelScope.cn to speed up the download process.
<center>
| **Model** | **Context Length** | **Download** |
|:----------------------:| :----------------: |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| Ling-mini-base-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0) |
| Ling-mini-base-2.0-5T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-5T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-5T) |
| Ling-mini-base-2.0-10T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-10T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-10T) |
| Ling-mini-base-2.0-15T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-15T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-15T) |
| Ling-mini-base-2.0-20T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-20T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-20T) |
| Ling-mini-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-2.0) |
</center>
Note: If you are interested in previous versions, please visit the past model collections on [Hugging Face](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
## Quickstart
### Convert to safetensors
Models with safetensors format can be downloaded from [HuggingFace](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
If you want to train your model and evaluate it, you can convert the DCP checkpoints produced by training into safetensors:
```shell
python tools/convert_dcp_to_safe_tensors.py --checkpoint-path ${DCP_PATH} --target-path ${SAFETENSORS_PATH}
```
Currently, BF16 and FP8 formats are supported; you can select the output format with a conversion flag:
- `--force-bf16` for BF16 format.
- `--force-fp8` for FP8 format.
### 🤗 Hugging Face Transformers
Here is a code snippet to show you how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "inclusionAI/Ling-mini-2.0"
model = AutoModelForCausalLM.from_pretrained(
model_name,
dtype="auto",
device_map="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### 🤖 ModelScope
If you're in mainland China, we strongly recommend you to use our model from 🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>.
## Deployment
### vLLM
vLLM supports offline batched inference or launching an OpenAI-Compatible API Service for online inference.
#### Environment Preparation
Since the Pull Request (PR) has not been submitted to the vLLM community at this stage, please prepare the environment by following the steps below:
```bash
git clone -b v0.10.0 https://github.com/vllm-project/vllm.git
cd vllm
wget https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/inference/vllm/bailing_moe_v2.patch
git apply bailing_moe_v2.patch
pip install -e .
```
#### Offline Inference:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ling-mini-2.0")
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)
llm = LLM(model="inclusionAI/Ling-mini-2.0", dtype='bfloat16')
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
outputs = llm.generate([text], sampling_params)
```
#### Online Inference:
```bash
vllm serve inclusionAI/Ling-mini-2.0 \
--tensor-parallel-size 2 \
--pipeline-parallel-size 1 \
--use-v2-block-manager \
--gpu-memory-utilization 0.90
```
To handle long context in vLLM using YaRN, we need to follow these two steps:
1. Add a `rope_scaling` field to the model's `config.json` file, for example:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.
For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).
### SGLang
#### Environment Preparation
We will submit our model to the official SGLang release later; for now, you can prepare the environment with the following steps:
```shell
pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
```
You can use docker image as well:
```shell
docker pull lmsysorg/sglang:v0.5.2rc0-cu126
```
Then apply the patch to your SGLang installation:
```shell
# patch command is needed, run `yum install -y patch` if needed
patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
```
#### Run Inference
Both BF16 and FP8 models are now supported by SGLang; which one is used depends on the dtype of the model in ${MODEL_PATH}. Both share the same launch command:
- Start server:
```shell
python -m sglang.launch_server \
  --model-path $MODEL_PATH \
--host 0.0.0.0 --port $PORT \
--trust-remote-code \
--attention-backend fa3
```
MTP is supported for the base model, but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN` to the start command.
- Client:
```shell
curl -s http://localhost:${PORT}/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
"""
```
More usage can be found [here](https://docs.sglang.ai/basic_usage/send_request.html)
## Training
We also provide a complete and efficient training framework that covers both pre-training and fine-tuning. Based on this framework, continued training can be performed from the Ling-mini-2.0 checkpoint. With our training framework, the training throughput of Ling-mini-2.0 is significantly better than that of existing dense 8B models (Qwen3-8B, Llama3-8B).
### Pre-training
See the [pretraining demo](https://github.com/inclusionAI/Ling-V2/blob/main/docs/gpu_based_training.md) for how to continue pretraining Ling models.
#### Performance Benchmark
The table below shows the pre-training performance of several models, measured in **tokens per second** on 8, 16, and 32 80G GPUs. Ling-mini-2.0 achieves significantly higher training efficiency compared to the baseline, making it easier and more cost-effective to continue pre-training with our [demo scripts](https://github.com/inclusionAI/Ling-V2/blob/main/docs/gpu_based_training.md).
<center>
| **Model** | **8 x 80G GPUs (GBS=128)** | **16 x 80G GPUs (GBS=256)** | **32 x 80G GPUs (GBS=512)** |
|:-----------------------:| :--------------------: | :---------------------: | :---------------------: |
| LLaMA 3.1 8B (baseline) | 81222 | 161319 | 321403 |
| Qwen3 8B | 55775 (-31.33%) | 109799 (-31.94%) | 219943 (-31.57%) |
| Ling-mini-2.0 | 109532 (+34.86%) | 221585 (+37.36%) | 448726 (+39.61%) |
| Ling-mini-2.0 w/o MTP | 128298 (+57.96%) | 307264 (+90.47%) | 611466 (+90.25%) |
</center>
### Finetuning
We recommend using [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ling](https://github.com/inclusionAI/Ling-V2/blob/main/docs/llamafactory_finetuning.md). Alternatively, you can use [Megatron for finetuning](https://github.com/inclusionAI/Ling-V2/blob/main/docs/megatron_sft_training.md).
## License
This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ling-V2/blob/master/LICENCE).
## Citation
If you find our work helpful, feel free to cite us.
```
```
|
inclusionAI/Ring-mini-2.0
|
inclusionAI
| 2025-09-18T08:03:01Z | 814 | 136 |
transformers
|
[
"transformers",
"safetensors",
"bailing_moe",
"text-generation",
"conversational",
"custom_code",
"base_model:inclusionAI/Ling-mini-base-2.0-20T",
"base_model:finetune:inclusionAI/Ling-mini-base-2.0-20T",
"license:mit",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-09-08T14:45:03Z |
---
license: mit
base_model:
- inclusionAI/Ling-mini-base-2.0-20T
pipeline_tag: text-generation
library_name: transformers
---
# Ring-mini-2.0
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>
Today, we officially release Ring-mini-2.0 — a high-performance inference-oriented MoE model deeply optimized based on the Ling 2.0 architecture. With only 16B total parameters and 1.4B activated parameters, it achieves comprehensive reasoning capabilities comparable to dense models below the 10B scale. It excels particularly in logical reasoning, code generation, and mathematical tasks, while supporting 128K long-context processing and 300+ tokens/s high-speed generation.
## Enhanced Reasoning: Joint Training with SFT + RLVR + RLHF
Built upon Ling-mini-2.0-base, Ring-mini-2.0 undergoes further training with Long-CoT SFT, more stable and continuous RLVR, and RLHF joint optimization, significantly improving the stability and generalization of complex reasoning. On multiple challenging benchmarks (LiveCodeBench, AIME 2025, GPQA, ARC-AGI-v1, etc.), it outperforms dense models below 10B and even rivals larger MoE models (e.g., gpt-oss-20B-medium) with comparable output lengths, particularly excelling in logical reasoning.
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_d2byvp/afts/img/O2YKQqkdEvAAAAAASzAAAAgADod9AQFr/original" width="1000"/>
</p>
## High Sparsity, High-Speed Generation
Inheriting the efficient MoE design of the Ling 2.0 series, Ring-mini-2.0 activates only 1.4B parameters and achieves performance equivalent to 7–8B dense models through architectural optimizations such as 1/32 expert activation ratio and MTP layers. Thanks to its low activation and high sparsity design, Ring-mini-2.0 delivers a throughput of 300+ tokens/s when deployed on H20. With Expert Dual Streaming inference optimization, this can be further boosted to 500+ tokens/s, significantly reducing inference costs for high-concurrency scenarios involving thinking models. Additionally, with YaRN extrapolation, it supports 128K long-context processing, achieving a relative speedup of up to 7x in long-output scenarios.
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_d2byvp/afts/img/gjJKSpFVphEAAAAAgdAAAAgADod9AQFr/original" width="1000"/>
</p>
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_d2byvp/afts/img/o-vGQadCF_4AAAAAgLAAAAgADod9AQFr/original" width="1000"/>
</p>
## Model Downloads
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ring-mini-2.0 | 16.8B | 1.4B | 128K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-mini-2.0) <br>[🤖 Modelscope](https://modelscope.cn/models/inclusionAI/Ring-mini-2.0)|
</div>
## Quickstart
### 🤗 Hugging Face Transformers
Here is a code snippet to show you how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "inclusionAI/Ring-mini-2.0"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ring, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=8192
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## License
This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ring-mini-2.0/blob/main/LICENSE).
## Citation
TODO
|
inclusionAI/Ling-mini-2.0
|
inclusionAI
| 2025-09-18T08:03:00Z | 2,690 | 129 |
transformers
|
[
"transformers",
"safetensors",
"bailing_moe",
"text-generation",
"conversational",
"custom_code",
"arxiv:2507.17702",
"base_model:inclusionAI/Ling-mini-base-2.0",
"base_model:finetune:inclusionAI/Ling-mini-base-2.0",
"license:mit",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-09-08T13:43:44Z |
---
license: mit
base_model:
- inclusionAI/Ling-mini-base-2.0
pipeline_tag: text-generation
library_name: transformers
---
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>
## Introduction
Today, we are excited to announce the open-sourcing of __Ling 2.0__ — a family of MoE-based large language models that combine __SOTA performance__ with __high efficiency__.
The first released version, Ling-mini-2.0, is compact yet powerful. It has __16B total parameters__, but only __1.4B__ are activated per input token (non-embedding 789M). Trained on more than __20T tokens__ of high-quality data and enhanced through multi-stage supervised fine-tuning and reinforcement learning, Ling-mini-2.0 achieves remarkable improvements in complex reasoning and instruction following. With just 1.4B activated parameters, it still reaches the top-tier level of sub-10B dense LLMs and even matches or surpasses much larger MoE models.
<p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/2NKZS5LVXzcAAAAASBAAAAgADkZ7AQFr/fmt.webp" /></p>
### Strong General and Professional Reasoning
We evaluated Ling-mini-2.0 on challenging general reasoning tasks in coding (LiveCodeBench, CodeForces) and mathematics (AIME 2025, HMMT 2025), as well as knowledge-intensive reasoning tasks across multiple domains (MMLU-Pro, Humanity's Last Exam). Compared with sub-10B dense models (e.g., Qwen3-4B-instruct-2507, Qwen3-8B-nothinking) and larger-scale MoE models (Ernie-4.5-21B-A3B-PT, GPT-OSS-20B/low), Ling-mini-2.0 demonstrated outstanding overall reasoning capabilities.
### 7× Equivalent Dense Performance Leverage
Guided by [Ling Scaling Laws](https://arxiv.org/abs/2507.17702), Ling 2.0 adopts a __1/32 activation ratio__ MoE architecture, with empirically optimized design choices in expert granularity, shared expert ratio, attention ratio, aux-loss free + sigmoid routing strategy, MTP loss, QK-Norm, half RoPE, and more. This enables small-activation MoE models to achieve over __7× equivalent dense performance__. In other words, __Ling-mini-2.0 with only 1.4B activated parameters (non-embedding 789M) can deliver performance equivalent to a 7–8B dense model__.
### High-speed Generation at 300+ token/s
<p align="center"><img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/bnxIRaK9tzcAAAAAgSAAAAgADkZ7AQFr/original" /></p>
The highly sparse, small-activation MoE architecture also delivers significant training and inference efficiency. In simple QA scenarios (within 2000 tokens), __Ling-mini-2.0 generates at 300+ token/s (on H20 deployment)__ — more than __2× faster__ than an 8B dense model. Ling-mini-2.0 can handle a __128K context length__ with YaRN; as sequence length increases, the relative speedup can reach __over 7×__.
<p align="center"><img src="https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/figures/needle_in_a_haystack.webp" /></p>
### Open-sourced FP8 Efficient Training Solution
Ling 2.0 employs __FP8 mixed-precision training__ throughout. Compared with BF16, experiments with over 1T training tokens show nearly identical loss curves and downstream benchmark performance. To support the community in efficient continued pretraining and fine-tuning under limited compute, we are also open-sourcing our __FP8 training solution__. Based on tile/blockwise FP8 scaling, it further introduces FP8 optimizer, FP8 on-demand transpose weight, and FP8 padding routing map for extreme memory optimization. On 8/16/32 80G GPUs, compared with LLaMA 3.1 8B and Qwen3 8B, __Ling-mini-2.0 achieved 30–60% throughput gains with MTP enabled, and 90–120% throughput gains with MTP disabled__.
### A More Open Open-Source Strategy
We believe Ling-mini-2.0 is an ideal starting point for MoE research. For the first time at this scale, it integrates 1/32 sparsity, MTP layers, and FP8 training — achieving both strong effectiveness and efficient training/inference performance, making it a prime candidate for the small-size LLM segment.
To further foster community research, in addition to releasing the post-trained version, we are also open-sourcing __five pretraining checkpoints__: the pre-finetuning Ling-mini-2.0-base, along with four base models trained on 5T, 10T, 15T, and 20T tokens, enabling deeper research and broader applications.
## Model Downloads
The table below lists the various stages of the Ling-mini-2.0 models (1.43B activated of 16.26B total params). If you are located in mainland China, we also provide the models on ModelScope.cn to speed up the download process.
<center>
| **Model** | **Context Length** | **Download** |
|:----------------------:| :----------------: |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| Ling-mini-base-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0) |
| Ling-mini-base-2.0-5T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-5T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-5T) |
| Ling-mini-base-2.0-10T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-10T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-10T) |
| Ling-mini-base-2.0-15T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-15T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-15T) |
| Ling-mini-base-2.0-20T | 4K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-base-2.0-20T) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-base-2.0-20T) |
| Ling-mini-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-mini-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-mini-2.0) |
</center>
Note: If you are interested in previous versions, please visit the past model collections on [Hugging Face](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
## Quickstart
### Convert to safetensors
Models with safetensors format can be downloaded from [HuggingFace](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
If you want to train your model and evaluate it, you can convert the DCP checkpoints produced by training into safetensors:
```shell
python tools/convert_dcp_to_safe_tensors.py --checkpoint-path ${DCP_PATH} --target-path ${SAFETENSORS_PATH}
```
Currently, BF16 and FP8 formats are supported; you can select the output format with a conversion flag:
- `--force-bf16` for BF16 format.
- `--force-fp8` for FP8 format.
### 🤗 Hugging Face Transformers
Here is a code snippet to show you how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "inclusionAI/Ling-mini-2.0"
model = AutoModelForCausalLM.from_pretrained(
model_name,
dtype="auto",
device_map="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### 🤖 ModelScope
If you're in mainland China, we strongly recommend you to use our model from 🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>.
## Deployment
### vLLM
vLLM supports offline batched inference or launching an OpenAI-Compatible API Service for online inference.
#### Environment Preparation
Since the Pull Request (PR) has not been submitted to the vLLM community at this stage, please prepare the environment by following the steps below:
```bash
git clone -b v0.10.0 https://github.com/vllm-project/vllm.git
cd vllm
wget https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/inference/vllm/bailing_moe_v2.patch
git apply bailing_moe_v2.patch
pip install -e .
```
#### Offline Inference:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ling-mini-2.0")
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)
llm = LLM(model="inclusionAI/Ling-mini-2.0", dtype='bfloat16')
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
outputs = llm.generate([text], sampling_params)
```
#### Online Inference:
```bash
vllm serve inclusionAI/Ling-mini-2.0 \
--tensor-parallel-size 2 \
--pipeline-parallel-size 1 \
--use-v2-block-manager \
--gpu-memory-utilization 0.90
```
To handle long context in vLLM using YaRN, we need to follow these two steps:
1. Add a `rope_scaling` field to the model's `config.json` file, for example:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.
For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).
### SGLang
#### Environment Preparation
We will submit our model to the official SGLang release later; for now, you can prepare the environment with the following steps:
```shell
pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
```
You can use docker image as well:
```shell
docker pull lmsysorg/sglang:v0.5.2rc0-cu126
```
Then apply the patch to your SGLang installation:
```shell
# patch command is needed, run `yum install -y patch` if needed
patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
```
#### Run Inference
Both BF16 and FP8 models are now supported by SGLang; which one is used depends on the dtype of the model in ${MODEL_PATH}. Both share the same launch command:
- Start server:
```shell
python -m sglang.launch_server \
  --model-path $MODEL_PATH \
--host 0.0.0.0 --port $PORT \
--trust-remote-code \
--attention-backend fa3
```
MTP is supported for the base model, but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN` to the start command.
- Client:
```shell
curl -s http://localhost:${PORT}/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
"""
```
More usage can be found [here](https://docs.sglang.ai/basic_usage/send_request.html)
## Training
We also provide a complete and efficient training framework that covers both pre-training and fine-tuning. Based on this framework, continued training can be performed from the Ling-mini-2.0 checkpoint. With our training framework, the training throughput of Ling-mini-2.0 is significantly better than that of existing dense 8B models (Qwen3-8B, Llama3-8B).
### Pre-training
See the [pretraining demo](https://github.com/inclusionAI/Ling-V2/blob/main/docs/gpu_based_training.md) for how to continue pretraining Ling models.
#### Performance Benchmark
The table below shows the pre-training performance of several models, measured in **tokens per second** on 8, 16, and 32 80G GPUs. Ling-mini-2.0 achieves significantly higher training efficiency compared to the baseline, making it easier and more cost-effective to continue pre-training with our [demo scripts](https://github.com/inclusionAI/Ling-V2/blob/main/docs/gpu_based_training.md).
<center>
| **Model** | **8 x 80G GPUs (GBS=128)** | **16 x 80G GPUs (GBS=256)** | **32 x 80G GPUs (GBS=512)** |
|:-----------------------:| :--------------------: | :---------------------: | :---------------------: |
| LLaMA 3.1 8B (baseline) | 81222 | 161319 | 321403 |
| Qwen3 8B | 55775 (-31.33%) | 109799 (-31.94%) | 219943 (-31.57%) |
| Ling-mini-2.0 | 109532 (+34.86%) | 221585 (+37.36%) | 448726 (+39.61%) |
| Ling-mini-2.0 w/o MTP | 128298 (+57.96%) | 307264 (+90.47%) | 611466 (+90.25%) |
</center>
### Finetuning
We recommend using [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ling](https://github.com/inclusionAI/Ling-V2/blob/main/docs/llamafactory_finetuning.md). Alternatively, you can use [Megatron for finetuning](https://github.com/inclusionAI/Ling-V2/blob/main/docs/megatron_sft_training.md).
## License
This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ling-V2/blob/master/LICENCE).
## Citation
If you find our work helpful, feel free to cite us.
```
```
|
gumperto/Qwen2.5-1.5B-Instruct-emergent-finetune-backwards_samples-all-full-r32
|
gumperto
| 2025-09-18T08:01:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"conversational",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T07:36:19Z |
---
base_model: unsloth/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-emergent-finetune-backwards_samples-all-full-r32
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-emergent-finetune-backwards_samples-all-full-r32
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gumperto/Qwen2.5-1.5B-Instruct-emergent-finetune-backwards_samples-all-full-r32", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gumperto-waseda-university/clarifying-em/runs/t5ry435y)
This model was trained with SFT.
### Framework versions
- TRL: 0.24.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.1.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
matthewwang15/12bd9c4b32d043c3
|
matthewwang15
| 2025-09-18T08:00:48Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-09-18T07:39:04Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
CL-Marketing/flux_0ntosman
|
CL-Marketing
| 2025-09-18T07:59:19Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-18T07:00:02Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ontosman
---
# Flux_0Ntosman
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ontosman` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ontosman",
"lora_weights": "https://huggingface.co/CL-Marketing/flux_0ntosman/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('CL-Marketing/flux_0ntosman', weight_name='lora.safetensors')
image = pipeline('ontosman').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2200
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/CL-Marketing/flux_0ntosman/discussions) to add images that show off what you’ve made with this LoRA.
|
zkaijong/gemma-3-270m-it-mas-tms
|
zkaijong
| 2025-09-18T07:59:01Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma3_text",
"arxiv:1910.09700",
"base_model:google/gemma-3-270m-it",
"base_model:adapter:google/gemma-3-270m-it",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-17T10:26:23Z |
---
base_model: google/gemma-3-270m-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
VoilaRaj/81_g_LJvHjx
|
VoilaRaj
| 2025-09-18T07:57:38Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-18T07:57:06Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
callcenterstudio/test-whisper-model-full16bit
|
callcenterstudio
| 2025-09-18T07:55:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/whisper-large-v3",
"base_model:finetune:unsloth/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-18T07:54:14Z |
---
base_model: unsloth/whisper-large-v3
tags:
- text-generation-inference
- transformers
- unsloth
- whisper
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** callcenterstudio
- **License:** apache-2.0
- **Finetuned from model :** unsloth/whisper-large-v3
This whisper model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
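A minimal usage sketch, assuming the uploaded checkpoint behaves as a standard Whisper model under the `transformers` ASR pipeline (the audio path below is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for automatic speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="callcenterstudio/test-whisper-model-full16bit",
)

# Transcribe a local audio file (placeholder path).
result = asr("sample_call.wav")
print(result["text"])
```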
|
Erzo-ai/Limy-erzo-fining
|
Erzo-ai
| 2025-09-18T07:54:52Z | 0 | 0 | null |
[
"pytorch",
"text-classification-from-scratch",
"fr",
"base_model:Clemylia/Limy-basique",
"base_model:finetune:Clemylia/Limy-basique",
"license:mit",
"region:us"
] | null | 2025-09-18T07:54:50Z |
---
language: fr
license: mit
base_model: Clemylia/Limy-basique
---
# Limy-erzo-fining
This model is a fine-tuned version of the `Clemylia/Limy-basique` model.
## Fine-Tuning Objective
The original model was further specialized to improve its ability to distinguish questions about **animals** (class 0) from questions about **capitals** (class 1). Training was performed on a custom dataset containing more complex and ambiguous sentences.
## How to Use
Usage is identical to the base model. You can load the classifier with PyTorch and use it to predict the class of a new question.
|
aristizabal24/meta-llama-3.1-8b-instruct-APPS-numberOfComments-mitigation-none
|
aristizabal24
| 2025-09-18T07:54:33Z | 37 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-10T11:12:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IDS75912/videomae-base-finetuned-snippets
|
IDS75912
| 2025-09-18T07:54:20Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2025-09-17T16:44:21Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-snippets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-snippets
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5554
- Accuracy: 0.736
## Model description
More information needed
## Intended uses & limitations
More information needed
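In the meantime, the checkpoint can presumably be loaded with the standard 🤗 `video-classification` pipeline. The snippet below is a non-authoritative sketch that assumes the repository ships the usual VideoMAE classification head and image-processor config; `path/to/snippet.mp4` is a placeholder.

```python
# Minimal inference sketch (assumption: standard VideoMAE classification head
# and preprocessing config are included in this repo).
# A video-decoding backend (decord or av, depending on the transformers
# version) must be installed.
from transformers import pipeline

classifier = pipeline(
    "video-classification",
    model="IDS75912/videomae-base-finetuned-snippets",
)

result = classifier("path/to/snippet.mp4")  # local path or URL to a short clip
print(result)  # e.g. [{"label": "...", "score": 0.73}, ...]
```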
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 212
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.7007 | 0.2547 | 54 | 0.7598 | 0.4783 |
| 0.6739 | 1.2547 | 108 | 0.6515 | 0.6304 |
| 0.6458 | 2.2547 | 162 | 0.6170 | 0.6522 |
| 0.5599 | 3.2358 | 212 | 0.5813 | 0.6957 |
### Framework versions
- Transformers 4.44.1
- Pytorch 2.5.1
- Datasets 3.3.2
- Tokenizers 0.19.1
|
VoilaRaj/81_g_HXJIfn
|
VoilaRaj
| 2025-09-18T07:52:34Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-18T07:51:52Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
diffusers-internal-dev/gemini-prompt-expander
|
diffusers-internal-dev
| 2025-09-18T07:52:21Z | 19 | 1 | null |
[
"custom_code",
"region:us"
] | null | 2025-08-08T11:24:00Z |
## Gemini Prompt Expander
```py
from diffusers.modular_pipelines import ModularPipelineBlocks
gemini_block = ModularPipelineBlocks.from_pretrained(
"diffusers-internal-dev/gemini-prompt-expander",
trust_remote_code=True,
)
gemini = gemini_block.init_pipeline()
output = gemini(prompt="a dog sitting by the river, watching the sunset")
print(f"{output.values['prompt']=}")
```
|
YmLee99/rl_course_vizdoom_health_gathering_supreme
|
YmLee99
| 2025-09-18T07:52:19Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-17T17:14:04Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 19.96 +/- 3.13
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r YmLee99/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.

## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may need to increase `--train_for_env_steps` to a suitably high value, since the experiment resumes from the step count at which it previously concluded.
|
Sourabh1172/layoutlmv3-document-classification_500
|
Sourabh1172
| 2025-09-18T07:51:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"layoutlmv3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-18T07:50:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
diffusers-internal-dev/canny-filtering
|
diffusers-internal-dev
| 2025-09-18T07:49:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-08T06:13:52Z |
## Canny filtering
```py
from diffusers.modular_pipelines import ModularPipelineBlocks
from diffusers.utils import load_image
canny_block = ModularPipelineBlocks.from_pretrained(
"diffusers-internal-dev/canny-filtering",
trust_remote_code=True,
)
canny = canny_block.init_pipeline()
output = canny(
image=load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")
)
print(f"{output.values['control_image'].size=}")
output.values["control_image"].save("canny.png")
```
|
gsjang/zh-llama-3-chinese-8b-instruct-x-meta-llama-3-8b-instruct-kv_fuse
|
gsjang
| 2025-09-18T07:49:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:hfl/llama-3-chinese-8b-instruct",
"base_model:merge:hfl/llama-3-chinese-8b-instruct",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T07:42:20Z |
---
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
- hfl/llama-3-chinese-8b-instruct
library_name: transformers
tags:
- mergekit
- merge
---
# zh-llama-3-chinese-8b-instruct-x-meta-llama-3-8b-instruct-kv_fuse
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the KV-Fuse (Fisher-bounded OT Memory Merging) merge method, with [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base.
### Models Merged
The following models were included in the merge:
* [hfl/llama-3-chinese-8b-instruct](https://huggingface.co/hfl/llama-3-chinese-8b-instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
tokenizer:
source: union
merge_method: kv_fuse
base_model: meta-llama/Meta-Llama-3-8B-Instruct
models:
- model: meta-llama/Meta-Llama-3-8B-Instruct
parameters: {}
- model: hfl/llama-3-chinese-8b-instruct
parameters: {}
parameters: {}
write_readme: README.md
```
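The card does not include a usage example. The sketch below is an editorial addition that assumes the merged checkpoint loads like any Llama-3-8B-Instruct model and that the union tokenizer carries a Llama-3-style chat template.

```python
# Minimal sketch for trying the merged model (not part of the original card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gsjang/zh-llama-3-chinese-8b-instruct-x-meta-llama-3-8b-instruct-kv_fuse"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "用一句话介绍一下你自己。"}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens.
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```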
|
callcenterstudio/test-whisper-model-lora
|
callcenterstudio
| 2025-09-18T07:49:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"whisper",
"trl",
"en",
"base_model:unsloth/whisper-large-v3",
"base_model:finetune:unsloth/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T07:48:59Z |
---
base_model: unsloth/whisper-large-v3
tags:
- text-generation-inference
- transformers
- unsloth
- whisper
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** callcenterstudio
- **License:** apache-2.0
- **Finetuned from model:** unsloth/whisper-large-v3
This Whisper model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
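No inference example is provided. Assuming this repository contains a PEFT LoRA adapter for `unsloth/whisper-large-v3` (the card does not state this explicitly), a transcription sketch might look like the following; the zero waveform is only a stand-in for real audio.

```python
# Hypothetical sketch: load the base Whisper model, then attach the LoRA
# adapter from this repo with PEFT (assumes the repo holds adapter weights).
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "unsloth/whisper-large-v3"
processor = WhisperProcessor.from_pretrained(base_id)
base = WhisperForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "callcenterstudio/test-whisper-model-lora")

# Replace the dummy waveform with real 16 kHz mono audio (e.g. via soundfile).
audio = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
features = inputs.input_features.to(model.device, dtype=torch.float16)

ids = model.generate(features)
print(processor.batch_decode(ids, skip_special_tokens=True))
```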
|
sidhantoon/hand25
|
sidhantoon
| 2025-09-18T07:48:50Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-18T07:44:51Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
sidhantoon/hand24
|
sidhantoon
| 2025-09-18T07:47:45Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-18T07:44:46Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Marszzibros/H200_16_medgemma_27b
|
Marszzibros
| 2025-09-18T07:46:38Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-27b-it",
"base_model:finetune:google/medgemma-27b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T21:07:20Z |
---
base_model: google/medgemma-27b-it
library_name: transformers
model_name: H200_16_medgemma_27b
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for H200_16_medgemma_27b
This model is a fine-tuned version of [google/medgemma-27b-it](https://huggingface.co/google/medgemma-27b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Marszzibros/H200_16_medgemma_27b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.56.1
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
zhouyik/zt_any_visual_prompt
|
zhouyik
| 2025-09-18T07:45:56Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-12-11T14:09:08Z |
---
license: apache-2.0
---
|
k1000dai/residualact_libero_object_no_tf_5
|
k1000dai
| 2025-09-18T07:45:01Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"residualact",
"robotics",
"dataset:k1000dai/libero-object-smolvla",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-18T07:44:46Z |
---
datasets: k1000dai/libero-object-smolvla
library_name: lerobot
license: apache-2.0
model_name: residualact
pipeline_tag: robotics
tags:
- residualact
- lerobot
- robotics
---
# Model Card for residualact
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
ACECA/lowMvMax_4
|
ACECA
| 2025-09-18T07:41:58Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-16T13:55:48Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
pihull/DeepSeek-R1-Distill-Llama-8B-tokenizer-with-thinking-content
|
pihull
| 2025-09-18T07:41:20Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T07:41:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Galileo73/JourneyCalculator
|
Galileo73
| 2025-09-18T07:41:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-18T07:39:06Z |
Meet Journey Calculator, a mobility and travel app **incubated by FasterCapital (Dubai)** and led by founder Giuseppe Toti. We’re seeking **strategic investors, partners, and co-founders** to be part of a global revolution in smart travel planning.
Journey Calculator transforms the way people travel. Our platform gives users **real-time access to all transport options—private cars, rentals, electric vehicles, ride-sharing (Uber), flights, trains, subways, buses, and bikes—on one seamless interface**. It’s the only app that lets users compare prices, timings, and book directly, making every journey faster, cheaper, and more dynamic.
With Journey Calculator, travelers don’t just reach destinations—they discover the **top 5 attractions on their route, unlock commercial opportunities, and experience localized advertising**. Partners can integrate their offers: every trip becomes a gateway for brands and businesses to connect with their ideal audience.
Scalable, innovative, and ready for the **luxury and business travel sector expansion**, our vision is to become the world’s most integrated solution for mobility, bookings, and commerce. We offer direct, data-driven advertising channels and the technology to enrich every journey—commercial and leisure.
We invite international investors and partners to join us in scaling Journey Calculator globally, building a high-impact team and capturing a vast, fast-growing market.
Contact: [email protected] | [email protected] | www.journeycalculatorapp.it
Presentation link in the comments.
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758181144
|
schooncestiaa
| 2025-09-18T07:40:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T07:40:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aristizabal24/meta-llama-3.1-8b-instruct-APPS-numberOfComments-mitigation-maxEntropy
|
aristizabal24
| 2025-09-18T07:40:09Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T17:01:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LZimmer/PREDICT-GBM-Models
|
LZimmer
| 2025-09-18T07:37:10Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-09-17T12:41:36Z |
---
license: mit
---
# Model Card for PREDICT-GBM dockerized growth models
## Model Description
Dockerized glioblastoma growth models intended to be used in tandem with https://github.com/BrainLesion/PredictGBM. The models read input tissue maps and tumor segmentations from a mounted directory and output a map of the tumor cell concentration. For more detailed information, refer to the publication and the GitHub repository.
## Citation
Soon.
|
hammh0a/command-a-translate-FP8-Dynamic
|
hammh0a
| 2025-09-18T07:34:21Z | 7 | 0 | null |
[
"safetensors",
"cohere2",
"base_model:CohereLabs/command-a-translate-08-2025",
"base_model:quantized:CohereLabs/command-a-translate-08-2025",
"compressed-tensors",
"region:us"
] | null | 2025-09-04T17:25:05Z |
---
base_model:
- CohereLabs/command-a-translate-08-2025
---
FP8 Quantized version of: [CohereLabs/command-a-translate-08-2025](https://huggingface.co/CohereLabs/command-a-translate-08-2025)
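The card gives no usage instructions. Checkpoints in the compressed-tensors FP8 format are commonly served with vLLM, so a minimal, non-authoritative sketch (assuming a vLLM build with Cohere2 and compressed-tensors support, and sufficient GPU memory for a model of this size) is:

```python
# Minimal serving sketch; hedged assumption: vLLM supports this architecture
# and quantization format in your environment. Not part of the original card.
from vllm import LLM, SamplingParams

llm = LLM(model="hammh0a/command-a-translate-FP8-Dynamic")
params = SamplingParams(temperature=0.0, max_tokens=256)

prompt = "Translate to French: The weather is lovely today."
print(llm.generate([prompt], params)[0].outputs[0].text)
```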
|
myfi/parser_model_ner_3.79_adapter
|
myfi
| 2025-09-18T07:33:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:finetune:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T07:32:58Z |
---
base_model: unsloth/Qwen3-4B-Instruct-2507
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** myfi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-Instruct-2507
This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
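No usage snippet is provided. Assuming the repository holds a PEFT LoRA adapter for `unsloth/Qwen3-4B-Instruct-2507` (suggested by the repo name but not stated in the card), it could be attached roughly as follows; the NER-style prompt is purely illustrative.

```python
# Hypothetical sketch: attach the adapter to its Qwen3 base with PEFT
# (assumes the repo contains LoRA adapter weights; not confirmed by the card).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen3-4B-Instruct-2507"
tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "myfi/parser_model_ner_3.79_adapter")

messages = [{"role": "user", "content": "Extract the named entities: Alice met Bob in Paris."}]
ids = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(ids, max_new_tokens=128)
print(tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))
```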
|
hammh0a/Hala-1.2B
|
hammh0a
| 2025-09-18T07:33:16Z | 3 | 1 | null |
[
"safetensors",
"lfm2",
"text-generation",
"conversational",
"ar",
"dataset:hammh0a/Hala-4.6M-SFT",
"arxiv:2509.14008",
"base_model:LiquidAI/LFM2-1.2B",
"base_model:finetune:LiquidAI/LFM2-1.2B",
"license:cc-by-nc-4.0",
"region:us"
] |
text-generation
| 2025-09-13T14:46:44Z |
---
license: cc-by-nc-4.0
datasets:
- hammh0a/Hala-4.6M-SFT
language:
- ar
base_model:
- LiquidAI/LFM2-1.2B
pipeline_tag: text-generation
---
# Hala: Arabic‑Centric Instruction & Translation Models
<p align="center">
<img src="https://i.ibb.co/pvhp1XfJ/halalogo.png" alt="Hala logo" width="550" />
</p>
**Paper**: *Hala Technical Report: Building Arabic‑Centric Instruction & Translation Models at Scale*
**Authors**: Hasan Abed Al Kader Hammoud\*, Mohammad Zbeeb\*, Bernard Ghanem
**Affiliation**: King Abdullah University of Science and Technology (KAUST)
\*Equal contribution
> In Arabic, **حلا** (Hala) conveys sweetness and beauty—qualities long associated with the language itself. In this spirit, we call our models **Hala**.
---
## 🔗 Quick Links
* **Models & Data (Hugging Face collection)**: [https://huggingface.co/collections/hammh0a/hala-68bf02b34a14b9f22305ab3a](https://huggingface.co/collections/hammh0a/hala-68bf02b34a14b9f22305ab3a)
* **Contact**: [[email protected]](mailto:[email protected])
---
## Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "hammh0a/Hala-1.2B" # pick a released Hala model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id, torch_dtype="auto", device_map="auto"
)
# Use chat template
messages = [
{"role": "system", "content": "أنت مساعد خبير في الفيزياء."},
{"role": "user", "content": "اشرح بإيجاز مبدأ الانحفاظ في الفيزياء، وأعطني مثالاً يومياً."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipe = pipeline("text-generation", model=model, tokenizer=tok)
out = pipe(prompt, max_new_tokens=256, do_sample=False)
print(out[0]["generated_text"])
```
---
## 📊 Results
*Hala models are placed at the end of each size category; best **Average** per category is in bold.*
### ≤2B parameters
| Size | Model Name | Params | AlGhafa | ArabicMMLU | EXAMS | MadinahQA | AraTrust | ArbMMLU‑HT | Average |
| ---- | -------------------------------------- | -----: | ------: | ---------: | ----: | --------: | -------: | ---------: | -------: |
| ≤2B | meta-llama/Llama-3.2-1B | 1B | 33.9 | 26.5 | 21.2 | 25.7 | 37.1 | 23.9 | 28.0 |
| ≤2B | Qwen/Qwen2-1.5B-Instruct | 1.5B | 53.1 | 49.2 | 35.2 | 45.5 | 68.9 | 37.4 | 48.2 |
| ≤2B | Qwen/Qwen2.5-1.5B-Instruct | 1.5B | 48.4 | 43.5 | 31.8 | 38.2 | 70.8 | 35.9 | 44.8 |
| ≤2B | Sakalti/Saka-1.5B | 1.5B | 51.4 | 40.0 | 31.3 | 31.5 | 47.5 | 33.5 | 39.2 |
| ≤2B | Qwen/Qwen3-1.7B-Base | 1.7B | 56.8 | 49.7 | 38.2 | 40.0 | 75.6 | 43.9 | 50.7 |
| ≤2B | Qwen/Qwen1.5-1.8B | 1.8B | 32.7 | 26.7 | 23.8 | 26.0 | 31.5 | 23.6 | 27.4 |
| ≤2B | silma-ai/SILMA-Kashif-2B-Instruct-v1.0 | 2B | 59.7 | 45.6 | 33.1 | 38.8 | 73.3 | 35.8 | 47.7 |
| ≤2B | google/gemma-2-2b-it | 2B | 34.1 | 30.1 | 23.6 | 20.1 | 31.2 | 23.4 | 27.1 |
| ≤2B | LiquidAI/LFM2-350M | 350M | 39.0 | 35.2 | 30.9 | 28.3 | 43.3 | 29.1 | 34.3 |
| ≤2B | **Hala‑350M** | 350M | 51.4 | 41.2 | 36.9 | 34.5 | 52.1 | 35.4 | 41.9 |
| ≤2B | LiquidAI/LFM2-700M | 700M | 50.1 | 38.3 | 34.3 | 32.5 | 56.3 | 37.2 | 41.4 |
| ≤2B | **Hala‑700M** | 700M | 55.5 | 45.9 | 40.6 | 34.7 | 65.2 | 39.4 | 46.9 |
| ≤2B | LiquidAI/LFM2-1.2B | 1.2B | 53.8 | 45.2 | 35.0 | 34.7 | 65.6 | 43.4 | 46.3 |
| ≤2B | **Hala‑1.2B** | 1.2B | 59.2 | 48.6 | 43.4 | 41.6 | 71.7 | 44.2 | **51.4** |
### 7B–9B parameters
| Size | Model Name | Params | AlGhafa | ArabicMMLU | EXAMS | MadinahQA | AraTrust | ArbMMLU‑HT | Average |
| ----- | ------------------------------------------- | -----: | ------: | ---------: | ----: | --------: | -------: | ---------: | -------: |
| 7B–9B | CohereForAI/c4ai-command-r7b-arabic-02-2025 | 7B | 74.8 | 59.3 | 65.0 | 63.8 | 80.5 | 50.1 | 65.6 |
| 7B–9B | JasperV13/Yehia-7B-DPO-Reasoning-preview | 7B | 75.1 | 66.3 | 51.8 | 54.9 | 81.9 | 55.1 | 64.2 |
| 7B–9B | Navid-AI/Yehia-7B-preview | 7B | 70.8 | 64.9 | 52.1 | 54.4 | 87.5 | 53.4 | 63.9 |
| 7B–9B | JasperV13/Yehia-7B-Reasoning-preview | 7B | 75.2 | 66.3 | 52.7 | 55.0 | 80.8 | 55.2 | 64.2 |
| 7B–9B | ALLaM-AI/ALLaM-7B-Instruct-preview | 7B | 69.5 | 64.9 | 51.6 | 54.2 | 86.9 | 52.8 | 63.3 |
| 7B–9B | Qwen/Qwen2-7B-Instruct | 7B | 73.2 | 60.0 | 47.3 | 59.5 | 82.8 | 51.3 | 62.4 |
| 7B–9B | Qwen/Qwen3-8B-Base | 8B | 74.8 | 65.0 | 52.5 | 52.2 | 83.4 | 61.5 | 64.9 |
| 7B–9B | QCRI/Fanar-1-9B-Instruct | 9B | 76.4 | 65.8 | 52.7 | 73.3 | 88.3 | 58.6 | 69.2 |
| 7B–9B | **Hala‑9B** | 9B | 78.3 | 65.6 | 53.8 | 70.4 | 89.6 | 61.4 | **69.9** |
> **Evaluation protocol**: `lighteval` on **ArabicMMLU (OALL‑2)** excluding AlRage.
---
## 📚 Citation
If you find **Hala** useful, please cite:
```bibtex
@misc{hammoud2025halatechnicalreportbuilding,
title={Hala Technical Report: Building Arabic-Centric Instruction & Translation Models at Scale},
author={Hasan Abed Al Kader Hammoud and Mohammad Zbeeb and Bernard Ghanem},
year={2025},
url={https://arxiv.org/abs/2509.14008},
}
```
|
hammh0a/Hala-700M
|
hammh0a
| 2025-09-18T07:33:00Z | 0 | 0 | null |
[
"safetensors",
"lfm2",
"text-generation",
"conversational",
"ar",
"dataset:hammh0a/Hala-4.6M-SFT",
"arxiv:2509.14008",
"base_model:LiquidAI/LFM2-700M",
"base_model:finetune:LiquidAI/LFM2-700M",
"license:cc-by-nc-4.0",
"region:us"
] |
text-generation
| 2025-09-13T14:52:30Z |
---
license: cc-by-nc-4.0
datasets:
- hammh0a/Hala-4.6M-SFT
language:
- ar
base_model:
- LiquidAI/LFM2-700M
pipeline_tag: text-generation
---
# Hala: Arabic‑Centric Instruction & Translation Models
<p align="center">
<img src="https://i.ibb.co/pvhp1XfJ/halalogo.png" alt="Hala logo" width="550" />
</p>
**Paper**: *Hala Technical Report: Building Arabic‑Centric Instruction & Translation Models at Scale*
**Authors**: Hasan Abed Al Kader Hammoud\*, Mohammad Zbeeb\*, Bernard Ghanem
**Affiliation**: King Abdullah University of Science and Technology (KAUST)
\*Equal contribution
> In Arabic, **حلا** (Hala) conveys sweetness and beauty—qualities long associated with the language itself. In this spirit, we call our models **Hala**.
---
## 🔗 Quick Links
* **Models & Data (Hugging Face collection)**: [https://huggingface.co/collections/hammh0a/hala-68bf02b34a14b9f22305ab3a](https://huggingface.co/collections/hammh0a/hala-68bf02b34a14b9f22305ab3a)
* **Contact**: [[email protected]](mailto:[email protected])
---
## Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "hammh0a/Hala-700M" # pick a released Hala model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id, torch_dtype="auto", device_map="auto"
)
# Use chat template
messages = [
{"role": "system", "content": "أنت مساعد خبير في الفيزياء."},
{"role": "user", "content": "اشرح بإيجاز مبدأ الانحفاظ في الفيزياء، وأعطني مثالاً يومياً."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipe = pipeline("text-generation", model=model, tokenizer=tok)
out = pipe(prompt, max_new_tokens=256, do_sample=False)
print(out[0]["generated_text"])
```
---
## 📊 Results
*Hala models are placed at the end of each size category; best **Average** per category is in bold.*
### ≤2B parameters
| Size | Model Name | Params | AlGhafa | ArabicMMLU | EXAMS | MadinahQA | AraTrust | ArbMMLU‑HT | Average |
| ---- | -------------------------------------- | -----: | ------: | ---------: | ----: | --------: | -------: | ---------: | -------: |
| ≤2B | meta-llama/Llama-3.2-1B | 1B | 33.9 | 26.5 | 21.2 | 25.7 | 37.1 | 23.9 | 28.0 |
| ≤2B | Qwen/Qwen2-1.5B-Instruct | 1.5B | 53.1 | 49.2 | 35.2 | 45.5 | 68.9 | 37.4 | 48.2 |
| ≤2B | Qwen/Qwen2.5-1.5B-Instruct | 1.5B | 48.4 | 43.5 | 31.8 | 38.2 | 70.8 | 35.9 | 44.8 |
| ≤2B | Sakalti/Saka-1.5B | 1.5B | 51.4 | 40.0 | 31.3 | 31.5 | 47.5 | 33.5 | 39.2 |
| ≤2B | Qwen/Qwen3-1.7B-Base | 1.7B | 56.8 | 49.7 | 38.2 | 40.0 | 75.6 | 43.9 | 50.7 |
| ≤2B | Qwen/Qwen1.5-1.8B | 1.8B | 32.7 | 26.7 | 23.8 | 26.0 | 31.5 | 23.6 | 27.4 |
| ≤2B | silma-ai/SILMA-Kashif-2B-Instruct-v1.0 | 2B | 59.7 | 45.6 | 33.1 | 38.8 | 73.3 | 35.8 | 47.7 |
| ≤2B | google/gemma-2-2b-it | 2B | 34.1 | 30.1 | 23.6 | 20.1 | 31.2 | 23.4 | 27.1 |
| ≤2B | LiquidAI/LFM2-350M | 350M | 39.0 | 35.2 | 30.9 | 28.3 | 43.3 | 29.1 | 34.3 |
| ≤2B | **Hala‑350M** | 350M | 51.4 | 41.2 | 36.9 | 34.5 | 52.1 | 35.4 | 41.9 |
| ≤2B | LiquidAI/LFM2-700M | 700M | 50.1 | 38.3 | 34.3 | 32.5 | 56.3 | 37.2 | 41.4 |
| ≤2B | **Hala‑700M** | 700M | 55.5 | 45.9 | 40.6 | 34.7 | 65.2 | 39.4 | 46.9 |
| ≤2B | LiquidAI/LFM2-1.2B | 1.2B | 53.8 | 45.2 | 35.0 | 34.7 | 65.6 | 43.4 | 46.3 |
| ≤2B | **Hala‑1.2B** | 1.2B | 59.2 | 48.6 | 43.4 | 41.6 | 71.7 | 44.2 | **51.4** |
### 7B–9B parameters
| Size | Model Name | Params | AlGhafa | ArabicMMLU | EXAMS | MadinahQA | AraTrust | ArbMMLU‑HT | Average |
| ----- | ------------------------------------------- | -----: | ------: | ---------: | ----: | --------: | -------: | ---------: | -------: |
| 7B–9B | CohereForAI/c4ai-command-r7b-arabic-02-2025 | 7B | 74.8 | 59.3 | 65.0 | 63.8 | 80.5 | 50.1 | 65.6 |
| 7B–9B | JasperV13/Yehia-7B-DPO-Reasoning-preview | 7B | 75.1 | 66.3 | 51.8 | 54.9 | 81.9 | 55.1 | 64.2 |
| 7B–9B | Navid-AI/Yehia-7B-preview | 7B | 70.8 | 64.9 | 52.1 | 54.4 | 87.5 | 53.4 | 63.9 |
| 7B–9B | JasperV13/Yehia-7B-Reasoning-preview | 7B | 75.2 | 66.3 | 52.7 | 55.0 | 80.8 | 55.2 | 64.2 |
| 7B–9B | ALLaM-AI/ALLaM-7B-Instruct-preview | 7B | 69.5 | 64.9 | 51.6 | 54.2 | 86.9 | 52.8 | 63.3 |
| 7B–9B | Qwen/Qwen2-7B-Instruct | 7B | 73.2 | 60.0 | 47.3 | 59.5 | 82.8 | 51.3 | 62.4 |
| 7B–9B | Qwen/Qwen3-8B-Base | 8B | 74.8 | 65.0 | 52.5 | 52.2 | 83.4 | 61.5 | 64.9 |
| 7B–9B | QCRI/Fanar-1-9B-Instruct | 9B | 76.4 | 65.8 | 52.7 | 73.3 | 88.3 | 58.6 | 69.2 |
| 7B–9B | **Hala‑9B** | 9B | 78.3 | 65.6 | 53.8 | 70.4 | 89.6 | 61.4 | **69.9** |
> **Evaluation protocol**: `lighteval` on **ArabicMMLU (OALL‑2)** excluding AlRage.
---
## 📚 Citation
If you find **Hala** useful, please cite:
```bibtex
@misc{hammoud2025halatechnicalreportbuilding,
title={Hala Technical Report: Building Arabic-Centric Instruction & Translation Models at Scale},
author={Hasan Abed Al Kader Hammoud and Mohammad Zbeeb and Bernard Ghanem},
year={2025},
url={https://arxiv.org/abs/2509.14008},
}
```
|
Park-Hip-02/cafebert_3.0_royal-sweep-1
|
Park-Hip-02
| 2025-09-18T07:32:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-18T07:31:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NanEi/fr_sealion-v3-burmese-fine-tuned-adapter-v1-5
|
NanEi
| 2025-09-18T07:32:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T07:31:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Deepnoid/M4CT-LOC-2025-08-22
|
Deepnoid
| 2025-09-18T07:30:35Z | 12 | 0 | null |
[
"safetensors",
"custom_code",
"region:us"
] | null | 2025-09-17T09:32:04Z |
## 🎬 Get Started
```python
# inference.py
import warnings
import torch
import torch.nn.functional as F
import numpy as np
import SimpleITK as sitk
from transformers import AutoModel, AutoTokenizer
from processing import preprocess_volume, normalize_volume, process_class_prompts
from const import TARGET_SHAPE, TARGET_SPACING, PATHOLOGIES, TRHSH_DICT, PIX_THRSH
from utils import similarity_map_to_prob_map, boxes_from_prob_map, visualize
# Suppress specific warnings for cleaner logs
warnings.filterwarnings("ignore", category=UserWarning)
def load_model(device, dtype):
tokenizer = AutoTokenizer.from_pretrained("Deepnoid/M4CT-LOC-2025-08-22")
model = AutoModel.from_pretrained(
"Deepnoid/M4CT-LOC-2025-08-22",
trust_remote_code=True,
torch_dtype=dtype,
device_map=device,
)
model.to(device)
model.to(dtype)
model.eval()
models = {
"tokenizer": tokenizer,
"model": model,
}
return models
@torch.no_grad()
def model_inference(image, model, tokenizer):
vol_tensor, itk = preprocess_volume(image)
vol_tensor = normalize_volume(vol_tensor.to(device).to(dtype))
text_batch = process_class_prompts(PATHOLOGIES, tokenizer, model)
out = model.compute_logits(
pixel_values_videos=vol_tensor,
encoded_key_phrases=[text_batch["encoded_key_phrases"]],
)
logits = out["logits"]
C = logits.size(1) // 2
probs = torch.sigmoid(logits[:, :C])
probs = probs[0].float().detach().cpu().numpy()
patch_sim = out["similarity_scores"][0, :C]
frame_length = (
model.config.vision_config.num_channels
* model.config.vision_config.tubelet_size
)
reshape_size = model.config.vision_config.image_size // model.config.vision_config.patch_size
h_target, w_target, d_target = TARGET_SHAPE # (480, 480, 240)
ref_arr = sitk.GetArrayFromImage(itk) # (z, y, x) = (D, H, W)
d_orig, h_orig, w_orig = ref_arr.shape
sx, sy, sz = itk.GetSpacing()
scaling = [
sz / TARGET_SPACING[0],
sy / TARGET_SPACING[1],
sx / TARGET_SPACING[2],
]
d_reshape = int(d_orig * scaling[0])
h_reshape = int(h_orig * scaling[1])
w_reshape = int(w_orig * scaling[2])
bboxes, masks = dict(), dict()
for i, abn in enumerate(PATHOLOGIES):
# empty list if classification result is negative
if probs[i] < TRHSH_DICT[abn]:
bboxes[abn] = []
continue
sim_scores = patch_sim[i].reshape(
d_target // frame_length,
reshape_size,
reshape_size,
)
# (D, W, H)
sim_scores = F.interpolate(
sim_scores.unsqueeze(0).unsqueeze(1),
size=(d_target, w_target, h_target),
mode="trilinear",
align_corners=False,
).squeeze()
prob_map = similarity_map_to_prob_map(
sim_scores,
target_shape=(h_target, w_target, d_target),
reshaped=(h_reshape, w_reshape, d_reshape),
orig_shape=(h_orig, w_orig, d_orig),
)
# similarity map to bbox dict
bboxes[abn] = boxes_from_prob_map(
prob_map=prob_map,
abnormality=abn,
prob_thresh=PIX_THRSH,
itk_img=itk,
)
masks[abn] = (prob_map > PIX_THRSH).astype(np.uint8)
return bboxes, masks, itk
if __name__ == "__main__":
import requests
import os
# load Chest CT image (.mha file)
image_url = "https://github.com/forithmus/VLM3D-Dockers/raw/refs/heads/main/example_gt_data/abnormality_localization_example/f13978c0-b141-4893-b68f-be83bc612901.mha"
response = requests.get(image_url)
response.raise_for_status()
current_dir = os.getcwd()
filename = image_url.split("/")[-1]
img_path = os.path.join(current_dir, filename)
with open(img_path, "wb") as f:
f.write(response.content)
# Setup constant
device = torch.device("cuda")
dtype = torch.bfloat16
# load models
models = load_model(device, dtype)
bboxes, masks, itk = model_inference(img_path, **models)
print(bboxes)
# visualize results
ref_arr = sitk.GetArrayFromImage(itk)
save_path = os.path.join(current_dir, "visualization")
os.makedirs(save_path, exist_ok=True)
visualize(ref_arr, bboxes, masks, save_path=save_path)
os.remove(img_path)
```
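The script above imports local helper modules (`processing.py`, `const.py`, `utils.py`). Assuming these ship alongside the weights in this model repository, one way to pull everything down before running `inference.py` is a `snapshot_download` (a sketch, not part of the official instructions):
```python
from huggingface_hub import snapshot_download
# Download the full repository contents (weights plus the helper modules the script imports);
# run inference.py from inside this directory so the local imports resolve.
local_dir = snapshot_download(repo_id="Deepnoid/M4CT-LOC-2025-08-22")
print(local_dir)
```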
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758180540
|
schooncestiaa
| 2025-09-18T07:30:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T07:29:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gsjang/zh-llama-3-chinese-8b-instruct-x-meta-llama-3-8b-instruct-kvx_merge
|
gsjang
| 2025-09-18T07:30:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:hfl/llama-3-chinese-8b-instruct",
"base_model:merge:hfl/llama-3-chinese-8b-instruct",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T07:24:12Z |
---
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
- hfl/llama-3-chinese-8b-instruct
library_name: transformers
tags:
- mergekit
- merge
---
# zh-llama-3-chinese-8b-instruct-x-meta-llama-3-8b-instruct-kvx_merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the KVX-Merge (KV-Consensus + Orthogonal eXpansion) merge method using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [hfl/llama-3-chinese-8b-instruct](https://huggingface.co/hfl/llama-3-chinese-8b-instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
tokenizer:
source: union
merge_method: kvx_merge
base_model: meta-llama/Meta-Llama-3-8B-Instruct
models:
- model: meta-llama/Meta-Llama-3-8B-Instruct
parameters: {}
- model: hfl/llama-3-chinese-8b-instruct
parameters: {}
parameters: {}
write_readme: README.md
```
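For completeness, the merged checkpoint loads like any other Llama-3-style causal LM. A minimal usage sketch (assuming a standard Transformers setup; illustrative only, not an official example from the authors):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "gsjang/zh-llama-3-chinese-8b-instruct-x-meta-llama-3-8b-instruct-kvx_merge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
# Simple prompt routed through the chat template
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Briefly introduce yourself in Chinese."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```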
|
testmymodel112/Affine-5DpzqLi8akY2Lx1Ju5o6W5bfEnV7f1Wk7BF1pMaq6N4PRA4R
|
testmymodel112
| 2025-09-18T07:29:01Z | 276 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"vllm",
"conversational",
"arxiv:2508.10925",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"mxfp4",
"region:us"
] |
text-generation
| 2025-09-03T03:02:38Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
---
<p align="center">
<img alt="gpt-oss-120b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-120b.svg">
</p>
<p align="center">
<a href="https://gpt-oss.com"><strong>Try gpt-oss-120b1</strong></a> ·
<a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
<a href="https://arxiv.org/abs/2508.10925"><strong>Model card</strong></a> ·
<a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
</p>
<br>
Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of these open models:
- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100) (117B parameters with 5.1B active parameters)
- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format, as they will not work correctly otherwise.
> [!NOTE]
> This model card is dedicated to the larger `gpt-oss-120b` model. Check out [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) for the smaller model.
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization.
---
# Inference examples
## Transformers
You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.
To get started, install the necessary dependencies to setup your environment:
```
pip install -U transformers kernels torch
```
Once set up, you can proceed to run the model with the snippet below:
```py
from transformers import pipeline
import torch
model_id = "openai/gpt-oss-120b"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver:
```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-120b
```
[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
## vLLM
vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
```bash
uv pip install --pre vllm==0.10.1+gptoss \
--extra-index-url https://wheels.vllm.ai/gpt-oss/ \
--extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
--index-strategy unsafe-best-match
vllm serve openai/gpt-oss-120b
```
[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)
## PyTorch / Triton
To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).
## Ollama
If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).
```bash
# gpt-oss-120b
ollama pull gpt-oss:120b
ollama run gpt-oss:120b
```
[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)
#### LM Studio
If you are using [LM Studio](https://lmstudio.ai/), you can use the following command to download the model.
```bash
# gpt-oss-120b
lms get openai/gpt-oss-120b
```
Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.
---
# Download the model
You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly with the Hugging Face CLI:
```shell
# gpt-oss-120b
huggingface-cli download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/
pip install gpt-oss
python -m gpt_oss.chat model/
```
# Reasoning levels
You can adjust the reasoning level that suits your task across three levels:
* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.
The reasoning level can be set in the system prompts, e.g., "Reasoning: high".
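With the Transformers chat template, the level can be requested through the system message. A short sketch building on the pipeline example above (the phrasing follows the "Reasoning: high" convention noted here; adjust as needed):
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="openai/gpt-oss-120b", torch_dtype="auto", device_map="auto")
# Request high reasoning effort via the system prompt, per the convention above
messages = [
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Explain MXFP4 quantization in two sentences."},
]
outputs = pipe(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1])
```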
# Tool use
The gpt-oss models are excellent for:
* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks
# Fine-tuning
Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This larger model `gpt-oss-120b` can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware.
# Citation
```bibtex
@misc{openai2025gptoss120bgptoss20bmodel,
title={gpt-oss-120b & gpt-oss-20b Model Card},
author={OpenAI},
year={2025},
eprint={2508.10925},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.10925},
}
```
|
dhdbsrlw/OSPO-Janus-Pro-7B
|
dhdbsrlw
| 2025-09-18T07:28:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"multi_modality",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T07:25:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Rico-Yangzm/dpo-sft_cp2000-mistral-24b-2501-origin
|
Rico-Yangzm
| 2025-09-18T07:25:30Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:mistralai/Mistral-Small-24B-Instruct-2501",
"base_model:finetune:mistralai/Mistral-Small-24B-Instruct-2501",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T07:13:07Z |
---
base_model: mistralai/Mistral-Small-24B-Instruct-2501
library_name: transformers
model_name: dpo_mistral_namo_01
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for dpo_mistral_namo_01
This model is a fine-tuned version of [mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/2995541719-huazhong-university-of-science-and-technology/huggingface/runs/n1ndzavx)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
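For reference, a minimal TRL DPO fine-tuning sketch (illustrative only: the preference dataset and hyperparameters below are placeholders, not the actual training setup of this checkpoint):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer
model_id = "mistralai/Mistral-Small-24B-Instruct-2501"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Placeholder preference dataset with "prompt"/"chosen"/"rejected" columns
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo_mistral_namo_01", beta=0.1),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```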
### Framework versions
- TRL: 0.21.0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Prosho/sentinel-src-24
|
Prosho
| 2025-09-18T07:24:47Z | 14 | 1 |
transformers
|
[
"transformers",
"SENTINEL-SRC-MQM",
"translation",
"multilingual",
"arxiv:2508.10175",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-05-03T18:14:39Z |
---
pipeline_tag: translation
language: multilingual
library_name: transformers
base_model:
- FacebookAI/xlm-roberta-large
license: apache-2.0
---
<div align="center">
<h1 style="font-family: 'Arial', sans-serif; font-size: 28px; font-weight: bold; color: black;">
📊 Estimating Machine Translation Difficulty
</h1>
</div>
<div style="display:flex; justify-content: center; align-items: center; flex-direction: row;">
<a href="https://arxiv.org/abs/2508.10175"><img src="https://img.shields.io/badge/arXiv-2508.10175-b31b1b.svg"></a>
<a href="https://huggingface.co/collections/Prosho/translation-difficulty-estimators-6816665c008e1d22426eb6c4"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Collection-FCD21D"></a>
</div>
This repository contains one of the two **SENTINEL<sub>SRC</sub>** metric models analyzed in our paper **Estimating Machine Translation Difficulty**.
## Usage
To run this model, install the following git repository:
```bash
pip install git+https://github.com/prosho-97/guardians-mt-eval
```
After that, you can use this model within Python in the following way:
```python
from sentinel_metric import download_model, load_from_checkpoint
model_path = download_model("Prosho/sentinel-src-24")
model = load_from_checkpoint(model_path)
data = [
{"src": "Please sign the form."},
{"src": "He spilled the beans, then backpedaled—talk about mixed signals!"}
]
output = model.predict(data, batch_size=8, gpus=1)
```
Output:
```python
# Segment scores
>>> output.scores
[0.5726182460784912, -0.12408381700515747]
# System score
>>> output.system_score
0.22426721453666687
```
The higher the output score, the easier the input source text is to translate.
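Because higher scores mean easier sources, the segment scores can be used directly to rank inputs by estimated translation difficulty. A small usage sketch reusing only the API shown above:
```python
from sentinel_metric import download_model, load_from_checkpoint
# Rank source segments from hardest (lowest score) to easiest (highest score)
model = load_from_checkpoint(download_model("Prosho/sentinel-src-24"))
data = [
    {"src": "Please sign the form."},
    {"src": "He spilled the beans, then backpedaled—talk about mixed signals!"},
]
output = model.predict(data, batch_size=8, gpus=1)
for score, item in sorted(zip(output.scores, data), key=lambda p: p[0]):
    print(f"{score:.3f}\t{item['src']}")
```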
## Cite this work
This work has been accepted at [EMNLP 2025](https://2025.emnlp.org/). If you use any part of this work, please consider citing our paper as follows:
```bibtex
@misc{proietti2025estimatingmachinetranslationdifficulty,
title={Estimating Machine Translation Difficulty},
author={Lorenzo Proietti and Stefano Perrella and Vilém Zouhar and Roberto Navigli and Tom Kocmi},
year={2025},
eprint={2508.10175},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.10175},
}
```
|
Mikecantdothis/ppo-LunarLander-v3
|
Mikecantdothis
| 2025-09-18T07:24:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-18T07:23:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: 275.12 +/- 19.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v3**
This is a trained model of a **PPO** agent playing **LunarLander-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename `ppo-LunarLander-v3.zip` is an assumption; adjust it to the actual file stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
# Download the checkpoint from the Hub (filename assumed) and load the trained PPO agent
checkpoint = load_from_hub(repo_id="Mikecantdothis/ppo-LunarLander-v3", filename="ppo-LunarLander-v3.zip")
model = PPO.load(checkpoint)
```
|
miromind-ai/MiroThinker-8B-SFT-v0.1
|
miromind-ai
| 2025-09-18T07:22:48Z | 202 | 16 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"agent",
"open-source",
"miromind",
"conversational",
"en",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T07:17:14Z |
---
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-8B
tags:
- agent
- open-source
- miromind
new_version: miromind-ai/MiroThinker-8B-SFT-v0.2
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68525b342230a897a65cc1c0/87mYQ_a-4jpnMkVR4hrgm.png" width="55%" alt="MiroThinker" />
</div>
<!-- <hr> -->
<div align="center">
[Online Demo](https://dr.miromind.ai/)
[Model Collection](https://huggingface.co/collections/miromind-ai/mirothinker-v01-689301b6d0563321862d44a1)
[MiroVerse Dataset](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1)
[Blog](https://miromind.ai/blog/miromind-research-agent)
[GitHub](https://github.com/MiroMindAI/MiroThinker)
[Discord](https://discord.com/invite/GPqEnkzQZd)
[WeChat](https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/wechat.png)
[RedNote](https://www.xiaohongshu.com/user/profile/5e353bd80000000001000239)
[Website](https://miromind.ai/)
</div>
## Introduction
MiroThinker is an open-source agentic model series built on top of Qwen3. Designed for deep research and complex, long-horizon problem solving, it integrates strong capabilities in task decomposition, multi-hop reasoning, retrieval-augmented generation, code execution, web browsing, and document/file processing, making it suitable for a wide range of real-world applications.
We have released the MiroThinker-v0.1 series, including both SFT and DPO variants at parameter scales of 8B, 14B, and 32B. Notably, MiroThinker v0.1 achieves state-of-the-art performance among open-source models on the [GAIA benchmark](https://huggingface.co/datasets/gaia-benchmark/GAIA), a rigorous evaluation suite for advanced agentic capabilities, demonstrating its strength in long-context, decision-intensive, and real-world task scenarios.
## Online Demo
Welcome to try out our online demo [here](https://dr.miromind.ai/).
## Performance
> [!IMPORTANT]
> <div>
> To prevent data leakage during searches, we block Hugging Face domains to ensure the model doesn't access answers through shortcuts.
> </div>
### GAIA Benchmark
| **Method** | Text-103<br>Best Pass@1 | Text-103<br>Pass@1 (Avg@8) | Val-165<br>Best Pass@1 | Val-165<br>Pass@1 (Avg@8) |
| ----------------------------------------------------------------- | :--: | :--: | :--: | :--: |
| Search-o1-7B | 17.5 | - | - | - |
| R1-Searcher-7B | 20.4 | - | - | - |
| WebDancer-7B | 31.0 | - | - | - |
| WebSailor-7B | 37.9 | - | - | - |
| CK-Pro-8B | 43.7 | - | 35.2 | - |
| MiroThinker-8B-SFT-v0.1 | 44.7 | 40.1 | 34.6 | 31.8 |
| + Commercial Tools | 46.6 | 42.1 | 37.6 | 33.9 |
| MiroThinker-8B-DPO-v0.1 | 46.6 | 44.8 | 37.0 | 35.4 |
| + Commercial Tools | 50.5 | 46.7 | 38.2 | 35.9 |
| | | | | |
| Search-o1-32B | 28.2 | - | - | - |
| WebThinker-32B-RL | 48.5 | - | - | - |
| WebDancer-QwQ-32B | 51.5 | - | - | - |
| WebSailor-32B | 53.2 | - | - | - |
| WebShaper-QwQ-32B | 53.3 | - | - | - |
| WebShaper-72B | 60.1 | - | - | - |
| MiroThinker-14B-SFT-v0.1 | 47.6 | 44.4 | 37.0 | 34.4 |
| + Commercial Tools | 49.5 | 47.5 | 41.8 | 39.8 |
| MiroThinker-14B-DPO-v0.1 | 48.5 | 46.6 | 42.4 | 39.2 |
| + Commercial Tools | 52.4 | 48.5 | 45.5 | 42.0 |
| MiroThinker-32B-SFT-v0.1 | 55.3 | 51.3 | 44.9 | 42.7 |
| + Commercial Tools | 58.3 | 54.2 | 48.5 | 45.8 |
| <span style="white-space:nowrap;">MiroThinker-32B-DPO-v0.1</span> | 57.3 | 54.1 | 48.5 | 45.9 |
| + Commercial Tools | **60.2** | **57.9** | **50.9** | **48.9** |
1. Following the practices of WebThinker, WebAgents, and CognitiveKernel, we report the Best Pass@1, the highest score across three runs, which often reflects stronger performance, though it may exhibit some variability. To provide a more stable measure, we additionally report Pass@1 (Avg@8), which offers greater consistency at the cost of slightly lower scores.
2. For consistency with prior open-source works, we evaluate GAIA-Text-103 using the WebAgents LLM-as-judge template, and report results on GAIA-Val-165 using the official GAIA scorer script.
3. By default, we use open-source tools wherever possible, except for the code tool [E2B](https://github.com/e2b-dev/E2B) and the Google search tool [Serper](https://serper.dev/). We use [Whisper](https://huggingface.co/openai/whisper-large-v3-turbo), [Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct), and [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) in our implementation. The framework can be easily extended to other open-source tools of your choice.
4. Commercial tools were mainly used for multimodal capabilities and certain complex reasoning subtasks. The majority of tasks, including planning, browsing, refinement, navigation, and more, were handled by our models.
### More Benchmarks
| Method | HLE<br>Pass@1 | Frames<br>Pass@1 | BrowseComp<br>Pass@1 | <span style="white-space:nowrap;">BrowseComp-ZH</span><br>Pass@1 | WebWalkerQA<br>Pass@1 |
|-------------------------------------------------------------------|:-------------:|:----------------:|:--------------------:|:----------------------------------------------------------------:|:---------------------:|
| OpenAI Deep Research | 26.6 | - | 51.5 | 42.9 | - |
| Gemini Deep Research | 26.9 | - | - | - | - |
| Kimi-Researcher | 26.9 | 78.8 | - | - | - |
| | | | | | |
| WebDancer-7B | - | - | - | - | 36.0 |
| WebSailor-7B | - | - | 6.7 | 14.2 | - |
| MiroThinker-8B-SFT-v0.1 | - | 58.0 | 5.5 | 9.3 | 41.3 |
| MiroThinker-8B-DPO-v0.1 | - | 64.4 | 8.7 | 13.5 | 45.7 |
| | | | | | |
| WebThinker-32B-RL | - | - | - | - | 46.5 |
| WebDancer-QwQ-32B | - | - | 3.8 | 18.0 | 47.9 |
| WebSailor-32B | - | - | 10.5 | 25.5 | - |
| WebShaper-32B | - | - | - | - | 51.4 |
| MiroThinker-32B-SFT-v0.1 | 10.2 | 70.4 | 10.6 | 13.8 | 45.7 |
| <span style="white-space:nowrap;">MiroThinker-32B-DPO-v0.1</span> | 11.8 | 71.7 | 13.0 | 17.0 | 49.3 |
1. MiroThinker’s performance was tested with [this repository](https://github.com/MiroMindAI/MiroThinker) and open-source tools; other models’ results are from their papers and official sites.
2. As [MiroVerse-v0.1](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1) mainly contains English data, the model’s Chinese capability is limited. We plan to add more Chinese data in the next version.
## Quick Start
MiroThinker-v0.1 is trained on our large-scale, high-quality trajectory and preference datasets [MiroVerse-v0.1](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1), utilizing the efficient training framework [MiroTrain](https://github.com/MiroMindAI/MiroTrain), and enhanced with tool-use capabilities through our agentic framework [MiroFlow](https://github.com/MiroMindAI/MiroFlow).
To promote reproducibility and benefit the community, we decided to open-source the entire suite mentioned above. For more technical details, evaluation results, and usage tutorials, please visit our [GitHub repository](https://github.com/MiroMindAI/MiroThinker).
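For quick local experimentation outside the full agentic stack, the checkpoint can also be loaded as a standard Transformers chat model. A minimal sketch (assuming the usual Qwen3-style chat interface; the tool-calling environment comes from MiroFlow and is not reproduced here):
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="miromind-ai/MiroThinker-8B-SFT-v0.1", torch_dtype="auto", device_map="auto")
messages = [{"role": "user", "content": "Outline a step-by-step research plan for comparing two open-source agent frameworks."}]
outputs = pipe(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1])
```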
## License
MiroThinker-v0.1 is licensed under Apache 2.0.
## Contact Us
MiroThinker is developed by the MiroMind Foundation Model Team.
If you would like to leave us a message, feel free to get in touch.
In addition to [GitHub](https://github.com/MiroMindAI/),
[Discord](https://discord.com/invite/GPqEnkzQZd),
[WeChat](https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/wechat.png),
and [RedNote](https://www.xiaohongshu.com/user/profile/5e353bd80000000001000239),
you can also reach us via email at [email protected].
|
miromind-ai/MiroThinker-14B-SFT-v0.1
|
miromind-ai
| 2025-09-18T07:22:27Z | 26 | 11 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"agent",
"open-source",
"miromind",
"conversational",
"en",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T07:19:11Z |
---
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-14B
tags:
- agent
- open-source
- miromind
new_version: miromind-ai/MiroThinker-14B-SFT-v0.2
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68525b342230a897a65cc1c0/87mYQ_a-4jpnMkVR4hrgm.png" width="55%" alt="MiroThinker" />
</div>
<!-- <hr> -->
<div align="center">
[Online Demo](https://dr.miromind.ai/)
[Model Collection](https://huggingface.co/collections/miromind-ai/mirothinker-v01-689301b6d0563321862d44a1)
[MiroVerse Dataset](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1)
[Blog](https://miromind.ai/blog/miromind-research-agent)
[GitHub](https://github.com/MiroMindAI/MiroThinker)
[Discord](https://discord.com/invite/GPqEnkzQZd)
[WeChat](https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/wechat.png)
[RedNote](https://www.xiaohongshu.com/user/profile/5e353bd80000000001000239)
[Website](https://miromind.ai/)
</div>
## Introduction
MiroThinker is an open-source agentic model series built on top of Qwen3. Designed for deep research and complex, long-horizon problem solving, it integrates strong capabilities in task decomposition, multi-hop reasoning, retrieval-augmented generation, code execution, web browsing, and document/file processing, making it suitable for a wide range of real-world applications.
We have released the MiroThinker-v0.1 series, including both SFT and DPO variants at parameter scales of 8B, 14B, and 32B. Notably, MiroThinker v0.1 achieves state-of-the-art performance among open-source models on the [GAIA benchmark](https://huggingface.co/datasets/gaia-benchmark/GAIA), a rigorous evaluation suite for advanced agentic capabilities, demonstrating its strength in long-context, decision-intensive, and real-world task scenarios.
## Online Demo
Welcome to try out our online demo [here](https://dr.miromind.ai/).
## Performance
> [!IMPORTANT]
> <div>
> To prevent data leakage during searches, we block Hugging Face domains to ensure the model doesn't access answers through shortcuts.
> </div>
### GAIA Benchmark
| **Method** | Text-103<br>Best Pass@1 | Text-103<br>Pass@1 (Avg@8) | Val-165<br>Best Pass@1 | Val-165<br>Pass@1 (Avg@8) |
| ----------------------------------------------------------------- | :--: | :--: | :--: | :--: |
| Search-o1-7B | 17.5 | - | - | - |
| R1-Searcher-7B | 20.4 | - | - | - |
| WebDancer-7B | 31.0 | - | - | - |
| WebSailor-7B | 37.9 | - | - | - |
| CK-Pro-8B | 43.7 | - | 35.2 | - |
| MiroThinker-8B-SFT-v0.1 | 44.7 | 40.1 | 34.6 | 31.8 |
| + Commercial Tools | 46.6 | 42.1 | 37.6 | 33.9 |
| MiroThinker-8B-DPO-v0.1 | 46.6 | 44.8 | 37.0 | 35.4 |
| + Commercial Tools | 50.5 | 46.7 | 38.2 | 35.9 |
| | | | | |
| Search-o1-32B | 28.2 | - | - | - |
| WebThinker-32B-RL | 48.5 | - | - | - |
| WebDancer-QwQ-32B | 51.5 | - | - | - |
| WebSailor-32B | 53.2 | - | - | - |
| WebShaper-QwQ-32B | 53.3 | - | - | - |
| WebShaper-72B | 60.1 | - | - | - |
| MiroThinker-14B-SFT-v0.1 | 47.6 | 44.4 | 37.0 | 34.4 |
| + Commercial Tools | 49.5 | 47.5 | 41.8 | 39.8 |
| MiroThinker-14B-DPO-v0.1 | 48.5 | 46.6 | 42.4 | 39.2 |
| + Commercial Tools | 52.4 | 48.5 | 45.5 | 42.0 |
| MiroThinker-32B-SFT-v0.1 | 55.3 | 51.3 | 44.9 | 42.7 |
| + Commercial Tools | 58.3 | 54.2 | 48.5 | 45.8 |
| <span style="white-space:nowrap;">MiroThinker-32B-DPO-v0.1</span> | 57.3 | 54.1 | 48.5 | 45.9 |
| + Commercial Tools | **60.2** | **57.9** | **50.9** | **48.9** |
1. Following the practices of WebThinker, WebAgents, and CognitiveKernel, we report the Best Pass@1, the highest score across three runs, which often reflects stronger performance, though it may exhibit some variability. To provide a more stable measure, we additionally report Pass@1 (Avg@8), which offers greater consistency at the cost of slightly lower scores.
2. For consistency with prior open-source works, we evaluate GAIA-Text-103 using the WebAgents LLM-as-judge template, and report results on GAIA-Val-165 using the official GAIA scorer script.
3. By default, we use open-source tools wherever possible, except for the code tool [E2B](https://github.com/e2b-dev/E2B) and the Google search tool [Serper](https://serper.dev/). We use [Whisper](https://huggingface.co/openai/whisper-large-v3-turbo), [Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct), and [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) in our implementation. The framework can be easily extended to other open-source tools of your choice.
4. Commercial tools were mainly used for multimodal capabilities and certain complex reasoning subtasks. The majority of tasks, including planning, browsing, refinement, navigation, and more, were handled by our models.
### More Benchmarks
| Method | HLE<br>Pass@1 | Frames<br>Pass@1 | BrowseComp<br>Pass@1 | <span style="white-space:nowrap;">BrowseComp-ZH</span><br>Pass@1 | WebWalkerQA<br>Pass@1 |
|-------------------------------------------------------------------|:-------------:|:----------------:|:--------------------:|:----------------------------------------------------------------:|:---------------------:|
| OpenAI Deep Research | 26.6 | - | 51.5 | 42.9 | - |
| Gemini Deep Research | 26.9 | - | - | - | - |
| Kimi-Researcher | 26.9 | 78.8 | - | - | - |
| | | | | | |
| WebDancer-7B | - | - | - | - | 36.0 |
| WebSailor-7B | - | - | 6.7 | 14.2 | - |
| MiroThinker-8B-SFT-v0.1 | - | 58.0 | 5.5 | 9.3 | 41.3 |
| MiroThinker-8B-DPO-v0.1 | - | 64.4 | 8.7 | 13.5 | 45.7 |
| | | | | | |
| WebThinker-32B-RL | - | - | - | - | 46.5 |
| WebDancer-QwQ-32B | - | - | 3.8 | 18.0 | 47.9 |
| WebSailor-32B | - | - | 10.5 | 25.5 | - |
| WebShaper-32B | - | - | - | - | 51.4 |
| MiroThinker-32B-SFT-v0.1 | 10.2 | 70.4 | 10.6 | 13.8 | 45.7 |
| <span style="white-space:nowrap;">MiroThinker-32B-DPO-v0.1</span> | 11.8 | 71.7 | 13.0 | 17.0 | 49.3 |
1. MiroThinker’s performance was tested with [this repository](https://github.com/MiroMindAI/MiroThinker) and open-source tools; other models’ results are from their papers and official sites.
2. As [MiroVerse-v0.1](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1) mainly contains English data, the model’s Chinese capability is limited. We plan to add more Chinese data in the next version.
## Quick Start
MiroThinker-v0.1 is trained on our large-scale, high-quality trajectory and preference datasets [MiroVerse-v0.1](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1), utilizing the efficient training framework [MiroTrain](https://github.com/MiroMindAI/MiroTrain), and enhanced with tool-use capabilities through our agentic framework [MiroFlow](https://github.com/MiroMindAI/MiroFlow).
To promote reproducibility and benefit the community, we decided to open-source the entire suite mentioned above. For more technical details, evaluation results, and usage tutorials, please visit our [GitHub repository](https://github.com/MiroMindAI/MiroThinker).
## License
MiroThinker-v0.1 is licensed under Apache 2.0.
## Contact Us
MiroThinker is developed by the MiroMind Foundation Model Team.
If you would like to leave us a message, feel free to get in touch.
In addition to [GitHub](https://github.com/MiroMindAI/),
[Discord](https://discord.com/invite/GPqEnkzQZd),
[WeChat](https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/wechat.png),
and [RedNote](https://www.xiaohongshu.com/user/profile/5e353bd80000000001000239),
you can also reach us via email at [email protected].
|
miromind-ai/MiroThinker-14B-DPO-v0.1
|
miromind-ai
| 2025-09-18T07:22:19Z | 27 | 13 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"agent",
"open-source",
"miromind",
"conversational",
"en",
"base_model:miromind-ai/MiroThinker-14B-SFT-v0.1",
"base_model:finetune:miromind-ai/MiroThinker-14B-SFT-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T07:19:25Z |
---
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
language:
- en
base_model:
- miromind-ai/MiroThinker-14B-SFT-v0.1
tags:
- agent
- open-source
- miromind
new_version: miromind-ai/MiroThinker-14B-DPO-v0.2
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68525b342230a897a65cc1c0/87mYQ_a-4jpnMkVR4hrgm.png" width="55%" alt="MiroThinker" />
</div>
<!-- <hr> -->
<div align="center">
[Online Demo](https://dr.miromind.ai/)
[Model Collection](https://huggingface.co/collections/miromind-ai/mirothinker-v01-689301b6d0563321862d44a1)
[MiroVerse Dataset](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1)
[Blog](https://miromind.ai/blog/miromind-research-agent)
[GitHub](https://github.com/MiroMindAI/MiroThinker)
[Discord](https://discord.com/invite/GPqEnkzQZd)
[WeChat](https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/wechat.png)
[RedNote](https://www.xiaohongshu.com/user/profile/5e353bd80000000001000239)
[Website](https://miromind.ai/)
</div>
## Introduction
MiroThinker is an open-source agentic model series built on top of Qwen3. Designed for deep research and complex, long-horizon problem solving, it integrates strong capabilities in task decomposition, multi-hop reasoning, retrieval-augmented generation, code execution, web browsing, and document/file processing, making it suitable for a wide range of real-world applications.
We have released the MiroThinker-v0.1 series, including both SFT and DPO variants at parameter scales of 8B, 14B, and 32B. Notably, MiroThinker v0.1 achieves state-of-the-art performance among open-source models on the [GAIA benchmark](https://huggingface.co/datasets/gaia-benchmark/GAIA), a rigorous evaluation suite for advanced agentic capabilities, demonstrating its strength in long-context, decision-intensive, and real-world task scenarios.
## Online Demo
Welcome to try out our online demo [here](https://dr.miromind.ai/).
## Performance
> [!IMPORTANT]
> <div>
> To prevent data leakage during searches, we block Hugging Face domains to ensure the model doesn't access answers through shortcuts.
> </div>
### GAIA Benchmark
| **Method** | Text-103<br>Best Pass@1 | Text-103<br>Pass@1 (Avg@8) | Val-165<br>Best Pass@1 | Val-165<br>Pass@1 (Avg@8) |
| ----------------------------------------------------------------- | :--: | :--: | :--: | :--: |
| Search-o1-7B | 17.5 | - | - | - |
| R1-Searcher-7B | 20.4 | - | - | - |
| WebDancer-7B | 31.0 | - | - | - |
| WebSailor-7B | 37.9 | - | - | - |
| CK-Pro-8B | 43.7 | - | 35.2 | - |
| MiroThinker-8B-SFT-v0.1 | 44.7 | 40.1 | 34.6 | 31.8 |
| + Commercial Tools | 46.6 | 42.1 | 37.6 | 33.9 |
| MiroThinker-8B-DPO-v0.1 | 46.6 | 44.8 | 37.0 | 35.4 |
| + Commercial Tools | 50.5 | 46.7 | 38.2 | 35.9 |
| | | | | |
| Search-o1-32B | 28.2 | - | - | - |
| WebThinker-32B-RL | 48.5 | - | - | - |
| WebDancer-QwQ-32B | 51.5 | - | - | - |
| WebSailor-32B | 53.2 | - | - | - |
| WebShaper-QwQ-32B | 53.3 | - | - | - |
| WebShaper-72B | 60.1 | - | - | - |
| MiroThinker-14B-SFT-v0.1 | 47.6 | 44.4 | 37.0 | 34.4 |
| + Commercial Tools | 49.5 | 47.5 | 41.8 | 39.8 |
| MiroThinker-14B-DPO-v0.1 | 48.5 | 46.6 | 42.4 | 39.2 |
| + Commercial Tools | 52.4 | 48.5 | 45.5 | 42.0 |
| MiroThinker-32B-SFT-v0.1 | 55.3 | 51.3 | 44.9 | 42.7 |
| + Commercial Tools | 58.3 | 54.2 | 48.5 | 45.8 |
| <span style="white-space:nowrap;">MiroThinker-32B-DPO-v0.1</span> | 57.3 | 54.1 | 48.5 | 45.9 |
| + Commercial Tools | **60.2** | **57.9** | **50.9** | **48.9** |
1. Following the practices of WebThinker, WebAgents, and CognitiveKernel, we report the Best Pass@1, the highest score across three runs, which often reflects stronger performance, though it may exhibit some variability. To provide a more stable measure, we additionally report Pass@1 (Avg@8), which offers greater consistency at the cost of slightly lower scores.
2. For consistency with prior open-source works, we evaluate GAIA-Text-103 using the WebAgents LLM-as-judge template, and report results on GAIA-Val-165 using the official GAIA scorer script.
3. By default, we use open-source tools wherever possible, except for the code tool [E2B](https://github.com/e2b-dev/E2B) and the Google search tool [Serper](https://serper.dev/). We use [Whisper](https://huggingface.co/openai/whisper-large-v3-turbo), [Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct), and [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) in our implementation. The framework can be easily extended to other open-source tools of your choice.
4. Commercial tools were mainly used for multimodal capabilities and certain complex reasoning subtasks. The majority of tasks, including planning, browsing, refinement, navigation, and more, were handled by our models.
### More Benchmarks
| Method | HLE<br>Pass@1 | Frames<br>Pass@1 | BrowseComp<br>Pass@1 | <span style="white-space:nowrap;">BrowseComp-ZH</span><br>Pass@1 | WebWalkerQA<br>Pass@1 |
|-------------------------------------------------------------------|:-------------:|:----------------:|:--------------------:|:----------------------------------------------------------------:|:---------------------:|
| OpenAI Deep Research | 26.6 | - | 51.5 | 42.9 | - |
| Gemini Deep Research | 26.9 | - | - | - | - |
| Kimi-Researcher | 26.9 | 78.8 | - | - | - |
| | | | | | |
| WebDancer-7B | - | - | - | - | 36.0 |
| WebSailor-7B | - | - | 6.7 | 14.2 | - |
| MiroThinker-8B-SFT-v0.1 | - | 58.0 | 5.5 | 9.3 | 41.3 |
| MiroThinker-8B-DPO-v0.1 | - | 64.4 | 8.7 | 13.5 | 45.7 |
| | | | | | |
| WebThinker-32B-RL | - | - | - | - | 46.5 |
| WebDancer-QwQ-32B | - | - | 3.8 | 18.0 | 47.9 |
| WebSailor-32B | - | - | 10.5 | 25.5 | - |
| WebShaper-32B | - | - | - | - | 51.4 |
| MiroThinker-32B-SFT-v0.1 | 10.2 | 70.4 | 10.6 | 13.8 | 45.7 |
| <span style="white-space:nowrap;">MiroThinker-32B-DPO-v0.1</span> | 11.8 | 71.7 | 13.0 | 17.0 | 49.3 |
1. MiroThinker’s performance was tested with [this repository](https://github.com/MiroMindAI/MiroThinker) and open-source tools; other models’ results are from their papers and official sites.
2. As [MiroVerse-v0.1](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1) mainly contains English data, the model’s Chinese capability is limited. We plan to add more Chinese data in the next version.
## Quick Start
MiroThinker-v0.1 is trained on our large-scale, high-quality trajectory and preference datasets [MiroVerse-v0.1](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1), utilizing the efficient training framework [MiroTrain](https://github.com/MiroMindAI/MiroTrain), and enhanced with tool-use capabilities through our agentic framework [MiroFlow](https://github.com/MiroMindAI/MiroFlow).
To promote reproducibility and benefit the community, we decided to open-source the entire suite mentioned above. For more technical details, evaluation results, and usage tutorials, please visit our [GitHub repository](https://github.com/MiroMindAI/MiroThinker).
## License
MiroThinker-v0.1 is licensed under Apache 2.0.
## Contact Us
MiroThinker is developed by the MiroMind Foundation Model Team.
If you would like to leave us a message, feel free to get in touch.
In addition to [GitHub](https://github.com/MiroMindAI/),
[Discord](https://discord.com/invite/GPqEnkzQZd),
[WeChat](https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/wechat.png),
and [RedNote](https://www.xiaohongshu.com/user/profile/5e353bd80000000001000239),
you can also reach us via email at [email protected].
|
ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_cluster2_split_0_2048
|
ChenWu98
| 2025-09-18T07:20:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T07:17:00Z |
---
base_model: Qwen/Qwen2.5-3B
library_name: transformers
model_name: numina_qwen_2.5_3b_sft_teachers_no_reasoning_cluster2_split_0_2048
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_3b_sft_teachers_no_reasoning_cluster2_split_0_2048
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/usa88jw7)
This model was trained with SFT.
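For reference, a minimal TRL SFT sketch (illustrative only: the dataset below is a placeholder, not the actual training data for this checkpoint):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
# Placeholder chat-style dataset; substitute the real SFT data
dataset = load_dataset("trl-lib/Capybara", split="train")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-3B",
    args=SFTConfig(output_dir="numina_qwen_2.5_3b_sft_teachers_no_reasoning_cluster2_split_0_2048"),
    train_dataset=dataset,
)
trainer.train()
```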
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
miromind-ai/MiroThinker-4B-SFT-v0.2
|
miromind-ai
| 2025-09-18T07:20:20Z | 32 | 3 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"agent",
"open-source",
"miromind",
"conversational",
"en",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T14:35:54Z |
---
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-4B
tags:
- agent
- open-source
- miromind
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68525b342230a897a65cc1c0/87mYQ_a-4jpnMkVR4hrgm.png" width="55%" alt="MiroThinker" />
</div>
<!-- <hr> -->
<div align="center">
[Online Demo](https://dr.miromind.ai/)
[Model Collection](https://huggingface.co/collections/miromind-ai/mirothinker-v02-68af084a18035f57b17cd902)
[MiroVerse Dataset](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1)
[Blog](https://miromind.ai/blog/miromind-research-agent)
[GitHub](https://github.com/MiroMindAI/MiroThinker)
[Discord](https://discord.com/invite/GPqEnkzQZd)
[WeChat](https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/wechat.png)
[RedNote](https://www.xiaohongshu.com/user/profile/5e353bd80000000001000239)
[Website](https://miromind.ai/)
</div>
## Introduction
MiroThinker is an open-source agentic model series. Designed as a research agent for complex, long-horizon problem solving, it integrates strong capabilities in task decomposition, multi-hop reasoning, retrieval-augmented generation, code execution, web browsing, and document/file processing, enabling a wide range of real-world applications.
In MiroThinker-v0.2, we introduced three key improvements:
- **Richer training data** from both English and Chinese sources, yielding significant gains in benchmark performance and generalization.
- **Unified DPO training** with a single preference dataset across all models.
- **Extended context length** from 40k to 64k for more challenging multi-turn tool-use tasks.
Compared to v0.1, MiroThinker-v0.2 delivers consistent gains across benchmarks. For example, scores improved from **57.3 → 64.1** on **GAIA-Text-103** and from **17.0 → 29.4** on **BrowseComp-ZH**, reflecting substantial advancements in the model’s general research agent capabilities.
<div>
<img src="https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/MiroThinker_v0.2_Performance_2.png" width="100%" alt="MiroThinker" />
</div>
## Online Demo
Welcome to try out our online demo [here](https://dr.miromind.ai/).
## Performance
> [!IMPORTANT]
> <div>
> To prevent data leakage during searches, we block Hugging Face domains to ensure the model doesn't access answers through shortcuts.
> </div>
### Comparison with SOTA Research Agents
<div>
<img src="https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/MiroThinker_v0.2_Performance_0.png" width="100%" alt="MiroThinker" />
</div>
### GAIA Benchmark
<div>
<img src="https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/MiroThinker_v0.2_Performance_1.png" width="100%" alt="MiroThinker" />
</div>
## Quick Start
MiroThinker-v0.2 is trained on our large-scale, high-quality trajectory and preference datasets MiroVerse-v0.2, utilizing the efficient training framework [MiroTrain](https://github.com/MiroMindAI/MiroTrain), and enhanced with tool-use capabilities through our agentic framework [MiroFlow](https://github.com/MiroMindAI/MiroFlow).
To promote reproducibility and benefit the community, we decided to open-source the entire suite mentioned above. For more technical details, evaluation results, and usage tutorials, please visit our [GitHub repository](https://github.com/MiroMindAI/MiroThinker).
## License
MiroThinker-v0.2 is licensed under Apache 2.0.
## Contact Us
MiroThinker is developed by the MiroMind Foundation Model Team.
If you would like to leave us a message, feel free to get in touch.
In addition to [GitHub](https://github.com/MiroMindAI/),
[Discord](https://discord.com/invite/GPqEnkzQZd),
[WeChat](https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/wechat.png),
and [RedNote](https://www.xiaohongshu.com/user/profile/5e353bd80000000001000239),
you can also reach us via email at [email protected].
|
miromind-ai/MiroThinker-8B-SFT-v0.2
|
miromind-ai
| 2025-09-18T07:20:04Z | 27 | 2 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"agent",
"open-source",
"miromind",
"conversational",
"en",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T14:35:26Z |
---
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-8B
tags:
- agent
- open-source
- miromind
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68525b342230a897a65cc1c0/87mYQ_a-4jpnMkVR4hrgm.png" width="55%" alt="MiroThinker" />
</div>
<!-- <hr> -->
<div align="center">
[Online Demo](https://dr.miromind.ai/)
[Model Collection](https://huggingface.co/collections/miromind-ai/mirothinker-v02-68af084a18035f57b17cd902)
[MiroVerse Dataset](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1)
[Blog](https://miromind.ai/blog/miromind-research-agent)
[GitHub](https://github.com/MiroMindAI/MiroThinker)
[Discord](https://discord.com/invite/GPqEnkzQZd)
[WeChat](https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/wechat.png)
[RedNote](https://www.xiaohongshu.com/user/profile/5e353bd80000000001000239)
[Website](https://miromind.ai/)
</div>
## Introduction
MiroThinker is an open-source agentic model series. Designed as a research agent for complex, long-horizon problem solving, it integrates strong capabilities in task decomposition, multi-hop reasoning, retrieval-augmented generation, code execution, web browsing, and document/file processing, enabling a wide range of real-world applications.
In MiroThinker-v0.2, we introduced three key improvements:
- **Richer training data** from both English and Chinese sources, yielding significant gains in benchmark performance and generalization.
- **Unified DPO training** with a single preference dataset across all models.
- **Extended context length** from 40k to 64k for more challenging multi-turn tool-use tasks.
Compared to v0.1, MiroThinker-v0.2 delivers consistent gains across benchmarks. For example, scores improved from **57.3 → 64.1** on **GAIA-Text-103** and from **17.0 → 29.4** on **BrowseComp-ZH**, reflecting substantial advancements in the model’s general research agent capabilities.
<div>
<img src="https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/MiroThinker_v0.2_Performance_2.png" width="100%" alt="MiroThinker" />
</div>
## Online Demo
Try our online demo [here](https://dr.miromind.ai/).
## Performance
> [!IMPORTANT]
> <div>
> To prevent data leakage during searches, we block Hugging Face domains to ensure the model doesn't access answers through shortcuts.
> </div>
### Comparison with SOTA Research Agents
<div>
<img src="https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/MiroThinker_v0.2_Performance_0.png" width="100%" alt="MiroThinker" />
</div>
### GAIA Benchmark
<div>
<img src="https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/MiroThinker_v0.2_Performance_1.png" width="100%" alt="MiroThinker" />
</div>
## Quick Start
MiroThinker-v0.2 is trained on our large-scale, high-quality trajectory and preference datasets MiroVerse-v0.2, utilizing the efficient training framework [MiroTrain](https://github.com/MiroMindAI/MiroTrain), and enhanced with tool-use capabilities through our agentic framework [MiroFlow](https://github.com/MiroMindAI/MiroFlow).
To promote reproducibility and benefit the community, we decided to open-source the entire suite mentioned above. For more technical details, evaluation results, and usage tutorials, please visit our [GitHub repository](https://github.com/MiroMindAI/MiroThinker).
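For plain chat inference outside the MiroFlow tool loop, the checkpoint can presumably be loaded like any Qwen3-based chat model with 🤗 Transformers; the sketch below makes that assumption, while tool use (search, code execution, browsing) is handled by MiroFlow.
```python
# Minimal sketch: plain chat inference via the Transformers pipeline.
# Assumes the checkpoint behaves like a standard Qwen3-based chat model;
# the agentic tool loop (search, code execution, browsing) lives in MiroFlow.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="miromind-ai/MiroThinker-8B-SFT-v0.2",
    torch_dtype="auto",
    device_map="auto",
)
messages = [{"role": "user", "content": "Outline a research plan for comparing two open-source agent frameworks."}]
print(chat(messages, max_new_tokens=512, return_full_text=False)[0]["generated_text"])
```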
## License
MiroThinker-v0.2 is licensed under Apache 2.0.
## Contact Us
MiroThinker is developed by the MiroMind Foundation Model Team.
If you would like to leave us a message, feel free to get in touch.
In addition to [GitHub](https://github.com/MiroMindAI/),
[Discord](https://discord.com/invite/GPqEnkzQZd),
[WeChat](https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/wechat.png),
and [RedNote](https://www.xiaohongshu.com/user/profile/5e353bd80000000001000239),
you can also reach us via email at [email protected].
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758179914
|
schooncestiaa
| 2025-09-18T07:19:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T07:19:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
miromind-ai/MiroThinker-14B-SFT-v0.2
|
miromind-ai
| 2025-09-18T07:19:26Z | 24 | 2 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"agent",
"open-source",
"miromind",
"conversational",
"en",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T14:35:40Z |
---
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-14B
tags:
- agent
- open-source
- miromind
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68525b342230a897a65cc1c0/87mYQ_a-4jpnMkVR4hrgm.png" width="55%" alt="MiroThinker" />
</div>
<!-- <hr> -->
<div align="center">
[](https://dr.miromind.ai/)
[](https://huggingface.co/collections/miromind-ai/mirothinker-v02-68af084a18035f57b17cd902)
[](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1)
[](https://miromind.ai/blog/miromind-research-agent)
[](https://github.com/MiroMindAI/MiroThinker)
[](https://discord.com/invite/GPqEnkzQZd)
[](https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/wechat.png)
[](https://www.xiaohongshu.com/user/profile/5e353bd80000000001000239)
[](https://miromind.ai/)
</div>
## Introduction
MiroThinker is an open-source agentic model series. Designed as a research agent for complex, long-horizon problem solving, it integrates strong capabilities in task decomposition, multi-hop reasoning, retrieval-augmented generation, code execution, web browsing, and document/file processing, enabling a wide range of real-world applications.
In MiroThinker-v0.2, we introduced three key improvements:
- **Richer training data** from both English and Chinese sources, yielding significant gains in benchmark performance and generalization.
- **Unified DPO training** with a single preference dataset across all models.
- **Extended context length** from 40k to 64k for more challenging multi-turn tool-use tasks.
Compared to v0.1, MiroThinker-v0.2 delivers consistent gains across benchmarks. For example, scores improved from **57.3 → 64.1** on **GAIA-Text-103** and from **17.0 → 29.4** on **BrowseComp-ZH**, reflecting substantial advancements in the model’s general research agent capabilities.
<div>
<img src="https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/MiroThinker_v0.2_Performance_2.png" width="100%" alt="MiroThinker" />
</div>
## Online Demo
Try our online demo [here](https://dr.miromind.ai/).
## Performance
> [!IMPORTANT]
> <div>
> To prevent data leakage during searches, we block Hugging Face domains to ensure the model doesn't access answers through shortcuts.
> </div>
### Comparison with SOTA Research Agents
<div>
<img src="https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/MiroThinker_v0.2_Performance_0.png" width="100%" alt="MiroThinker" />
</div>
### GAIA Benchmark
<div>
<img src="https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/MiroThinker_v0.2_Performance_1.png" width="100%" alt="MiroThinker" />
</div>
## Quick Start
MiroThinker-v0.2 is trained on our large-scale, high-quality trajectory and preference datasets MiroVerse-v0.2, utilizing the efficient training framework [MiroTrain](https://github.com/MiroMindAI/MiroTrain), and enhanced with tool-use capabilities through our agentic framework [MiroFlow](https://github.com/MiroMindAI/MiroFlow).
To promote reproducibility and benefit the community, we decided to open-source the entire suite mentioned above. For more technical details, evaluation results, and usage tutorials, please visit our [GitHub repository](https://github.com/MiroMindAI/MiroThinker).
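For plain chat inference outside the MiroFlow tool loop, the sketch below assumes the checkpoint uses standard Qwen3-style chat templating and loads like any 🤗 Transformers causal LM.
```python
# Minimal sketch: load the checkpoint and apply its chat template directly.
# Assumes standard Qwen3-style chat templating; tool calling is handled by MiroFlow.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "miromind-ai/MiroThinker-14B-SFT-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Decompose this task: survey recent work on retrieval-augmented generation."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```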
## License
MiroThinker-v0.2 is licensed under Apache 2.0.
## Contact Us
MiroThinker is developed by the MiroMind Foundation Model Team.
If you would like to leave us a message, feel free to get in touch.
In addition to [GitHub](https://github.com/MiroMindAI/),
[Discord](https://discord.com/invite/GPqEnkzQZd),
[WeChat](https://huggingface.co/datasets/miromind-ai/MiroFlow-Benchmarks/resolve/main/assets/wechat.png),
and [RedNote](https://www.xiaohongshu.com/user/profile/5e353bd80000000001000239),
you can also reach us via email at [email protected].
|
haongn/business_license_clf-4-scale-0.5-1.0
|
haongn
| 2025-09-18T07:16:21Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"efficientnet",
"image-classification",
"generated_from_trainer",
"base_model:google/efficientnet-b0",
"base_model:finetune:google/efficientnet-b0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-09-18T06:53:22Z |
---
library_name: transformers
license: apache-2.0
base_model: google/efficientnet-b0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: business_license_clf-4-scale-0.5-1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# business_license_clf-4-scale-0.5-1.0
This model is a fine-tuned version of [google/efficientnet-b0](https://huggingface.co/google/efficientnet-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0207
- Accuracy: 1.0
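Since the checkpoint exposes a standard 🤗 image-classification head, inference should look roughly like the sketch below; `license_scan.jpg` is a placeholder path, not a file from this repository.
```python
# Minimal inference sketch for the fine-tuned EfficientNet-B0 classifier.
# "license_scan.jpg" is a placeholder; replace it with your own image path or PIL image.
from transformers import pipeline

classifier = pipeline("image-classification", model="haongn/business_license_clf-4-scale-0.5-1.0")
print(classifier("license_scan.jpg"))
```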
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 0.6505 | 0.6207 |
| 0.6483 | 2.0 | 12 | 0.5842 | 0.8046 |
| 0.6483 | 3.0 | 18 | 0.5401 | 0.8046 |
| 0.5986 | 4.0 | 24 | 0.4790 | 0.8506 |
| 0.4847 | 5.0 | 30 | 0.4355 | 0.9195 |
| 0.4847 | 6.0 | 36 | 0.3172 | 0.8851 |
| 0.3688 | 7.0 | 42 | 0.2810 | 0.9425 |
| 0.3688 | 8.0 | 48 | 0.2401 | 0.9425 |
| 0.2409 | 9.0 | 54 | 0.2387 | 0.9080 |
| 0.1902 | 10.0 | 60 | 0.1450 | 0.9770 |
| 0.1902 | 11.0 | 66 | 0.1108 | 0.9770 |
| 0.1332 | 12.0 | 72 | 0.1246 | 1.0 |
| 0.1332 | 13.0 | 78 | 0.0944 | 0.9885 |
| 0.1056 | 14.0 | 84 | 0.1437 | 0.9540 |
| 0.1005 | 15.0 | 90 | 0.0639 | 0.9885 |
| 0.1005 | 16.0 | 96 | 0.0822 | 0.9770 |
| 0.069 | 17.0 | 102 | 0.0613 | 0.9885 |
| 0.069 | 18.0 | 108 | 0.0448 | 1.0 |
| 0.0573 | 19.0 | 114 | 0.1275 | 0.9655 |
| 0.0603 | 20.0 | 120 | 0.0425 | 1.0 |
| 0.0603 | 21.0 | 126 | 0.0457 | 0.9885 |
| 0.0599 | 22.0 | 132 | 0.0636 | 0.9770 |
| 0.0599 | 23.0 | 138 | 0.0416 | 1.0 |
| 0.0409 | 24.0 | 144 | 0.0533 | 1.0 |
| 0.0611 | 25.0 | 150 | 0.1619 | 0.9655 |
| 0.0611 | 26.0 | 156 | 0.0533 | 0.9770 |
| 0.0454 | 27.0 | 162 | 0.0309 | 1.0 |
| 0.0454 | 28.0 | 168 | 0.0387 | 0.9885 |
| 0.0337 | 29.0 | 174 | 0.0949 | 0.9655 |
| 0.0556 | 30.0 | 180 | 0.0485 | 0.9655 |
| 0.0556 | 31.0 | 186 | 0.0285 | 1.0 |
| 0.0429 | 32.0 | 192 | 0.0283 | 1.0 |
| 0.0429 | 33.0 | 198 | 0.0385 | 1.0 |
| 0.0392 | 34.0 | 204 | 0.0355 | 0.9770 |
| 0.0466 | 35.0 | 210 | 0.0885 | 0.9655 |
| 0.0466 | 36.0 | 216 | 0.0196 | 1.0 |
| 0.0298 | 37.0 | 222 | 0.0258 | 1.0 |
| 0.0298 | 38.0 | 228 | 0.0269 | 1.0 |
| 0.0285 | 39.0 | 234 | 0.0254 | 1.0 |
| 0.0482 | 40.0 | 240 | 0.0290 | 0.9770 |
| 0.0482 | 41.0 | 246 | 0.0269 | 0.9885 |
| 0.0341 | 42.0 | 252 | 0.0297 | 0.9885 |
| 0.0341 | 43.0 | 258 | 0.1727 | 0.9655 |
| 0.0214 | 44.0 | 264 | 0.0211 | 1.0 |
| 0.0219 | 45.0 | 270 | 0.0332 | 0.9770 |
| 0.0219 | 46.0 | 276 | 0.0236 | 1.0 |
| 0.0315 | 47.0 | 282 | 0.0391 | 0.9770 |
| 0.0315 | 48.0 | 288 | 0.0201 | 1.0 |
| 0.0326 | 49.0 | 294 | 0.0229 | 1.0 |
| 0.0367 | 50.0 | 300 | 0.0207 | 1.0 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
FinancialSupport/gpt-oss-120b-lora-8xh100
|
FinancialSupport
| 2025-09-18T07:15:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T05:43:29Z |
---
base_model: openai/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-120b-lora-8xh100
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-120b-lora-8xh100
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FinancialSupport/gpt-oss-120b-lora-8xh100", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
gsjang/ja-llama-3-swallow-8b-instruct-v0.1-x-meta-llama-3-8b-instruct-lcmb_merge
|
gsjang
| 2025-09-18T07:11:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1",
"base_model:merge:tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T07:05:15Z |
---
base_model:
- tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# ja-llama-3-swallow-8b-instruct-v0.1-x-meta-llama-3-8b-instruct-lcmb_merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged with the LCMB-Merge (KV Barycentric + Unbalanced OT) merge method, using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base.
### Models Merged
The following models were included in the merge:
* [tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
tokenizer:
source: union
merge_method: lcmb_merge
base_model: meta-llama/Meta-Llama-3-8B-Instruct
models:
- model: meta-llama/Meta-Llama-3-8B-Instruct
parameters: {}
- model: tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1
parameters: {}
parameters: {}
write_readme: README.md
```
|
BishopBom/blockassist
|
BishopBom
| 2025-09-18T07:08:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"crested hardy caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-16T14:46:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- crested hardy caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Muse-Picaro-12B-0.5-v2-GGUF
|
mradermacher
| 2025-09-18T07:06:31Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:AliCat2/Muse-Picaro-12B-0.5-v2",
"base_model:quantized:AliCat2/Muse-Picaro-12B-0.5-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-18T06:02:56Z |
---
base_model: AliCat2/Muse-Picaro-12B-0.5-v2
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/AliCat2/Muse-Picaro-12B-0.5-v2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Muse-Picaro-12B-0.5-v2-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Muse-Picaro-12B-0.5-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
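As one possible route, a single quant can be fetched and run locally with `huggingface_hub` and `llama-cpp-python`; the sketch below picks the Q4_K_S file from the table, and the sampling settings are only illustrative.
```python
# Sketch: download one quant and run it with llama-cpp-python.
# The filename matches the Q4_K_S entry in the table below; adjust quant and settings to taste.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Muse-Picaro-12B-0.5-v2-GGUF",
    filename="Muse-Picaro-12B-0.5-v2.Q4_K_S.gguf",
)
llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=-1)  # -1 offloads all layers when a GPU is available
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a two-sentence opening for a mystery story."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```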
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Muse-Picaro-12B-0.5-v2-GGUF/resolve/main/Muse-Picaro-12B-0.5-v2.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Muse-Picaro-12B-0.5-v2-GGUF/resolve/main/Muse-Picaro-12B-0.5-v2.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Muse-Picaro-12B-0.5-v2-GGUF/resolve/main/Muse-Picaro-12B-0.5-v2.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Muse-Picaro-12B-0.5-v2-GGUF/resolve/main/Muse-Picaro-12B-0.5-v2.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Muse-Picaro-12B-0.5-v2-GGUF/resolve/main/Muse-Picaro-12B-0.5-v2.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Muse-Picaro-12B-0.5-v2-GGUF/resolve/main/Muse-Picaro-12B-0.5-v2.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Muse-Picaro-12B-0.5-v2-GGUF/resolve/main/Muse-Picaro-12B-0.5-v2.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Muse-Picaro-12B-0.5-v2-GGUF/resolve/main/Muse-Picaro-12B-0.5-v2.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Muse-Picaro-12B-0.5-v2-GGUF/resolve/main/Muse-Picaro-12B-0.5-v2.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Muse-Picaro-12B-0.5-v2-GGUF/resolve/main/Muse-Picaro-12B-0.5-v2.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Muse-Picaro-12B-0.5-v2-GGUF/resolve/main/Muse-Picaro-12B-0.5-v2.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
KGolden9/menu18_k10
|
KGolden9
| 2025-09-18T07:03:15Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-18T06:57:37Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mradermacher/Dolphin-Thinker-Preview-i1-GGUF
|
mradermacher
| 2025-09-18T07:00:06Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"reasoning",
"uncensored",
"en",
"base_model:Daemontatox/Dolphin-Thinker-Preview",
"base_model:quantized:Daemontatox/Dolphin-Thinker-Preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-18T05:11:54Z |
---
base_model: Daemontatox/Dolphin-Thinker-Preview
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- reasoning
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Daemontatox/Dolphin-Thinker-Preview
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Dolphin-Thinker-Preview-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
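As a sketch rather than an official recipe, recent `llama-cpp-python` builds can also pull a quant straight from the Hub; the filename below is the i1-Q4_K_S entry from the table.
```python
# Sketch: let llama-cpp-python fetch the imatrix Q4_K_S quant directly from the Hub.
# Requires llama-cpp-python with Hub support (and huggingface_hub installed).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Dolphin-Thinker-Preview-i1-GGUF",
    filename="Dolphin-Thinker-Preview.i1-Q4_K_S.gguf",
    n_ctx=8192,
    n_gpu_layers=-1,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Reason step by step: what is 17 * 24?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```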
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-Thinker-Preview-i1-GGUF/resolve/main/Dolphin-Thinker-Preview.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Revilo7/so101-trained-policy-v3
|
Revilo7
| 2025-09-18T06:59:42Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Revilo7/my-so101-dataset-v3-consistent",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-18T06:59:29Z |
---
datasets: Revilo7/my-so101-dataset-v3-consistent
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
rainerspittel/nzism-v3.9-expert
|
rainerspittel
| 2025-09-18T06:59:07Z | 0 | 0 | null |
[
"license:bsd-2-clause",
"region:us"
] | null | 2025-09-18T06:59:07Z |
---
license: bsd-2-clause
---
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758178681
|
schooncestiaa
| 2025-09-18T06:59:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-18T06:59:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DoubleWow/legal-qwen-7b-lora
|
DoubleWow
| 2025-09-18T06:55:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-18T06:54:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cjkasbdkjnlakb/agent-0918-xml-test
|
cjkasbdkjnlakb
| 2025-09-18T06:55:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen3",
"text-generation",
"axolotl",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"lora",
"transformers",
"conversational",
"dataset:custom",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T06:54:59Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen3-4B-Instruct-2507
tags:
- axolotl
- base_model:adapter:Qwen/Qwen3-4B-Instruct-2507
- lora
- transformers
datasets:
- custom
pipeline_tag: text-generation
model-index:
- name: checkpoints/0918-xml
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.12.2`
```yaml
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
# Whether to load the model in 8-bit precision
load_in_8bit: false
# Whether to load the model in 4-bit precision (tied to QLoRA, which requires it)
load_in_4bit: false
# Whether to strictly match the model structure; disabling it tolerates small structural differences (e.g. to fit an adapter)
# strict: false
base_model: Qwen/Qwen3-4B-Instruct-2507
# Dataset settings
chat_template: qwen3
datasets:
- path: /workspace/train_dir/tool_agent_train_data_xml_2000.json # "-" marks one list item, i.e. multiple datasets can be used at once
type: chat_template # chat_template (custom format) or alpaca
roles_to_train: ["assistant"]
field_messages: messages # the field that holds the messages
message_property_mappings: # message_property_mappings={'role':'role', 'content':'content'})
role: role
content: content
dataset_prepared_path:
val_set_size: 0.05
output_dir: checkpoints/0918-xml
sequence_len: 16384 # maximum context length the model can handle (default 2048)
pad_to_sequence_len: true
# context_parallel_size: 2 # split long sequences across multiple GPUs (requires micro_batch_size: 1)
sample_packing: false # pack multiple samples into one long sequence (sequence_len) during training to improve efficiency
eval_sample_packing: false # pack multiple samples during evaluation as well
# Training hyperparameters
adapter: lora # lora qlora
lora_model_dir:
lora_r: 16 # 16 is a solid default for lora_r, balancing accuracy against VRAM
lora_alpha: 64 # scaling factor controlling LoRA's influence, usually set to 2*r or 4*r
lora_dropout: 0.05
lora_target_linear: true
micro_batch_size: 4 # micro-batch size; a 94GB H100 can handle 4 (at ~10k tokens)
gradient_accumulation_steps: 8 # gradient accumulation: sum gradients over several micro-batches before updating weights; an effective batch of 16 is typical (below 8 training gets noisy, above 32 mostly adds time for little gain)
auto_find_batch_size: false # let Axolotl keep adjusting batch_size automatically; not applicable with ZeRO-3
num_epochs: 1
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 2e-5
# bf16: auto combined with tf32: true gives better stability and performance.
bf16: auto
tf32: true
# early_stopping_patience:
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
# auto_resume_from_checkpoints: true # automatically resume from the latest checkpoint in output_dir
logging_steps: 1
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
# deepspeed: /workspace/deepspeed_configs/zero2.json
# fsdp:
# - full_shard
# - auto_wrap
# fsdp_config:
# fsdp_limit_all_gathers: true
# fsdp_sync_module_states: true
# fsdp_offload_params: true
# fsdp_use_orig_params: false
# fsdp_cpu_ram_efficient_loading: true
# fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
# fsdp_transformer_layer_cls_to_wrap: Qwen3DecoderLayer
# fsdp_state_dict_type: FULL_STATE_DICT
# fsdp_sharding_strategy: FULL_SHARD
# special_tokens:
# wandb_project:
# wandb_entity:
# wandb_watch:
# wandb_name:
# wandb_log_model:
```
</details><br>
# checkpoints/0918-xml
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) on the /workspace/train_dir/tool_agent_train_data_xml_2000.json dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0809
- Memory/max Mem Active(gib): 134.02
- Memory/max Mem Allocated(gib): 134.02
- Memory/device Mem Reserved(gib): 137.25
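Because this repository stores a PEFT LoRA adapter rather than merged weights, loading it presumably follows the usual adapter-on-base pattern; the sketch below is not taken from the original card.
```python
# Sketch: attach the LoRA adapter to its Qwen3-4B-Instruct-2507 base model with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "cjkasbdkjnlakb/agent-0918-xml-test"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "List the tools you would call to look up today's weather."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=256)[0][inputs.shape[-1]:], skip_special_tokens=True))
```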
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mem Active(gib) | Mem Allocated(gib) | Mem Reserved(gib) |
|:-------------:|:------:|:----:|:---------------:|:---------------:|:------------------:|:-----------------:|
| No log | 0 | 0 | 0.6616 | 103.1 | 103.1 | 103.76 |
| 0.143 | 0.2526 | 15 | 0.1444 | 134.02 | 134.02 | 135.41 |
| 0.0869 | 0.5053 | 30 | 0.0897 | 134.02 | 134.02 | 137.25 |
| 0.1212 | 0.7579 | 45 | 0.0821 | 134.02 | 134.02 | 137.25 |
| 0.127 | 1.0 | 60 | 0.0809 | 134.02 | 134.02 | 137.25 |
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.6.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
allelbhagya/fine-tune-sentiment
|
allelbhagya
| 2025-09-18T06:55:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-18T06:10:02Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tune-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tune-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5936
- Accuracy: 0.8233
- F1: 0.8317
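Inference should go through the standard text-classification pipeline, as in the sketch below; the label names returned depend on the (undocumented) training configuration.
```python
# Minimal inference sketch for the fine-tuned DistilBERT sentiment classifier.
# Returned labels (e.g. LABEL_0 / LABEL_1) depend on the training setup, which this card does not document.
from transformers import pipeline

classifier = pipeline("text-classification", model="allelbhagya/fine-tune-sentiment")
print(classifier("The delivery was quick and the product works exactly as described."))
```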
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0
- Datasets 4.1.0
- Tokenizers 0.22.0
|